2302.04148
Decoherence as a high-dimensional geometrical phenomenon
We develop a mathematical formalism that allows one to study decoherence with a great level of generality, so as to make it appear as a geometrical phenomenon between reservoirs of dimensions. It enables us to give quantitative estimates of the level of decoherence induced by a purely random environment on a system according to their respective sizes, and to exhibit some links with entanglement entropy.
Antoine Soulas
2023-02-08T15:50:37Z
http://arxiv.org/abs/2302.04148v2
# Decoherence as a high-dimensional geometrical phenomenon

###### Abstract

We develop a mathematical formalism that allows one to study decoherence with a great level of generality, so as to make it appear as a geometrical phenomenon between reservoirs of dimensions. It enables us to give quantitative estimates of the level of decoherence induced by a purely random environment on a system according to their respective sizes, and to exhibit some links with entanglement entropy.

## Introduction

The theory of decoherence is arguably one of the greatest advances in fundamental physics of the past forty years. Without adding anything new to the quantum mechanical framework, and by assuming that the Schrödinger equation is universally valid, it explains why quantum interferences virtually disappear at macroscopic scales. Since the pioneering papers [13][14], a wide variety of models have been designed to understand decoherence in different specific contexts (see the review [16] or [3] and the numerous references therein). In this paper, we would like to embrace a more general point of view and understand why decoherence is so ubiquitous between quantum mechanical systems. We start by introducing general notations to present as concisely as possible the idea underlying the theory of decoherence (§1). We then build two simple but very general models to reveal the mathematical mechanisms that make decoherence so universal, thereby justifying why quantum interferences disappear due to the Schrödinger dynamics only (§2 and §3). The most important result is Theorem 2.3, proved in §2.3, giving estimates for the level of decoherence induced by a random environment on a system of given sizes. We conclude in §2.4 that even very small environments (of typical size at least \(N_{\mathcal{E}}=\ln(N_{\mathcal{S}})\) with \(N_{\mathcal{S}}\) the size of the system) suffice, under assumptions discussed in §2.5. We also give a general formula estimating the level of classicality of a quantum system in terms of the entropy of entanglement with its environment (§3.2, proved in the annex A).

## 1 The basics of decoherence

The theory of decoherence sheds light on the reason why quantum interferences disappear when a system gets entangled with a macroscopic one, for example an electron in a double-slit experiment that doesn't interfere anymore when entangled with a detector. According to Di Biagio and Rovelli [1], the deep difference between classical and quantum is the way probabilities behave: all classical phenomena satisfy the total probability formula \[\mathbb{P}(B=y)=\sum_{x\in\mathrm{Im}(A)}\mathbb{P}(A=x)\mathbb{P}(B=y\ |\ A=x)\] relying on the fact that, _even though the actual value of the variable \(A\) is not known, one can still assume that it has a definite value among the possible ones._ This, however, is not correct for quantum systems. It is well-known that the diagonal elements of the density matrix account for the classical behavior of a system (they correspond to the terms of the total probability formula) while the non-diagonal terms are the additional interference terms1.
Footnote 1: As a reminder, this is because the probability to obtain an outcome \(x\) is: \(\operatorname{tr}(\rho\left|x\right\rangle\left\langle x\right|)=\sum_{i,j=1}^{n}\rho_{ij}\left\langle j|x\right\rangle\left\langle x|i\right\rangle=\sum_{i=1}^{n}\rho_{ii}\left|\left\langle x|i\right\rangle\right|^{2}+\sum_{1\leqslant i<j\leqslant n}2\,\mathrm{Re}\big{(}\rho_{ij}\left\langle j|x\right\rangle\left\langle x|i\right\rangle\big{)}\).

Consider a system \(\mathcal{S}\), described by a Hilbert space \(\mathcal{H}_{\mathcal{S}}\) of dimension \(d\), that interacts with an environment \(\mathcal{E}\) described by a space \(\mathcal{H}_{\mathcal{E}}\) of dimension \(n\), and let \(\mathcal{B}=(\left|i\right\rangle)_{1\leqslant i\leqslant d}\) be an orthonormal basis of \(\mathcal{H}_{\mathcal{S}}\). In the sequel, we will say that each \(\left|i\right\rangle\) corresponds to a **possible history** of the system in this basis (this expression will be given its full meaning in an ongoing article dedicated to the measurement problem [11]). Let's also assume that \(\mathcal{B}\) is a **conserved basis** during the interaction with \(\mathcal{E}\) (in some contexts called a pointer basis). When \(\mathcal{E}\) is a measurement apparatus for the observable \(A\), the eigenbasis of \(\hat{A}\) is clearly a conserved basis; in general, any observable that commutes with the interaction Hamiltonian is suitable.

Inspired by [1], let's denote \(\left|\Psi\right\rangle=\left(\sum_{i=1}^{d}c_{i}\left|i\right\rangle\right)\otimes\left|\mathcal{E}_{0}\right\rangle\) the initial state of \(\mathcal{S}+\mathcal{E}\) before interaction. After a time \(t\), the total state evolves to \(\left|\Psi(t)\right\rangle=\sum_{i=1}^{d}c_{i}\left|i\right\rangle\otimes\left|\mathcal{E}_{i}(t)\right\rangle\). Let also \(\eta(t)=\max\limits_{i\neq j}\left|\left\langle\mathcal{E}_{i}(t)\middle|\mathcal{E}_{j}(t)\right\rangle\right|\). If \((\left|e_{k}\right\rangle)_{1\leqslant k\leqslant n}\) denotes an orthonormal basis of \(\mathcal{H}_{\mathcal{E}}\), the state of \(\mathcal{S}\), obtained by tracing out the environment, is:

\[\rho_{\mathcal{S}}(t) =\operatorname{tr}_{\mathcal{E}}\left|\Psi(t)\right\rangle\left\langle\Psi(t)\right|\]
\[=\sum_{k=1}^{n}\left(\sum_{i=1}^{d}\left|c_{i}\right|^{2}\left|\left\langle e_{k}\middle|\mathcal{E}_{i}(t)\right\rangle\right|^{2}\left|i\right\rangle\left\langle i\right|+\sum_{1\leqslant i\neq j\leqslant d}c_{i}\overline{c_{j}}\left\langle e_{k}\middle|\mathcal{E}_{i}(t)\right\rangle\left\langle\mathcal{E}_{j}(t)\middle|e_{k}\right\rangle\left|i\right\rangle\left\langle j\right|\right)\]
\[=\sum_{i=1}^{d}\left|c_{i}\right|^{2}\left|i\right\rangle\left\langle i\right|+\sum_{1\leqslant i\neq j\leqslant d}c_{i}\overline{c_{j}}\left\langle\mathcal{E}_{j}(t)\middle|\mathcal{E}_{i}(t)\right\rangle\left|i\right\rangle\left\langle j\right|\]
\[\equiv\rho_{\mathcal{S}}^{(d)}+\rho_{\mathcal{S}}^{(q)}(t)\]

where \(\rho_{\mathcal{S}}^{(d)}\) stands for the (time independent) diagonal part of \(\rho_{\mathcal{S}}(t)\) (which corresponds to the total probability formula), and \(\rho_{\mathcal{S}}^{(q)}(t)\) for the remaining non-diagonal terms responsible for the interferences between the possible histories. It is not difficult to show (see the Annex A) that \(\left\|\rho_{\mathcal{S}}^{(q)}(t)\right\|\leqslant\eta(t)\).
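As a quick numerical illustration of these formulas, the sketch below (assuming NumPy, with randomly drawn environment states standing in for the \(|\mathcal{E}_{i}(t)\rangle\)) builds \(\rho_{\mathcal{S}}(t)\) from the \(c_{i}\) and the overlaps \(\langle\mathcal{E}_{j}(t)|\mathcal{E}_{i}(t)\rangle\), and checks that the operator norm of its off-diagonal part stays below \(\eta(t)\):

```python
# A minimal sketch (not from the paper): reduced state rho_S and eta for random
# environment states |E_i> on the unit sphere of C^n.
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 200                               # illustrative dimensions of H_S and H_E

c = rng.normal(size=d) + 1j * rng.normal(size=d)
c /= np.linalg.norm(c)                      # coefficients c_i of the initial state

E = rng.normal(size=(d, n)) + 1j * rng.normal(size=(d, n))
E /= np.linalg.norm(E, axis=1, keepdims=True)   # stand-ins for the states |E_i(t)>

G = E.conj() @ E.T                          # Gram matrix, G[j, i] = <E_j | E_i>
rho_S = np.outer(c, c.conj()) * G.T         # rho_S[i, j] = c_i conj(c_j) <E_j | E_i>
rho_d = np.diag(np.abs(c) ** 2)             # diagonal (classical) part rho_S^(d)
rho_q = rho_S - rho_d                       # interference part rho_S^(q)

eta = np.max(np.abs(G - np.eye(d)))         # eta = max_{i != j} |<E_i | E_j>|
print(eta, np.linalg.norm(rho_q, ord=2))    # the operator norm of rho_q is <= eta
```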
Therefore \(\eta\) _measures how close the system is to being classical_ because, as shown in A, we have for all subspaces2 \(F\subset\mathcal{H}_{\mathcal{S}}\):

Footnote 2: Recall that, in the quantum formalism, probabilistic events correspond to subspaces.

\[\Big{|}\underbrace{\operatorname{tr}(\rho_{\mathcal{S}}(t)\Pi_{F})}_{\text{quantum probability}}-\underbrace{\operatorname{tr}(\rho_{\mathcal{S}}^{(d)}\Pi_{F})}_{\text{classical probability}}\Big{|}\leqslant\dim(F)\;\eta(t). \tag{1}\]

In other words, \(\eta(t)\) estimates how decohered the system is. Notice well that it is only during an interaction between \(\mathcal{S}\) and \(\mathcal{E}\) that decoherence can occur; any subsequent internal evolution \(U\) of \(\mathcal{E}\) leaves \(\eta\) unchanged since \(\left\langle U\mathcal{E}_{j}\middle|U\mathcal{E}_{i}\right\rangle=\left\langle\mathcal{E}_{j}\middle|\mathcal{E}_{i}\right\rangle\). The aim of the theory of decoherence is to explain why \(\eta(t)\) rapidly goes to zero when \(n\) is large, so that the state of the system almost immediately3 evolves from \(\rho_{\mathcal{S}}\) to \(\rho_{\mathcal{S}}^{(d)}\) in the conserved basis. As recalled in the introduction, a lot of different models already explain this phenomenon in specific contexts. In this paper, we shall build two (excessively) simple but quite universal models that highlight the fundamental reason why \(\eta(t)\to 0\) so quickly, and that will allow us to determine the typical size of an environment needed to entail proper decoherence on a system.

## 2 First model: purely random environment

When no particular assumption is made to specify the type of environment under study, the only reasonable behaviour to assume for \(|\mathcal{E}_{i}(t)\rangle\) is that of a Brownian motion on the sphere \(\mathbb{S}^{n}=\{|\Psi\rangle\in\mathcal{H}_{\mathcal{E}}\mid\|\Psi\|=1\}\subset\mathcal{H}_{\mathcal{E}}\simeq\mathbb{C}^{n}\simeq\mathbb{R}^{2n}\). It boils down to representing the environment as a purely random system with no preferred direction of evolution. This choice will be discussed in §2.5. Another bold assumption would be the independence of the \((|\mathcal{E}_{i}(t)\rangle)_{1\leqslant i\leqslant d}\); we will dare to make this assumption anyway.

### Convergence to the uniform measure

We will first show that the probabilistic law of each \(|\mathcal{E}_{i}(t)\rangle\) converges exponentially fast to the uniform probability measure on \(\mathbb{S}^{n}\). To make things precise, endow \(\mathbb{S}^{n}\) with its Borel \(\sigma\)-algebra \(\mathcal{B}\) and with the canonical Riemannian metric \(g\) which induces the uniform measure \(\mu\) that we suppose normalized to a probability measure. Denote \(\Delta f=\frac{1}{\sqrt{g}}\partial_{i}(\sqrt{g}g^{ij}\partial_{j}f)\) the Laplacian operator on \(\mathcal{C}^{\infty}(\mathbb{S}^{n})\) which can be extended to \(L^{2}(\mathbb{S}^{n})\), the completion of \(\mathcal{C}^{\infty}(\mathbb{S}^{n})\) for the scalar product \((f,h)=\int_{\mathbb{S}^{n}}f(x)h(x)\mathrm{d}\mu\). Finally, recall that the total variation norm of a measure defined on \(\mathcal{B}\) is given by \(\|\sigma\|_{TV}=\sup\limits_{B\in\mathcal{B}}|\sigma(B)|\).

**Proposition 2.1**.: _Let \(\nu_{t}\) be the law of the random variable \(|\mathcal{E}_{i}(t)\rangle\), that is \(\nu_{t}(B)=\mathbb{P}\big{(}|\mathcal{E}_{i}(t)\rangle\in B\big{)}\) for all \(B\in\mathcal{B}\).
Then \(\|\nu_{t}-\mu\|_{TV}\underset{t\to+\infty}{\longrightarrow}0\) exponentially fast._

Proof.: The overall idea is to decompose the density of the measure \(\nu_{t}\) in an eigenbasis of the Laplacian so that the Brownian motion (which is generated by \(\Delta\)) will exponentially kill all modes but the one associated with the eigenvalue \(0\), that is the constant one. It is recalled in [9] that the eigenvalues of \(\Delta\) take the form \(\lambda_{k}=-k(k+2n-2)\) for \(k\in\mathbb{N}\) with multiplicity \(d_{k}=\frac{(k+2n-3)!}{(2n-2)!k!}(2k+2n-2)\). Denote \((f_{k,l})_{\begin{subarray}{c}k\in\mathbb{N}\\ 1\leqslant l\leqslant d_{k}\end{subarray}}\) an orthonormal Hilbert basis in \(L^{2}(\mathbb{S}^{n})\) of eigenfunctions of \(\Delta\), where \(f_{k,l}\) is associated with the eigenvalue \(\lambda_{k}\). Note that \(d_{0}=1\) and that \(f_{0,1}\) is constant (as any harmonic function on a manifold without boundary, due to the maximum principle) so it is the density of a uniform measure.

The law \(\nu_{0}\) of the deterministic variable \(|\mathcal{E}_{i}(0)\rangle=|\mathcal{E}_{0}\rangle\) corresponds to a Dirac distribution, which is not strictly speaking in \(L^{2}(\mathbb{S}^{n})\), so we rather consider it as given by a sharply peaked density (with respect to \(\mu\)) denoted \(p_{0}\in L^{2}(\mathbb{S}^{n})\); the latter can be decomposed in the Hilbert basis \((f_{k,l})\) as \(p_{0}=\sum_{k=0}^{+\infty}\sum_{l=1}^{d_{k}}a_{k,l}f_{k,l}\). The fact that \(\|p_{0}\|_{L^{2}}^{2}=\sum_{k=0}^{+\infty}\sum_{l=1}^{d_{k}}|a_{k,l}|^{2}\) yields \(|a_{k,l}|\leqslant\|p_{0}\|_{L^{2}}\). Denote also \(p_{t}\) the density after a time \(t\), _i.e._ \(\nu_{t}(\mathrm{d}x)=p_{t}(x)\mu(\mathrm{d}x)\). The Hille-Yosida theory allows one to define the Brownian motion on the sphere as the Markov semigroup of stochastic kernels generated by \(\Delta\); in particular, this implies \(p_{t}=e^{t\Delta}p_{0}=\sum_{k=0}^{+\infty}e^{-k(k+2n-2)t}\sum_{l=1}^{d_{k}}a_{k,l}f_{k,l}\). Note that \(a_{0,1}f_{0,1}\) is a probability density because for all \(k\geqslant 1\), \(\int_{\mathbb{S}^{n}}f_{k,l}=\int_{\mathbb{S}^{n}}\frac{\Delta f_{k,l}}{\lambda_{k}}=0\) due to Stokes' theorem, thus \(\int_{\mathbb{S}^{n}}a_{0,1}f_{0,1}=\int_{\mathbb{S}^{n}}a_{0,1}f_{0,1}+\sum_{k=1}^{+\infty}\sum_{l=1}^{d_{k}}a_{k,l}\int_{\mathbb{S}^{n}}f_{k,l}=\int_{\mathbb{S}^{n}}p_{0}=1\). Hence \(a_{0,1}f_{0,1}=1\), and therefore:

\[\|\nu_{t}-\mu\|_{TV} =\frac{1}{2}\int_{\mathbb{S}^{n}}|p_{t}(x)-1|\mathrm{d}\mu\ \ \ \ \ \ \ \ \ \ \ \ (\text{classical result on the total variation norm})\]
\[\leqslant\frac{1}{2}\sum_{k=1}^{+\infty}e^{-k(k+2n-2)t}\sum_{l=1}^{d_{k}}|a_{k,l}|\underbrace{\|f_{k,l}\|_{L^{1}}}_{\leqslant 1\ (*)}\]
\[\leqslant\frac{1}{2}\|p_{0}\|_{L^{2}}\sum_{k=1}^{+\infty}e^{-k(k+2n-2)t}d_{k}\]

where \((*)\) relies on Hölder's inequality \(\|f_{k,l}\|_{L^{1}}\leqslant\mu(\mathbb{S}^{n})^{1/2}\|f_{k,l}\|_{L^{2}}=1\). It remains to find a characteristic time after which the above series is efficiently bounded.
We reproduce the argument of [9], setting \(u_{k}=e^{-k(k+2n-2)t}d_{k}\) so that \(u_{1}=2ne^{-(2n-1)t}\) and:

\[\frac{u_{k+1}}{u_{k}}=\frac{k+2n-2}{k+1}\frac{2k+2n}{2k+2n-2}e^{-(2k+2n-1)t}\]

If \(n\geqslant 2\) and \(t\geqslant t_{n}=\frac{\ln(2n-1)}{2n-1}\), then \(\frac{u_{k+1}}{u_{k}}\leqslant\frac{2n-1}{2}\frac{2n+2}{2n}\frac{1}{2n-1}\leqslant\frac{3}{4}\), which implies:

\[\|\nu_{t}-\mu\|_{TV}\leqslant\frac{1}{2}\|p_{0}\|_{L^{2}}u_{1}\sum_{k=1}^{+\infty}\left(\frac{3}{4}\right)^{k}\leqslant 3\|p_{0}\|_{L^{2}}ne^{-(2n-1)t}\]

Interestingly enough, the convergence is faster as \(n\) increases since the characteristic time to equilibrium satisfies \(t_{n}\underset{n\rightarrow\infty}{\longrightarrow}0\) and the exponential is sharper.

### Most vectors are almost orthogonal

Consequently, we are now interested in the behavior of the scalar products between random vectors uniformly distributed on the complex \(n\)-sphere \(\mathbb{S}^{n}\). The first thing to understand is that, _in high dimension, most pairs of unit vectors are almost orthogonal_.

**Proposition 2.2**.: _Denote by \(S=\langle\mathcal{E}_{1}|\mathcal{E}_{2}\rangle\in\mathbb{C}\) the random variable where \(|\mathcal{E}_{1}\rangle\) and \(|\mathcal{E}_{2}\rangle\) are two independent uniform random variables on \(\mathbb{S}^{n}\). Then \(\mathbb{E}(S)=0\) and \(\mathbb{V}(S)=\mathbb{E}(|S|^{2})=\frac{1}{n}\)._

Proof.: Clearly, \(|\mathcal{E}_{1}\rangle\) and \(-\,|\mathcal{E}_{1}\rangle\) have the same law, hence \(\mathbb{E}(S)=\mathbb{E}(-S)=0\). What about its variance? One can rotate the sphere to impose for example \(|\mathcal{E}_{1}\rangle=(1,0,\ldots,0)\), and by independence \(|\mathcal{E}_{2}\rangle\) still follows a uniform law. Such a uniform law can be achieved by generating \(2n\) independent normal random variables \((X_{i})_{1\leqslant i\leqslant 2n}\) following \(\mathcal{N}(0,1)\) and by considering the random vector \(|\mathcal{E}_{2}\rangle=\left(\frac{X_{1}+iX_{2}}{\sqrt{X_{1}^{2}+\cdots+X_{2n}^{2}}},\ldots,\frac{X_{2n-1}+iX_{2n}}{\sqrt{X_{1}^{2}+\cdots+X_{2n}^{2}}}\right)\). Indeed, for any continuous function \(f:\mathbb{S}^{n}\rightarrow\mathbb{R}\) (with \(\mathrm{d}\sigma^{n}\) denoting the measure induced by Lebesgue's on \(\mathbb{S}^{n}\)):

\[\mathbb{E}[f(|\mathcal{E}_{2}\rangle)] =\frac{1}{(2\pi)^{n}}\int_{\mathbb{R}^{2n}}f\left(\frac{x_{1}+ix_{2}}{\sqrt{x_{1}^{2}+\cdots+x_{2n}^{2}}},\ldots,\frac{x_{2n-1}+ix_{2n}}{\sqrt{x_{1}^{2}+\cdots+x_{2n}^{2}}}\right)e^{-(x_{1}^{2}+\cdots+x_{2n}^{2})/2}\mathrm{d}x_{1}\ldots\mathrm{d}x_{2n}\]
\[=\frac{1}{(2\pi)^{n}}\int_{0}^{\infty}\left[\int_{\mathbb{S}^{n}}f(u)\mathrm{d}\sigma^{n}(u)\right]e^{-\frac{r^{2}}{2}}r^{2n-1}\mathrm{d}r\]
\[=\omega_{n}\int_{\mathbb{S}^{n}}f(u)\mathrm{d}\sigma^{n}(u)\]

which means that \(|\mathcal{E}_{2}\rangle\) defined this way follows indeed the uniform law. In these notations, \(S=\frac{X_{1}+iX_{2}}{\sqrt{X_{1}^{2}+\cdots+X_{2n}^{2}}}\).
Note that, up to relabelling the variables, we have \(\forall k\in\llbracket 1,n\rrbracket,\ \mathbb{E}\left(\frac{X_{1}^{2}+X_{2}^{2}}{X_{1}^{2}+\cdots+X_{2n}^{2}}\right)=\mathbb{E}\left(\frac{X_{2k-1}^{2}+X_{2k}^{2}}{X_{1}^{2}+\cdots+X_{2n}^{2}}\right)\) and so:

\[\mathbb{V}(S)=\mathbb{E}\left(\frac{X_{1}^{2}+X_{2}^{2}}{X_{1}^{2}+\cdots+X_{2n}^{2}}\right)=\frac{1}{n}\sum_{k=1}^{n}\mathbb{E}\left(\frac{X_{2k-1}^{2}+X_{2k}^{2}}{X_{1}^{2}+\cdots+X_{2n}^{2}}\right)=\frac{1}{n}\mathbb{E}(1)=\frac{1}{n}.\]

Another way to recover this result would have been to define the unitary evolution operators \((U^{(i)}(t))_{1\leqslant i\leqslant d}\) such that \(|\mathcal{E}_{i}(t)\rangle=U^{(i)}(t)\,|\mathcal{E}_{0}\rangle\), resulting from the interaction Hamiltonian. Again, if no direction of evolution is preferred, it is reasonable to consider the law of each \(U^{(i)}(t)\) to be given by the Haar measure \(\mathrm{d}U\) on the unitary group \(\mathcal{U}_{n}\). If moreover they are independent, then \(U^{(i)}(t)^{\dagger}U^{(j)}(t)\) also follows the Haar measure for all \(i,j\) so that, using [12, (112)]:

\[\mathbb{V}\big{(}\,\langle\mathcal{E}_{i}(t)|\mathcal{E}_{j}(t)\rangle\,\big{)}=\int_{\mathcal{U}_{n}}\lvert\langle\mathcal{E}_{0}\rvert U\mathcal{E}_{0}\rangle\rvert^{2}dU=\prod_{i=2}^{n}\frac{i-1}{i}=\frac{1}{n}.\]

Therefore, \(|\langle\mathcal{E}_{i}(t)|\mathcal{E}_{j}(t)\rangle|\) is, after a very short time, of order \(\sqrt{\mathbb{V}(S)}=\frac{1}{\sqrt{\dim(\mathcal{H}_{\mathcal{E}})}}\). When \(d=2\), if \(\mathcal{E}\) is composed of \(N_{\mathcal{E}}\) particles and each of them is described by a \(p\)-dimensional Hilbert space, then very rapidly:

\[\eta\sim p^{-N_{\mathcal{E}}/2} \tag{2}\]

which is virtually zero for macroscopic environments, therefore decoherence is guaranteed. Of course, this is not true anymore if \(d\) is large, because there will be so many pairs that some of them will inevitably become non-negligible, and so will \(\eta\). We would like to determine a condition between \(n\) and \(d\) under which proper decoherence is to be expected. In other words, what is the minimal size of an environment needed to decohere a given system?

### Direct study of \(\eta\)

To answer this question, we should be more precise and consider directly the random variable \(\eta_{n,d}=\max_{i\neq j}\,|\langle\mathcal{E}_{i}|\mathcal{E}_{j}\rangle|\) where the \(\left(|\mathcal{E}_{i}\rangle\right)_{1\leqslant i\leqslant d}\) are \(d\) random vectors uniformly distributed on the complex \(n\)-sphere \(\mathbb{S}^{n}\). In the following, we fix \(\varepsilon\in\left]0,1\right[\) as well as a threshold \(s\in[0,1[\) close to \(1\), and define \(d^{\varepsilon,s}_{max}(n)=\min\{d\in\mathbb{N}\,|\,\,\mathbb{P}(\eta_{n,d}\geq\varepsilon)\geqslant s\}\), so that if \(d^{\varepsilon,s}_{max}(n)\) points or more are placed randomly on \(\mathbb{S}^{n}\), it is very likely (with probability \(\geqslant s\)) that at least one of the scalar products will be greater than \(\varepsilon\).

**Theorem 2.3**.: _The following estimates hold:_

1. \(d^{\varepsilon,s}_{max}(n)\sim\sqrt{-2\ln(1-s)}\left(\frac{1}{1-\varepsilon^{2}}\right)^{\frac{n-1}{2}}\)
2. \(\mathbb{V}(\eta_{n,d})\simeq 0\) _and_ \(\boxed{\eta_{n,d}\simeq\mathbb{E}(\eta_{n,d})\sim\sqrt{1-d^{-\frac{2}{n}}}}\)

To derive these formulas, we first need the following geometrical lemma.
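Before the lemma and the proofs, a quick Monte Carlo sanity check of Proposition 2.2 and of the estimate \(\eta_{n,d}\simeq\sqrt{1-d^{-2/n}}\) can be sketched as follows (assuming NumPy; the values of \(n\), \(d\) and the number of trials are purely illustrative):

```python
# Monte Carlo sanity check (a sketch, not from the paper) of Proposition 2.2 and
# of the estimate eta_{n,d} ~ sqrt(1 - d^(-2/n)) from Theorem 2.3.
import numpy as np

rng = np.random.default_rng(1)

def random_unit_vectors(num, n):
    """num uniform points on the unit sphere of C^n, via 2n real Gaussians."""
    v = rng.normal(size=(num, n)) + 1j * rng.normal(size=(num, n))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n, d, trials = 50, 200, 200

# Proposition 2.2: the overlap S = <E_1|E_2> has mean 0 and E|S|^2 = 1/n
pairs = np.array([random_unit_vectors(2, n) for _ in range(trials)])
S = np.einsum('tk,tk->t', pairs[:, 0].conj(), pairs[:, 1])
print(np.mean(S), np.mean(np.abs(S) ** 2), 1 / n)

# Theorem 2.3: eta_{n,d} concentrates around sqrt(1 - d^(-2/n))
etas = []
for _ in range(trials):
    E = random_unit_vectors(d, n)
    overlaps = np.abs(E.conj() @ E.T)
    np.fill_diagonal(overlaps, 0.0)
    etas.append(overlaps.max())
print(np.mean(etas), np.std(etas), np.sqrt(1 - d ** (-2 / n)))
```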
**Lemma 2.4**.: _Let \(A_{n}=|\mathbb{S}^{n}|\) be the area of the complex \(n\)-sphere for \(\mathrm{d}\sigma^{n}\) (induced by Lebesgue's measure), \(C^{\varepsilon}_{n}(x)=\{u\in\mathbb{S}^{n}\,|\,|\langle u|x\rangle|\geqslant\varepsilon\}\) the 'spherical cap'4 centered in \(x\) of parameter \(\varepsilon\), and \(A^{\varepsilon}_{n}=|C^{\varepsilon}_{n}|\) the area of any spherical cap of parameter \(\varepsilon\). Then for all \(n\geqslant 1\):_

Footnote 4: We use the quotation marks because, on \(\mathbb{S}^{n}\) equipped with its complex scalar product, this set doesn't look like a cap as it does in the real case. QM is nothing but a strange way of calculating probabilities (in which all possible histories interfere) based on a geometrical structure, but the geometry in use is also quite different from the intuitive one given by the familiar real scalar product...

\[\frac{A^{\varepsilon}_{n}}{A_{n}}=(1-\varepsilon^{2})^{n-1}\]

Proof of Lemma.: Recall that \(\mathbb{S}^{n}\subset\mathbb{C}^{n}\simeq\mathbb{R}^{2n}\) can be seen as a real manifold of dimension \(2n-1\). Consider the set of coordinates \((r,\theta,\varphi_{1},\ldots,\varphi_{2n-3})\) on \(\mathbb{S}^{n}\) defined by the chart

\[\begin{array}{rll}F:&[0,1]\times[0,2\pi[\times[0,\pi]^{2n-4}\times[0,2\pi[&\longrightarrow\ \mathbb{S}^{n}\\ &(r,\theta,\varphi_{1},\ldots,\varphi_{2n-3})&\longmapsto\ (x_{1}+ix_{2},\ldots,x_{2n-1}+ix_{2n})\simeq(x_{1},\ldots,x_{2n})=\\ &&(r\cos(\theta),\,r\sin(\theta),\,\sqrt{1-r^{2}}\cos(\varphi_{1}),\,\sqrt{1-r^{2}}\sin(\varphi_{1})\cos(\varphi_{2}),\ldots,\\ &&\ \sqrt{1-r^{2}}\sin(\varphi_{1})\cdots\cos(\varphi_{2n-3}),\,\sqrt{1-r^{2}}\sin(\varphi_{1})\cdots\sin(\varphi_{2n-3}))\end{array}\]

This amounts to choosing the modulus \(r\) and the argument \(\theta\) of \(x_{1}+ix_{2}\), and then describing the remaining parameters using the standard spherical coordinates on \(\mathbb{S}^{n-1}\), seen as a sphere of real dimension \(2n-3\), including a radius factor \(\sqrt{1-r^{2}}\). The advantage of these coordinates is that \(C^{\varepsilon}_{n}(1,0,\ldots,0)\) simply corresponds to the set of points for which \(r\geqslant\varepsilon\). Let's determine the metric \(g\).

* \(\vec{e_{r}}=\partial_{r}F(r,\theta,\varphi_{1},\ldots,\varphi_{2n-3})=(\cos(\theta),\sin(\theta),\frac{-r}{\sqrt{1-r^{2}}}[\vec{u}])\), where \([\vec{u}]\) stands for the standard expression of the spherical coordinates on \(\mathbb{S}^{n-1}\)
* \(\vec{e_{\theta}}=\partial_{\theta}F(r,\theta,\varphi_{1},\ldots,\varphi_{2n-3})=(-r\sin(\theta),r\cos(\theta),0,\ldots,0)\)
* \(\vec{e_{\varphi_{i}}}=\partial_{\varphi_{i}}F(r,\theta,\varphi_{1},\ldots,\varphi_{2n-3})=(0,0,\sqrt{1-r^{2}}[\vec{e_{\varphi_{i}}}])\) where \([\vec{e_{\varphi_{i}}}]\) stands for the corresponding tangent vector on \(\mathbb{S}^{n-1}\)

Obviously, \(\vec{e_{r}}\perp\vec{e_{\theta}}\) and \(\vec{e_{\varphi_{i}}}\perp\vec{e_{\theta}}\), as well as \(\vec{e_{\varphi_{i}}}\perp\vec{e_{\varphi_{j}}}\) for \(i\neq j\) as is the case in the standard spherical coordinates. Moreover, since \([\vec{u}]\) is radial and \([\vec{e_{\varphi_{i}}}]\) tangent to \(\mathbb{S}^{n-1}\), we also have \(\vec{e_{r}}\perp\vec{e_{\varphi_{i}}}\), therefore \(g\) is diagonal in these coordinates.
Its components are given by:

* \(g_{rr}=\langle\vec{e_{r}}|\vec{e_{r}}\rangle=1+\frac{r^{2}}{1-r^{2}}\)
* \(g_{\theta\theta}=\langle\vec{e_{\theta}}|\vec{e_{\theta}}\rangle=r^{2}\)
* \(g_{\varphi_{i}\varphi_{i}}=(1-r^{2})[g_{\varphi_{i}\varphi_{i}}]\) with \([g]\) the metric corresponding to the spherical coordinates on \(\mathbb{S}^{n-1}\)

It is now easy to compute the desired quantity:

\[A_{n}^{\varepsilon} =\int_{\varepsilon}^{1}\sqrt{1+\frac{r^{2}}{1-r^{2}}}\,\mathrm{d}r\int_{[0,2\pi[\times[0,\pi]^{2n-4}\times[0,2\pi[}r\,\mathrm{d}\theta\,\sqrt{1-r^{2}}^{\,2n-3}\sqrt{[g]}\,\mathrm{d}\varphi_{1}\ldots\mathrm{d}\varphi_{2n-3}\]
\[=2\pi A_{n-1}\int_{\varepsilon}^{1}r(1-r^{2})^{n-2}\mathrm{d}r\]
\[=\frac{\pi A_{n-1}}{n-1}\int_{\varepsilon}^{1}2(n-1)r(1-r^{2})^{n-2}\mathrm{d}r\]
\[=\frac{\pi A_{n-1}}{n-1}(1-\varepsilon^{2})^{n-1}\]

and, finally,

\[\frac{A_{n}^{\varepsilon}}{A_{n}}=\frac{A_{n}^{\varepsilon}}{A_{n}^{0}}=(1-\varepsilon^{2})^{n-1}.\]

We are now ready to prove the theorem.

Proof of Theorem 2.3.: For this proof, we find some inspiration in [6], but eventually find sharper bounds with simpler arguments. We say that a set of vectors on a sphere is \(\varepsilon\)-separated if the scalar product between any pair among them is not greater than \(\varepsilon\) in modulus. Consider the following events:

* \(A:\forall k\in[\![1,d-1]\!],|\langle\mathcal{E}_{d}|\mathcal{E}_{k}\rangle|\leqslant\varepsilon\)
* \(B:(|\mathcal{E}_{k}\rangle)_{1\leqslant k\leqslant d-1}\) are \(\varepsilon\)-separated

and write \(\mathbb{P}(\eta_{n,d}\leqslant\varepsilon)=\mathbb{P}(A\mid B)\mathbb{P}(B)=\frac{\mathbb{P}(A\cap B)}{\mathbb{P}(B)}\mathbb{P}(\eta_{n,d-1}\leqslant\varepsilon)\), with:

\[\frac{\mathbb{P}(A\cap B)}{\mathbb{P}(B)} =\frac{\frac{1}{A_{n}^{d}}\int_{(\mathbb{S}^{n})^{d-1}}\mathrm{d}\sigma^{n}(x_{1})\ldots\mathrm{d}\sigma^{n}(x_{d-1})\mathbb{1}_{\{x_{1},\ldots,x_{d-1}\text{ are $\varepsilon$-separated}\}}\left(A_{n}-\left|\bigcup_{k=1}^{d-1}C_{n}^{\varepsilon}(x_{k})\right|\right)}{\frac{1}{A_{n}^{d-1}}\int_{(\mathbb{S}^{n})^{d-1}}\mathrm{d}\sigma^{n}(x_{1})\ldots\mathrm{d}\sigma^{n}(x_{d-1})\mathbb{1}_{\{x_{1},\ldots,x_{d-1}\text{ are $\varepsilon$-separated}\}}}\]
\[=1-\mathbb{E}\left(\frac{\left|\bigcup_{k=1}^{d-1}C_{n}^{\varepsilon}(|\mathcal{E}_{k}\rangle)\right|}{A_{n}}\middle|B\right)\]

We need to find bounds on the latter quantity. Obviously, \(\mathbb{E}\left(\frac{\left|\bigcup_{k=1}^{d-1}C_{n}^{\varepsilon}(|\mathcal{E}_{k}\rangle)\right|}{A_{n}}\middle|B\right)\leqslant(d-1)\frac{A_{n}^{\varepsilon}}{A_{n}}\), corresponding to the case when all the caps are disjoint. For the lower bound, define the sequence \(u_{d}=\mathbb{E}\left(\frac{\left|\bigcup_{k=1}^{d}C_{n}^{\varepsilon}(|\mathcal{E}_{k}\rangle)\right|}{A_{n}}\right)\), which clearly satisfies \(u_{d}\leqslant\mathbb{E}\left(\frac{\left|\bigcup_{k=1}^{d}C_{n}^{\varepsilon}(|\mathcal{E}_{k}\rangle)\right|}{A_{n}}\middle|B\right)\), because conditioning on the vectors being separated can only decrease the overlap between the different caps.
First observe that \(u_{1}=\frac{A_{n}^{\varepsilon}}{A_{n}}\equiv\alpha\), and compute:

\[u_{d} =u_{d-1}+\frac{1}{A_{n}}\mathbb{E}\left(\left|C_{n}^{\varepsilon}(|\mathcal{E}_{d}\rangle)\setminus\bigcup_{k=1}^{d-1}C_{n}^{\varepsilon}(|\mathcal{E}_{k}\rangle)\right|\right)\]
\[=u_{d-1}+\frac{1}{A_{n}}\frac{1}{A_{n}^{d}}\int_{(\mathbb{S}^{n})^{d}}\mathrm{d}\sigma^{n}(x_{1})\dots\mathrm{d}\sigma^{n}(x_{d})\int_{C_{n}^{\varepsilon}(x_{d})}\mathbb{1}_{\{y\not\in\bigcup_{k=1}^{d-1}C_{n}^{\varepsilon}(x_{k})\}}\mathrm{d}\sigma^{n}(y)\]
\[=u_{d-1}+\frac{1}{A_{n}}\frac{1}{A_{n}^{d-1}}\int_{(\mathbb{S}^{n})^{d-1}}\mathrm{d}\sigma^{n}(x_{1})\dots\mathrm{d}\sigma^{n}(x_{d-1})\int_{\mathbb{S}^{n}}|C_{n}^{\varepsilon}(y)|\,\mathbb{1}_{\{y\not\in\bigcup_{k=1}^{d-1}C_{n}^{\varepsilon}(x_{k})\}}\frac{\mathrm{d}\sigma^{n}(y)}{A_{n}}\]
\[=u_{d-1}+\frac{A_{n}^{\varepsilon}}{A_{n}}\frac{1}{A_{n}^{d-1}}\int_{(\mathbb{S}^{n})^{d-1}}\mathrm{d}\sigma^{n}(x_{1})\dots\mathrm{d}\sigma^{n}(x_{d-1})\left(1-\frac{\left|\bigcup_{k=1}^{d-1}C_{n}^{\varepsilon}(x_{k})\right|}{A_{n}}\right)\]
\[=u_{d-1}+\frac{A_{n}^{\varepsilon}}{A_{n}}(1-u_{d-1})\]
\[=(1-\alpha)u_{d-1}+\alpha\]

where the main trick was to invert the integrals on \(x_{d}\) and on \(y\). This result is actually quite intuitive: it states that when adding a new cap, only a fraction \(1-u_{d-1}\) of it on average will be outside the previous caps and contribute to the new total area covered by the caps. Hence \(u_{d}=1-(1-\alpha)^{d}\), and the recurrence relation becomes:

\[\left(1-(d-1)\frac{A_{n}^{\varepsilon}}{A_{n}}\right)\mathbb{P}(\eta_{n,d-1}\leqslant\varepsilon)\leqslant\mathbb{P}(\eta_{n,d}\leqslant\varepsilon)\leqslant\left(1-\frac{A_{n}^{\varepsilon}}{A_{n}}\right)^{d-1}\mathbb{P}(\eta_{n,d-1}\leqslant\varepsilon).\]

Applying the lemma, we get by induction:

\[\prod_{k=1}^{d-1}(1-k(1-\varepsilon^{2})^{n-1})\leqslant\mathbb{P}(\eta_{n,d}\leqslant\varepsilon)\leqslant(1-(1-\varepsilon^{2})^{n-1})^{\frac{d(d-1)}{2}}\]

Note that the left inequality is valid only as long as \(d\leqslant\left(\frac{1}{1-\varepsilon^{2}}\right)^{n-1}\), but when \(d\) is larger than this critical value, the right hand side becomes very small (of order \(e^{-\frac{1}{2}\left(\frac{1}{1-\varepsilon^{2}}\right)^{n-1}}\)), so taking \(0\) as a lower bound in this case is actually very precise. The two bounds are in fact extremely close to each other, and get closer as \(n\) grows larger, as exemplified by numerical simulations (a precise argument could be given but it would be quite lengthy and would not bring any additional understanding to the problem). Consequently, we can make the following approximation for the distribution function of \(\eta_{n,d}\): \(\mathbb{P}(\eta_{n,d}\leqslant\varepsilon)\simeq(1-(1-\varepsilon^{2})^{n-1})^{\frac{d(d-1)}{2}}\). It is then straightforward to obtain the estimate \(d_{max}^{\varepsilon,s}(n)\simeq\sqrt{-2\ln(1-s)}\left(\frac{1}{1-\varepsilon^{2}}\right)^{\frac{n-1}{2}}\), which is the first part of the theorem. The function \(\varepsilon\mapsto(1-(1-\varepsilon^{2})^{n-1})^{\frac{d(d-1)}{2}}\) happens to be almost constant, equal to \(0\) in the vicinity of \(\varepsilon=0\) and to \(1\) in the vicinity of \(\varepsilon=1\), with a very sharp step between the two; this step sharpens as \(n\) and \(d\) grow larger.
This explains why the mass of probability is highly peaked around a critical value \(\varepsilon_{c}\), so that \(\mathbb{V}(\eta_{n,d})\simeq 0\) and \(\eta_{n,d}\simeq\mathbb{E}(\eta_{n,d})\) is almost a deterministic variable. This is certainly due to the averaging effect of considering the maximum of a set of \(\frac{d(d-1)}{2}\) scalar products. The critical value \(\varepsilon_{c}\) satisfies:

\[(1-(1-\varepsilon_{c}^{2})^{n-1})^{\frac{d(d-1)}{2}}=\frac{1}{2}\Leftrightarrow\varepsilon_{c}=\sqrt{1-\big{(}1-2^{-2/(d(d-1))}\big{)}^{1/(n-1)}}\underset{n,d\to\infty}{\sim}\sqrt{1-d^{-2/n}}\]

It simply remains to use the well-known formula \(\mathbb{E}(\eta_{n,d})=\int_{0}^{1}\mathbb{P}(\eta_{n,d}>\varepsilon)\mathrm{d}\varepsilon\), which gives \(\mathbb{E}(\eta_{n,d})\simeq\varepsilon_{c}\simeq\sqrt{1-d^{-2/n}}\) and completes the proof.

### Comparison with simulation and consequences

The above expressions actually give remarkably good estimates for \(d_{max}^{\varepsilon,s}(n)\) and \(\eta_{n,d}\), as shown in Figures 1 and 2. This theorem has a strong physical consequence. Indeed, \(\mathcal{E}\) induces proper decoherence on \(\mathcal{S}\) as long as \(\eta_{n,d}\ll 1\), that is when \(d^{-2/n}\) is very close to \(1\), _i.e._ when \(d\ll e^{n/2}\). Going back to physically meaningful quantities, we write as previously \(n=p^{N_{\mathcal{E}}}\) and \(d=p^{N_{\mathcal{S}}}\) where \(N_{\mathcal{E}}\) and \(N_{\mathcal{S}}\) stand for the number of particles composing \(\mathcal{E}\) and \(\mathcal{S}\). The condition becomes \(2\ln(p)N_{\mathcal{S}}\ll p^{N_{\mathcal{E}}}\) or simply:

\[\boxed{\frac{\ln(N_{\mathcal{S}})}{\ln(p)}\ll N_{\mathcal{E}}}\]

A more precise condition can be obtained using \(d_{max}\), because \(\mathcal{E}\) induces proper decoherence on \(\mathcal{S}\) as long as \(d\leqslant d_{max}^{\varepsilon,s}(n)\) for an arbitrary choice of \(\varepsilon\) close to \(0\) and \(s\) close to \(1\). This can be rewritten as \(2\ln(p)N_{\mathcal{S}}\leqslant\ln(\sqrt{-2\ln(1-s)})+\ln\left(\frac{1}{1-\varepsilon^{2}}\right)p^{N_{\mathcal{E}}}\simeq\varepsilon^{2}p^{N_{\mathcal{E}}}\) or simply: \(\ln(N_{\mathcal{S}})\leqslant 2\ln(\varepsilon)+\ln(p)N_{\mathcal{E}}\). Thus, for instance, a gas composed of thousands of particles will lose most of its coherence if it interacts with only a few external particles. It is rather surprising that so many points can be placed randomly on an \(n\)-sphere before the maximum of the scalar products becomes non-negligible. _It is this property that makes decoherence an extremely efficient high-dimensional geometrical phenomenon_.

### Discussing the hypotheses

On the one hand, this result could be seen as a worst case scenario for decoherence, since realistic Hamiltonians are far from random and actually discriminate the different possible histories even better. This is especially true if \(\mathcal{E}\) is a measurement apparatus for example, whose Hamiltonian is by construction such that the \((|\mathcal{E}_{i}(t)\rangle)_{1\leqslant i\leqslant d}\) evolve quickly and deterministically towards orthogonal points of the sphere. On the other hand, pursuing such a high level of generality led us to abstract and unphysical assumptions. First, realistic dynamics are not isotropic on the \(n\)-sphere (some transitions are more probable than others). Then, the assumption that each \(|{\cal E}_{i}(t)\rangle\) can explore indistinctly all the states of \({\cal H}_{\cal E}\) is quite questionable.
As explained in [8]: '...the set of quantum states that can be reached from a product state with a polynomial-time evolution of an arbitrary time-dependent quantum Hamiltonian is an exponentially small fraction of the Hilbert space. This means that the vast majority of quantum states in a many-body system are unphysical, as they cannot be reached in any reasonable time. As a consequence, all physical states live on a tiny submanifold.' It would then be more accurate in our model to replace \(\mathbb{S}^{n}\) by this submanifold. But how does it look geometrically, and what is its dimension? If it were a subsphere of \(\mathbb{S}^{n}\) of exponentially smaller dimension, then \(n\) should be replaced everywhere by something like \(\ln(n)\) in what precedes, so the condition would rather be \(N_{S}\ll N_{\cal E}\), which is a completely different conclusion. Some clues to better grasp the submanifold are found in [7, §3.4]: '...one can prove that low-energy eigenstates of gapped Hamiltonians with local interactions obey the so-called area-law for the entanglement entropy. This means that the entanglement entropy of a region of space tends to scale, for large enough regions, as the size of the boundary of the region and not as the volume. (...) In other words, low-energy states of realistic Hamiltonians are not just "any" state in the Hilbert space: they are heavily constrained by locality so that they must obey the entanglement area-law.' More work is needed in order to draw precise conclusions taking these physical remarks into account...

## 3 Second model: interacting particles

### The environment feels the system

Let's now specify the nature of the environment more precisely. Suppose that the energy of interaction dominates the evolution of the whole system \({\cal S}+{\cal E}\) and can be expressed in terms of the positions \(x_{1},\ldots,x_{N}\) of the \(N\) particles composing the environment, together with the state of \({\cal S}\) (this is the typical regime for macroscopic systems which decohere in the position basis [10, §III.E.2.]). If the latter is \(\left|i\right\rangle\), denote \(H(i,x_{1}\ldots x_{N})\) this energy. The initial state \(\left|\Psi\right\rangle=\left(\sum_{i=1}^{d}c_{i}\left|i\right\rangle\right)\otimes\int f(x_{1}\ldots x_{N})\left|x_{1}\ldots x_{N}\right\rangle{\rm d}x_{1}\ldots{\rm d}x_{N}\) evolves into:

\[\sum_{i=1}^{d}c_{i}\left|i\right\rangle\otimes\underbrace{\int f(x_{1}\ldots x_{N})e^{\frac{i}{\hbar}H(i,x_{1}\ldots x_{N})t}\left|x_{1}\ldots x_{N}\right\rangle}_{=\left|{\cal E}_{i}(t)\right\rangle}{\rm d}x_{1}\ldots{\rm d}x_{N}.\]

Therefore:

\[\langle{\cal E}_{i}(t)|{\cal E}_{j}(t)\rangle=\int\!\left|f(x_{1}\ldots x_{N})\right|^{2}e^{\frac{i}{\hbar}\Delta(i,j,x_{1}\ldots x_{N})t}{\rm d}x_{1}\ldots{\rm d}x_{N}\]

where \(\Delta(i,j,x_{1}\ldots x_{N})=H(j,x_{1}\ldots x_{N})-H(i,x_{1}\ldots x_{N})\) is a spectral gap between eigenvalues of the Hamiltonian, measuring how much the environment feels the transition of \({\cal S}\) from \(\left|i\right\rangle\) to \(\left|j\right\rangle\) in a given configuration of the environment.
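As a toy numerical illustration of this overlap formula, one can discretise the environment configurations and draw the energy gaps at random (a minimal sketch, assuming NumPy; the Gaussian gaps and the uniform weights \(|f|^{2}=p^{-N}\) are illustrative assumptions rather than a specific Hamiltonian):

```python
# Toy discretisation (a sketch, not from the paper) of
# <E_i(t)|E_j(t)> = sum_x |f(x)|^2 exp(i * Delta(x) * t / hbar).
import numpy as np

rng = np.random.default_rng(2)
p, N = 4, 8                                  # p values per particle, N environment particles
n_conf = p ** N                              # number of environment configurations
hbar = 1.0

weights = np.full(n_conf, 1.0 / n_conf)      # delocalised initial state: constant f
delta = rng.normal(size=n_conf)              # illustrative random energy gaps Delta

def overlap(t):
    return np.sum(weights * np.exp(1j * delta * t / hbar))

for t in (0.0, 5.0, 50.0):
    print(t, abs(overlap(t)))                # decays from 1 towards roughly p^(-N/2)
print(p ** (-N / 2))
```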
In a time interval \([-T,T]\), the mean value \(\frac{1}{2T}\int_{-T}^{T}\left\langle{\cal E}_{i}(t)|{\cal E}_{j}(t)\right\rangle{\rm d}t\) yields \(\int\!\left|f(x_{1}\ldots x_{N})\right|^{2}{\rm sinc}(\frac{\Delta(i,j,x_{1}\ldots x_{N})}{\hbar}T)\,{\rm d}x_{1}\ldots{\rm d}x_{N}\), which is close to zero for all \(i\) and \(j\) as soon as \(T>\frac{\pi\hbar}{\min_{i,j,x_{1}\ldots x_{N}}\Delta(i,j,x_{1}\ldots x_{N})}\), which is likely to be small if \({\cal E}\) is a macroscopic system, for the energies involved will be much greater than \(\hbar\). Similarly, the empirical variance is:

\[\mathbb{V}=\frac{1}{2T}\int_{-T}^{T}\!|\langle{\cal E}_{i}(t)|{\cal E}_{j}(t)\rangle|^{2}{\rm d}t\sim\int\!\left|f(x_{1}\ldots x_{N})\right|^{4}{\rm d}x_{1}\ldots{\rm d}x_{N}\]

plus terms that go to zero after a short time. Note that the variables \(x_{1}\ldots x_{N}\) could be discretized to take \(p\) possible values, in which case \(n={\rm dim}({\cal H}_{\cal E})=p^{N}\), and the integral becomes a finite sum. For a delocalized initial state with constant \(f\), this sum is equal to \(p^{-N}\), and we recover the previous estimate (2) if \(d=2\): \(\eta\sim p^{-N/2}\). This model teaches us that _the more the environment feels the difference between the possible histories, the more they decohere_.

### Entanglement entropy as a measure of decoherence

What precedes suggests the following intuition: the smaller \(\eta\) is, the more information the environment has stored about the system, because the more distinguishable (_i.e._ orthogonal) the \((|\mathcal{E}_{i}(t)\rangle)_{1\leqslant i\leqslant d}\) are; on the other hand, the smaller \(\eta\) is, the fewer quantum interferences occur. It motivates the search for a general relationship between entanglement entropy5 (how much \(\mathcal{E}\) knows about \(\mathcal{S}\)) and the level of classicality of a system. Such results have already been derived for specific environments [3, (3.76)][4][5] but not, to our knowledge, in the general case. The following formula is proved in the annex A when \(S\) stands for the linear entropy (or purity defect) \(1-\mathrm{tr}(\rho^{2})\), and some justifications are given when \(S\) denotes the entanglement entropy:

Footnote 5: Recall that for a bipartite quantum system, the entanglement entropy is defined as the von Neumann entropy of the reduced density matrix for any of the subsystems.

\[\forall F\subset\mathcal{H}_{\mathcal{S}},\quad|\mathrm{tr}(\rho_{\mathcal{S}}(t)\Pi_{F})-\mathrm{tr}(\rho_{\mathcal{S}}^{(d)}\Pi_{F})|\leqslant\dim(F)\ \sqrt{1-\inf_{|\Psi_{\mathcal{S}}(0)\rangle}\frac{S(\rho_{\mathcal{S}}(t))}{S(\rho_{\mathcal{S}}^{(d)})}}. \tag{3}\]

## Conclusion

We introduced general mathematical notations that can be relevant for any study on decoherence, in particular the parameter \(\eta(t)\) that quantifies the level of decoherence at a given instant. Two simple models were then presented, designed to give a more intuitive feel for the general process of decoherence. Most importantly, our study revealed the mathematical reason why it is so fast and universal, namely that surprisingly many points can be placed randomly on an \(n\)-sphere before the maximum of the scalar products becomes non-negligible.
We also learned that decoherence is neither perfect nor everlasting, since \(\eta\) is not expected to be exactly \(0\) and will eventually become large again (according to Borel-Cantelli's lemma for the first model, and by finding particular times at which all the exponentials are almost real in the second), pretty much like the ink drop in the glass of water will re-form again due to Poincaré's recurrence theorem, even though the recurrence time can easily exceed the lifetime of the universe for realistic systems [15]. Finally, decoherence can be estimated by entanglement entropy because \(\eta\) is linked to what the environment knows about the system.

Further work could include the search for a description of the submanifold of reachable states mentioned in §2.5, and the study of the infinite dimensional case, where the very definition of \(\eta\) is not obvious anymore (since the scalar products vary continuously, their supremum is necessarily \(1\)). Another interesting question would be to investigate how \(\eta\) depends on the basis in which decoherence is considered: quantum interferences are indeed suppressed in the conserved basis, but how strong are they in the other bases?

## Acknowledgements

I would like to gratefully thank my PhD supervisor Dimitri Petritis for the great freedom he grants me in my research, while being nevertheless always present to guide me. I also thank my friends Dmitry Chernyak and Matthieu Dolbeault for illuminating discussions.

## Appendix A Annex: decoherence estimated by the entanglement entropy with the environment

We establish here the formula (3): we first derive the inequality (1), and then look for a relation between \(\eta\) and the linear entropy or the entanglement entropy. Inserting the second into the first directly yields (3).

### Relation between \(\eta\) and the level of classicality

Let's keep the notations of §1, where we defined \(\rho_{\mathcal{S}}^{(q)}(t)=\sum_{i\neq j}c_{i}\overline{c_{j}}\left\langle\mathcal{E}_{j}(t)|\mathcal{E}_{i}(t)\right\rangle\left|i\right\rangle\left\langle j\right|\). We have \(\left\|\rho_{\mathcal{S}}^{(q)}(t)\right\|\leqslant\eta(t)\) because, for all vectors \(\left|\Psi\right\rangle=\sum_{k}\alpha_{k}\left|k\right\rangle\in\mathcal{H}_{S}\) of norm \(1\),

\[\rho_{\mathcal{S}}^{(q)}(t)\left|\Psi\right\rangle=\sum_{1\leqslant i\neq j\leqslant d}c_{i}\overline{c_{j}}\left\langle\mathcal{E}_{j}(t)|\mathcal{E}_{i}(t)\right\rangle\alpha_{j}\left|i\right\rangle\]

\[\Rightarrow\quad\left\|\rho_{\mathcal{S}}^{(q)}(t)\left|\Psi\right\rangle\right\|^{2}=\sum_{i=1}^{d}|c_{i}|^{2}\Big{|}\sum_{j\neq i}\overline{c_{j}}\left\langle\mathcal{E}_{j}(t)|\mathcal{E}_{i}(t)\right\rangle\alpha_{j}\Big{|}^{2}\leqslant\eta(t)^{2}\sum_{i=1}^{d}|c_{i}|^{2}\Big{(}\sum_{j}|c_{j}||\alpha_{j}|\Big{)}^{2}\leqslant\eta(t)^{2},\]

where the last step uses the Cauchy-Schwarz inequality \(\sum_{j}|c_{j}||\alpha_{j}|\leqslant 1\).

The ratio preceding \(\eta^{2}\) is a one-parameter real function in \(|c_{1}|^{2}\) (since \(|c_{2}|^{2}=1-|c_{1}|^{2}\)) defined on \([0,1]\); it turns out that it takes values only in \([-1,-0.7]\) and tends to \(-1\) only when \(|c_{1}|^{2}\) tends to \(0\) or \(1\). Therefore, in dimension \(2\), we still have (at least at leading order):

\[\eta(t)=\sqrt{1-\inf_{|\Psi_{\mathcal{S}}(0)\rangle}\frac{S(\rho_{\mathcal{S}}(t))}{S(\rho_{\mathcal{S}}^{(d)})}}.\]

In higher dimension, if we suppose that one of the \(\langle\mathcal{E}_{j}(t)|\mathcal{E}_{i}(t)\rangle\) decreases much slower than the others (assume without loss of generality that it is \(\langle\mathcal{E}_{2}(t)|\mathcal{E}_{1}(t)\rangle\), still denoted \(f(t)\)), then after some time \(\rho_{\mathcal{S}}(t)\) is not very different from:

\[\begin{pmatrix}|c_{1}|^{2}&c_{1}\overline{c_{2}}f(t)&&&\\ \overline{c_{1}}c_{2}\overline{f}(t)&|c_{2}|^{2}&&&\\ &&|c_{3}|^{2}&&\\ &&&\ddots&\\ &&&&|c_{d}|^{2}\end{pmatrix}\]

Using the previous inequality in dimension \(2\):

\[\frac{S(\rho_{\mathcal{S}}(t))}{S(\rho_{\mathcal{S}}^{(d)})}\geqslant\frac{(1-\eta^{2}(t))(|c_{1}|^{2}+|c_{2}|^{2})+|c_{3}|^{2}+\cdots+|c_{d}|^{2}}{|c_{1}|^{2}+\cdots+|c_{d}|^{2}}\geqslant 1-\eta^{2}(t)\]

and, once again, this bound is attained for an appropriate choice of the \((c_{i})_{1\leqslant i\leqslant d}\).
2308.11278
Hybrid sample size calculations for cluster randomised trials using assurance
Sample size determination for cluster randomised trials (CRTs) is challenging as it requires robust estimation of the intra-cluster correlation coefficient (ICC). Typically, the sample size is chosen to provide a certain level of power to reject the null hypothesis in a hypothesis test. This relies on the minimal clinically important difference (MCID) and estimates for the standard deviation, ICC and possibly the coefficient of variation of the cluster size. Varying these parameters can have a strong effect on the sample size. In particular, it is sensitive to small differences in the ICC. A relevant ICC estimate is often not available, or the available estimate is imprecise. If the ICC used is far from the unknown true value, this can lead to trials which are substantially over- or under-powered. We propose a hybrid approach using Bayesian assurance to find the sample size for a CRT with a frequentist analysis. Assurance is an alternative to power which incorporates uncertainty on parameters through a prior distribution. We suggest specifying prior distributions for the standard deviation, ICC and coefficient of variation of the cluster size, while still utilising the MCID. We illustrate the approach through the design of a CRT in post-stroke incontinence. We show assurance can be used to find a sample size based on an elicited prior distribution for the ICC, when a power calculation discards all information in the prior except a single point estimate. Results show that this approach can avoid misspecifying sample sizes when prior medians for the ICC are very similar but prior distributions exhibit quite different behaviour. Assurance provides an understanding of the probability of success of a trial given an MCID and can be used to produce sample sizes which are robust to parameter uncertainty. This is especially useful when there is difficulty obtaining reliable parameter estimates.
S. Faye Williamson, Svetlana V. Tishkovskaya, Kevin J. Wilson
2023-08-22T08:47:51Z
http://arxiv.org/abs/2308.11278v1
# Hybrid sample size calculations for cluster randomised trials using assurance

###### Abstract

**Background/Aims:** Sample size determination for cluster randomised trials (CRTs) is challenging because it requires robust estimation of the intra-cluster correlation coefficient (ICC). Typically, the sample size is chosen to provide a certain level of power to reject the null hypothesis in a two-sample hypothesis test. This relies on the minimal clinically important difference (MCID) and estimates for the overall standard deviation, the ICC and, if cluster sizes are assumed to be unequal, the coefficient of variation of the cluster size. Varying any of these parameters can have a strong effect on the required sample size. In particular, it is very sensitive to small differences in the ICC. A relevant ICC estimate is often not available, or the available estimate is imprecise due to being based on studies with low numbers of clusters. If the ICC value used in the power calculation is far from the unknown true value, this could lead to trials which are substantially over- or under-powered.

**Methods:** In this paper, we propose a hybrid approach using Bayesian assurance to determine the sample size for a CRT in combination with a frequentist analysis. Assurance is an alternative to traditional power, which incorporates the uncertainty on key parameters through a prior distribution. We suggest specifying prior distributions for the overall standard deviation, ICC and coefficient of variation of the cluster size, while still utilising the MCID. We illustrate the approach through the design of a CRT in post-stroke incontinence and compare the results to those obtained from a standard power calculation.

**Results:** We show that assurance can be used to calculate a sample size based on an elicited prior distribution for the ICC, whereas a power calculation discards all of the information in the prior except for a single point estimate. Results show that this approach can avoid misspecifying sample sizes when the prior medians for the ICC are very similar, but the underlying prior distributions exhibit quite different behaviour. Incorporating uncertainty on all three of the nuisance parameters, rather than only on the ICC, does not notably increase the required sample size.

**Conclusion:** Assurance provides a better understanding of the probability of success of a trial given a particular MCID and can be used instead of power to produce sample sizes which are more robust to parameter uncertainty. This is especially useful when there is difficulty obtaining reliable parameter estimates.

**Keywords:** Assurance, Bayesian design, cluster randomised trials, expected power, hybrid approach, intra-cluster correlation, minimal clinically important difference, sample size determination.

## 1 Background

Cluster randomised trials (CRTs) are a type of randomised controlled trial (RCT) in which randomisation is at the cluster-level, rather than the individual-level as in standard RCTs. This means that _groups_ of individuals (e.g. general practices, schools or communities) are randomly allocated to different interventions (e.g. vaccination programmes or behavioural interventions). A common reason for implementing this design is to mitigate the risk of contamination, or because individual randomisation is not feasible. Other justifications are detailed in Eldridge and Kerry (2012). Individuals within a cluster are likely to share similar characteristics (e.g.
demographics), as well as be exposed to extraneous factors unique to the cluster (e.g. delivery of the intervention by the same healthcare professional). Consequently, outcomes from members of the same cluster are often correlated, which can be quantified by the intra-cluster correlation coefficient (ICC). This lack of independence reduces the statistical power compared to a standard RCT of the same size, meaning that the sample size needs to be inflated to allow for the clustering effect. Various methods for sample size determination in CRTs exist (see Rutterford et al., 2015; Gao et al., 2015), which all rely on estimation of the ICC. In practice, ICC estimates are typically based on pilot studies, but these are often too small to provide precise and reliable estimates (Eldridge et al., 2016). An alternative simple approach is to use a conservative estimate of the ICC (e.g. the upper confidence interval limit) in the sample size calculation (Browne, 1995). However, this can lead to over-powered and unnecessarily large trials. A more reliable method is to combine ICC estimates from multiple sources, such as previous trials or databases listing ICC estimates (e.g. Moerbeek and Teerenstra, 2015), and use information on patterns in ICCs (see Korevaar et al., 2021). This raises further issues such as how to effectively combine the ICC estimates, how to adequately reflect their varying degrees of relevance to the planned trial and how to capture the uncertainty in the individual ICC estimates. Lewis and Julious (2021) suggest integrating over a range of possible ICC values, determined by confidence intervals obtained using methods in Ukoumunne (2002), to provide an 'average' sample size with respect to the ICC. However, this does not consider the uncertainty present in other design parameters, such as the treatment effect and variability of the outcome measures. Further, it assumes that each value of the ICC is equally likely. Utilising a Bayesian approach for the trial design, in which prior distributions are assigned to the unknown design parameters such as the ICC, could further circumvent these issues and is particularly useful in settings where ICC estimates are not readily available. In the CRT literature, prior distributions for the ICC have been proposed based on subjective beliefs (Spiegelhalter, 2001) and single or multiple ICC estimates (Turner et al., 2004), which may be weighted by relevance of outcomes and patient population (Turner et al., 2005). These are used to estimate a distribution for the power of the planned trial for a given sample size. Within the Bayesian framework, uncertainty in other design parameters can be incorporated into the sample size calculation in a similar way, and the relative likelihood of different parameter values is encompassed through specification of the prior distribution. For example, Sarkodie et al. (2023) assign a prior to the overall standard deviation, in addition to the ICC, then describe a 'hybrid' approach to determine the sample size required to attain a desired 'expected power', defined as a weighted average of the probability that the null hypothesis is rejected (with weights determined by the priors). Hybrid approaches, which combine a Bayesian design with a frequentist analysis of the final trial data, have gained increasing popularity, particularly with respect to standard RCTs (Kunzmann et al., 2021). 
In this paper, we adopt a hybrid approach by using the Bayesian concept of _assurance_ to determine the sample size for a two-arm parallel-group CRT with a Wald test for the analysis. In contrast to traditional frequentist power, which represents a conditional probability that the trial is a success, given the values chosen for the design parameters and the hypothesised treatment effect, assurance typically refers to the _unconditional_ probability that the trial will be 'successful' (Chen and Ho, 2017). We modify this definition by conditioning on the minimal clinically important difference (MCID) instead of assigning a prior distribution to, and integrating over, the treatment effect (as in Kunzmann et al., 2021; Ciarleglio et al., 2016). This is more representative of the design stage of a trial, in which the treatment effect is typically fixed _a priori_ by investigators. Moreover, this ensures that the assurance will tend to one as the sample size increases, so it can be used analogously to traditional power, thus aiding interpretation.

A key consideration when applying a Bayesian design is how to specify suitable prior distributions. In contrast to the paper by Sarkodie et al. (2023), which assumes independent priors on the ICC and standard deviation, we suggest a joint prior distribution for these parameters, as described in Sections 2.3 and 3.2.2. In addition, we account for the fact that many CRTs have unequal cluster sizes by defining a prior distribution on the coefficient of variation of cluster size. This is often overlooked in standard sample size calculations for CRTs (Eldridge et al., 2006; Zhan et al., 2021).

Our approach is motivated by a parallel-group CRT, Identifying Continence OptioNs after Stroke (ICONS), outlined in Section 3.1. In Section 3.2, we illustrate the effects of redesigning this trial using the entire ICC prior distribution to inform sample size determination via an assurance calculation, rather than relying on a single point estimate from this distribution as in Tishkovskaya et al. (2023). The impacts of varying the ICC prior distributions on the chosen sample size are evaluated in Section 3.3. We perform sensitivity analyses on other design parameters in an additional simulation study provided in the Appendix.

Jones et al. (2021) summarise the current state of play regarding the use of Bayesian methods in CRTs. In doing so, they highlight the "need for further Bayesian methodological development in the design and analysis of CRTs...in order to increase the accessibility, availability and, ultimately, use of the approach." This paper is therefore a timely contribution.

## 2 Methods

### Analysis for CRTs

Suppose that we are designing a two-arm, parallel-group CRT assuming 1:1 randomisation of clusters and normally distributed outcomes. A common analysis following the trial is to use a linear mixed-effects model.
That is, if \(Y_{ij}\) is the response for individual \(i=1,\ldots,n_{j}\) in cluster \(j=1,\ldots,C\), then

\[Y_{ij}=\alpha+X_{j}\delta+c_{j}+e_{ij}, \tag{1}\]

where \(\alpha\) is an intercept term; \(X_{j}\) is a binary variable which takes the value 1 if cluster \(j\) is allocated to the treatment arm and 0 if it is allocated to the control arm, so that \(\delta\) represents the treatment effect; \(c_{j}\sim\mathrm{N}(0,\sigma_{b}^{2})\) is a random cluster effect with \(\sigma_{b}^{2}\) denoting the between cluster variation and \(e_{ij}\sim\mathrm{N}(0,\sigma_{w}^{2})\) is the individual-level error with \(\sigma_{w}^{2}\) denoting the within cluster variation. This can be extended for stepped-wedge CRTs by following the model in Hussey and Hughes (2007).

The ratio of the variability between clusters \(\sigma_{b}^{2}\) to the total variability \(\sigma^{2}=\sigma_{b}^{2}+\sigma_{w}^{2}\) determines the extent to which clustering induces correlations between outcomes for individuals in the same cluster. This is referred to as the _intra-cluster correlation coefficient_ (ICC), \(\rho=\sigma_{b}^{2}/\sigma^{2}\) (Kerry and Bland, 1998).

The superiority of the treatment is assessed via a hypothesis test of \(H_{0}:\delta\leq 0\) versus \(H_{1}:\delta>0\). Using a Wald test, assuming asymptotic normality, the test statistic is \(Z=\hat{\delta}/\sqrt{\mathrm{Var}(\hat{\delta})}\), where \(\hat{\delta}\) is the estimate of \(\delta\) and \(\mathrm{Var}(\hat{\delta})=4\sigma^{2}[1+\{(\nu^{2}+1)\bar{n}-1\}\rho]/C\bar{n}\), where \(\bar{n}\) is the average sample size per cluster and \(\nu\) is the coefficient of variation of cluster size, i.e. the ratio of the standard deviation of cluster sizes to the mean cluster size.

### Choosing a sample size using assurance

The power of the one-sided Wald test for significance level \(\alpha\) can be approximated (Eldridge et al., 2016) by

\[P(n\mid\delta,\mathbf{\psi})=\Phi\left(\delta\sqrt{\frac{C\bar{n}}{4\sigma^{2}[1+\{(\nu^{2}+1)\bar{n}-1\}\rho]}}-z_{1-\alpha}\right), \tag{2}\]

where \(z_{1-\alpha}\) is the \(100(1-\alpha)\%\) percentile of the standard normal distribution and \(\mathbf{\psi}=(\sigma,\rho,\nu)\) is the vector of "nuisance" parameters, excluding the treatment effect. For a two-sided Wald test, \(z_{1-\alpha}\) would be replaced by \(z_{1-\alpha/2}\). For equal cluster sizes, the power function would take the same form as equation (2), with \(\nu=0\) and \(\bar{n}=n_{j}=n\).

In a standard power calculation, the sample size would be chosen as the smallest value which gives 80% or 90% power, based on values for \(\mathbf{\theta}=(\delta,\mathbf{\psi})\). The treatment effect \(\delta\) could be specified as the MCID or an estimate based on a pilot study, similar historical trials or expert knowledge. The values used for \(\mathbf{\psi}\) are typically estimates.

Alternatively, we can use assurance to choose the sample size. Whereas the power is conditioned on the chosen estimates for \(\mathbf{\psi}\) and possibly \(\delta\), the assurance represents the _unconditional_ probability that an RCT will achieve a successful outcome (O'Hagan and Stevens, 2001). Assurance has been used almost exclusively when the value to be used for \(\delta\) is an estimate. In this case, suppose that the CRT is a success if the null hypothesis is rejected by the Wald test for \(\delta\).
Rather than using point estimates for \(\mathbf{\theta}\), we could assign a prior distribution \(\pi(\mathbf{\theta})\) to it and define the assurance \(A(n)\) as the power, averaged over the uncertainty in \(\mathbf{\theta}\): \[A(n) = \int_{\mathbf{\theta}}\Pr(H_{0}\ \mathrm{rejected}\mid\mathbf{\theta}) \pi(\mathbf{\theta})d\mathbf{\theta}, \tag{3}\] \[= \int_{\mathbf{\theta}}P(n\mid\mathbf{\theta})\pi(\mathbf{\theta})d\mathbf{ \theta}.\] One disadvantage of the assurance is that it tends to \(\Pr(\delta>0)\) under \(\pi(\delta)\) as the sample size increases. That is, unlike power, there may be no sample size for which the assurance is above the typical thresholds of 80% or 90%. Kunzmann et al. (2021) avoid this by conditioning the prior distribution for \(\delta\) on \(\delta>0\) in the assurance calculation. In this paper, we consider the following alternative approach. The assurance in (3) assumes that we choose \(\delta\) in the sample size calculation based on _a priori_ considerations of the likelihood of the treatment effect. Instead, we consider the assurance in conjunction with a trial planned using the relevance argument, that is, using the MCID for \(\delta\), \(\delta_{M}\). In this case, there is no need to define a prior distribution for \(\delta\), and the assurance reduces to \[A(n\mid\delta_{M}) = \int_{\psi}P(n\mid\delta_{M},\mathbf{\psi})\pi(\mathbf{\psi})d\mathbf{\psi}.\] The advantage of this is that the assurance will now tend to one as the sample size increases. To evaluate the assurance in practice, we sample values of \((\mathbf{\psi}_{j})_{j=1,\ldots,S}\) from the prior distribution \(\pi(\mathbf{\psi})\) for some large number of samples \(S\), and use Monte Carlo simulation to approximate the assurance as \[\tilde{A}(n\mid\delta_{M}) = \frac{1}{S}\sum_{j=1}^{S}P(n\mid\delta_{M},\mathbf{\psi}_{j}), \tag{4}\] \[\approx \frac{1}{S}\sum_{j=1}^{S}\Phi\left(\delta_{M}\sqrt{\frac{C\bar{n }}{4\sigma_{j}^{2}[1+\{(\nu_{j}^{2}+1)\bar{n}-1\}\rho_{j}]}}-z_{1-\alpha} \right).\] ### Specification of priors To evaluate the assurance, we are required to specify a prior distribution for \(\mathbf{\psi}\). This simplifies to specifying marginal prior distributions for each parameter if they can be assumed independent. Given that \(\sigma^{2}\) and \(\rho\) are both functions of \(\sigma_{w}^{2}\) and \(\sigma_{b}^{2}\), it is unlikely that \(\sigma\) and \(\rho\) can be assumed independent. Therefore, we consider a joint prior distribution for \((\sigma,\rho)\) and a marginal prior distribution for \(\nu\). In order for the assurance to be a meaningful representation of the probability that the null hypothesis is rejected, these prior distributions should be informative, representing the current state of knowledge about the possible parameter values. This is an elicitation problem, and information to specify the priors can be obtained from relevant past data, expert knowledge or a combination (an example of this is given in Section 3). Since the coefficient of variation can only take positive values, a gamma distribution \(\nu\sim\mathrm{Gamma}(a_{\nu},b_{\nu})\) is a sensible choice for its prior distribution. The hyperparameters \(a_{\nu}\) and \(b_{\nu}\) could be chosen based on previous studies, via modelling or by eliciting expert knowledge (Eldridge et al., 2016). 
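Returning to the Monte Carlo approximation in (4) above, a minimal sketch is given below; it reuses the `wald_power` helper sketched in Section 2.1 and assumes that arrays of joint prior draws for \(\sigma\), \(\rho\) and \(\nu\) are already available.

```python
import numpy as np

def assurance(delta_M, sigma_s, rho_s, nu_s, C, n_bar, alpha=0.05):
    """Equation (4): average the Wald-test power over S joint prior draws
    of the nuisance parameters psi = (sigma, rho, nu)."""
    powers = wald_power(delta_M, np.asarray(sigma_s), np.asarray(rho_s),
                        np.asarray(nu_s), C, n_bar, alpha)
    return float(np.mean(powers))
```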
One way to specify a joint prior distribution for \((\sigma,\rho)\) is to assign independent priors to \(\sigma_{b}^{2}\) and \(\sigma_{w}^{2}\), which will induce a correlation between \(\rho\) and \(\sigma^{2}\). If we sample values of \(\sigma_{b}^{2}\) and \(\sigma_{w}^{2}\) from their priors, we can obtain samples from the joint prior of \((\sigma,\rho)\). Typical choices of prior distributions for \(\sigma_{b}^{2}\) and \(\sigma_{w}^{2}\) are (inverse) gamma distributions because they provide conjugacy. An alternative approach, relevant to our application, is to specify the joint distribution between \(\rho\) and \(\sigma\) directly. For example, we can utilise a bivariate copula to encode the dependence between the parameters. A bivariate copula is a joint distribution function on \([0,1]^{2}\) with standard uniform marginal distributions (Nelson, 2006). It can be used to construct a joint prior for \(\rho\) and \(\sigma\) via \[\pi_{\rho,\sigma}(\rho,\sigma)=\pi_{\rho}(\rho)\pi_{\sigma}(\sigma)c(u,v),\] where \(\pi_{\rho}\) and \(\pi_{\sigma}\) are marginal prior distributions, \(c(u,v)\) is the bivariate copula density function evaluated at \(u=F_{\rho}(\rho)\) and \(v=F_{\sigma}(\sigma)\) for prior cumulative distribution functions (CDFs) \(F_{\rho}\) and \(F_{\sigma}\). One simple choice is the Gaussian copula: \[c(u,v)=\frac{\partial^{2}}{\partial u\partial v}\Phi_{\gamma}(\Phi^{-1}(u), \Phi^{-1}(v)),\] where \(\Phi_{\gamma}\) is the CDF of the bivariate standard normal distribution with correlation \(\gamma\), and \(\Phi^{-1}\) is the inverse univariate standard normal CDF. The advantage of this structure is that it allows specification of the marginal prior distributions for \(\rho\) and \(\sigma\) separately to their dependence, which is given by \(\gamma\). Application ### The ICONS post-stroke incontinence CRT The approach developed in this paper is motivated by a planned parallel-group CRT, "Identifying Continence OptioNs after Stroke" (ICONS), which investigates the effectiveness of a systematic voiding programme in secondary care versus usual care on post-stroke urinary incontinence for people admitted to NHS stroke units (Thomas et al., 2015). The primary outcome is the severity of urinary incontinence at three months post-randomisation, measured using the International Consultation on Incontinence Questionnaire (Avery et al., 2004). Although a feasibility trial, ICONS-I (Thomas et al., 2014), was conducted, the resulting ICC estimate was of low precision and could not be used as a reliable single source to inform the planning of the proposed trial. ICONS therefore considered a Bayesian approach to combine multiple ICC estimates from 16 previous related trials. The opinions of eight experts regarding the relevance of the previous ICC estimates were elicited (O'Hagan, 2019) and used to assign weights to each study and each outcome within a study. The elicited study and outcome weights were combined using mathematical aggregation (O'Hagan et al., 2006) and incorporated into a Bayesian hierarchical model following the method in Turner et al. (2005). The resulting constructed ICC distribution had a posterior median of \(\hat{\rho}=0.0296\) with a \(95\%\) credible interval of \((0.00131,0.330)\). Details of the expert elicitation process and modelling are described in Tishkovskaya et al. (2023). 
For the ICONS CRT, the sample size was chosen to give 80% power with a 5% significance level to detect \(\delta_{M}=2.52\) using a two-tailed independent-samples \(t\)-test and a common standard deviation \(\sigma\) of 8.32 obtained from the ICONS-I feasibility trial. The ICC was assumed to be less than or equal to \(\hat{\rho}=0.0296\). It was assessed as realistic to recruit between 40 and 50 stroke units, which required total sample sizes of \(N=480\) and \(N=450\), respectively, and an average sample size per cluster of \(n=12\) and \(n=9\), respectively. The original sample size calculation assumed equally sized clusters. However, if we consider unequal cluster sizes with \(\nu=0.49\) (obtained from ICONS-I) and apply the Wald test, the required sample sizes remain the same. ### Redesigning the ICONS CRT using assurance We consider assurance as an alternative to power to determine the sample size for the ICONS CRT. This seems like a more natural approach given the uncertainty in the ICC and the extensive elicitation and modelling that was conducted to construct the ICC posterior distribution (which forms the prior distribution for the assurance-based sample size calculation). Moreover, assurance incorporates the full ICC distribution into the sample size calculation, rather than relying on a single point estimate from it as in the power calculation. We consider the following two forms of assurance. #### 3.2.1 Assurance based on the ICC prior only In the first case, we fix \((\sigma,\nu)\) using the point estimates obtained from ICONS-I and only consider the assurance with respect to the ICC. We sample \(S=10,000\) values of \(\rho\) from its distribution (see Figure 1) and approximate the assurance using (4). To obtain an assurance of 80%, the resulting average sample sizes per cluster are \(\bar{n}=17\) for \(C=40\) clusters (\(C=20\) per arm) and \(\bar{n}=11\) for \(C=50\) clusters (\(C=25\) per arm), requiring total sample sizes of \(N=680\) and \(N=550\), respectively (Table 1).

Figure 1: Histogram of 10,000 samples of the ICC, \(\rho\).

Thus, the inclusion of uncertainty in the ICC results in a larger sample size than when using the posterior median ICC, but provides a more realistic and robust study design. Compared to the classical approach, the total sample size attained is smaller for the smaller number of clusters. The left-hand side plot of Figure 2 illustrates the trade-off between cluster size and \(\text{assurance}/\text{power}\), for \(C=40\) clusters (\(C=20\) per arm). The power calculation based on the median from the elicited prior distribution of \(\rho\) is represented by the red curve and the assurance with a prior on \(\rho\) only by the black curve. We see that the assurance requires a larger sample size than power when the target lies above 0.5. We also include the power curve corresponding to the commonly used approach of taking the median of the 34 ICC estimates (blue line). For a target power of 0.8 (horizontal line), Figure 2 shows that this method requires a larger sample size per cluster than the aforementioned methods. We illustrate the effect of changing \(\nu=(0,0.1,\ldots,1)\) on the assurance in the right-hand side plot of Figure 2. The red curve corresponds to \(\nu=0.5\), the top curve to \(\nu=0\) and the bottom curve to \(\nu=1\). As \(\nu\) increases, the assurance decreases for a given cluster size. We see that the estimate of \(\nu\) has a relatively strong effect on the assurance, and hence the required sample size.
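As an aside, the ICC-prior-only search just described can be organised as in the sketch below, which reuses the `assurance` helper from Section 2.2. The beta draw merely stands in for the elicited ICC prior (not available here), so the returned cluster sizes are illustrative rather than a reproduction of \(\bar{n}=17\) and \(\bar{n}=11\).

```python
import numpy as np

rng = np.random.default_rng(2023)
rho_s = rng.beta(1.2, 30, size=10_000)      # placeholder for the elicited ICC samples
sigma_hat, nu_hat = 8.32, 0.49              # point estimates from ICONS-I

def required_n_bar(C, target=0.8, n_max=200):
    """Smallest mean cluster size whose assurance (prior on the ICC only,
    sigma and nu fixed at their estimates) reaches the target."""
    for n_bar in range(2, n_max + 1):
        if assurance(2.52, sigma_hat, rho_s, nu_hat, C, n_bar) >= target:
            return n_bar
    return None

print(required_n_bar(C=40), required_n_bar(C=50))
```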
This implies that \(\nu\) needs to be estimated accurately, or its uncertainty should be taken into account in the assurance calculation. #### 3.2.2 Assurance based on the prior for \(\psi\) In the second case, we obtain the sample size required using an assurance calculation which averages over a prior distribution on \(\sigma\) and \(\nu\), as well as the ICC. Using the data from ICONS-I, we give \(\sigma\) and \(\nu\) gamma marginal prior distributions, centred at their estimated values of 8.32 and 0.49, respectively. The standard deviations of the prior distributions are chosen to represent a belief that \(\sigma\) is very likely to be in the range \([5,11]\) and \(\nu\) is very likely to be in the range \([0.3,0.7]\). Specifically, \(\sigma\sim\text{Gamma}(a_{\sigma},b_{\sigma})\) and \(\nu\sim\text{Gamma}(a_{\nu},b_{\nu})\), where \(a_{.}=m_{.}^{2}/v_{.}\) and \(b_{.}=m_{.}/v_{.}\), \(m_{\sigma}=8.32,v_{\sigma}=1^{2}\) and \(m_{\nu}=0.49,v_{\nu}=0.066^{2}\). To incorporate the dependence between \(\rho\) and \(\sigma\), we utilise the Gaussian copula with \(\gamma=0.44\). This is chosen to be consistent with the correlation between \(\rho\) and \(\sigma\) that would result from independent prior distributions on the between and within group variances of \(\sigma_{b}^{2}\sim\mathrm{Gamma}(0.18,0.04)\) and \(\sigma_{w}^{2}\sim\mathrm{Gamma}(21.06,0.32)\), respectively. The hyperparameters of these two gamma prior distributions are chosen to provide the correct marginal means and variances for \(\rho\) and \(\sigma\). To sample values of \(\rho\) and \(\sigma\) from their joint prior distribution, we repeat the following steps: 1. Sample \((x_{i},y_{i})\), \(i=1,\ldots,S\) from \(\mathrm{N}_{2}(\mathbf{0},R)\), where \(R\) is the prior correlation matrix with diagonal elements 1 and off-diagonal elements \(\gamma=0.44\). 2. Calculate \((\rho_{i},\sigma_{i})\) as \(\big(F_{\rho}^{-1}(\Phi(x_{i})),F_{\sigma}^{-1}(\Phi(y_{i}))\big)\). The quantile function \(F_{\sigma}^{-1}\) is that of the relevant normal distribution. The empirical quantile function \(F_{\rho}^{-1}\) is used for \(\rho\), based on the 10,000 prior samples.

Figure 2: Power and assurance curves for the ICONS CRT (left). The power using the posterior median ICC is red, the power using the median ICC from the 34 ICC estimates is light blue, the assurance with a prior only on the ICC is black and the assurance with a prior on all of the nuisance parameters \(\mathbf{\psi}\) is green. The effect of varying the coefficient of variation \(\nu\) on the assurance (right). \(\nu\) varies between 0 (top curve) and 1 (bottom curve), with the red line at \(\nu=0.5\). The horizontal line indicates the desired power/assurance. Each plot corresponds to \(C=40\) (\(C=20\) per arm).

The resulting joint prior distribution for \((\sigma,\rho)\) and marginal prior distribution for \(\nu\) are illustrated in Figure 3. We see that the marginal prior for \(\rho\) remains as in Figure 1, but the samples are positively correlated with the values of \(\sigma\). The resulting average cluster sample sizes for an assurance of 80% are \(\bar{n}=18\) for \(C=40\) clusters (\(C=20\) per arm) and \(\bar{n}=12\) for \(C=50\) clusters (\(C=25\) per arm), requiring total sample sizes of \(N=720\) and \(N=600\), respectively. By incorporating uncertainty on \(\sigma\) and \(\nu\), as well as \(\rho\), the sample size increases only slightly, as illustrated in the left-hand side plot of Figure 2 (green line).
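Before moving on, the two sampling steps above can be written compactly as follows. The gamma hyperparameters follow the \(a=m^{2}/v\), \(b=m/v\) rule given in the text, `rho_prior` again stands in for the 10,000 elicited ICC samples, and mapping \(\sigma\) through its gamma quantile (rather than a normal approximation) is our choice for the sketch.

```python
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng(0)
S = 10_000
rho_prior = rng.beta(1.2, 30, size=S)        # placeholder for the elicited ICC samples

# Gamma marginals for sigma and nu from mean m and variance v (a = m^2/v, b = m/v).
a_sig, b_sig = 8.32**2 / 1.0**2, 8.32 / 1.0**2
a_nu,  b_nu  = 0.49**2 / 0.066**2, 0.49 / 0.066**2
nu_s = gamma.rvs(a_nu, scale=1 / b_nu, size=S, random_state=rng)

# Step 1: bivariate standard normals with correlation gamma = 0.44.
g = 0.44
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, g], [g, 1.0]], size=S)

# Step 2: push through the copula -- empirical quantile for rho, gamma quantile for sigma.
rho_s = np.quantile(rho_prior, norm.cdf(z[:, 0]))
sigma_s = gamma.ppf(norm.cdf(z[:, 1]), a_sig, scale=1 / b_sig)

# The draws (sigma_s, rho_s, nu_s) can then be passed to the assurance sketch of Section 2.2.
```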
To achieve a target assurance of 80% (dashed horizontal line), the average sample size required per cluster increases from 17 to 18 when \(C=40\); an increase in total sample size of approximately 5%.

Figure 3: The joint prior distribution between \(\rho\) and \(\sigma\) (left) and the marginal prior distribution for \(\nu\) (right), based on 10,000 samples.

Table 1 summarises the sample sizes required to attain a target power/assurance of 80% for the various approaches applied to the ICONS trial. "Classical approach" refers to the multiple-estimate method of taking the median of the ICC estimates without taking the relevance of the different studies into account. Relative to the classical approach that is often used in practice, the total sample size required when using the assurance-based method remains the same whilst incorporating uncertainty on all three parameters.

\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Method & Priors & \begin{tabular}{c} Total number \\ of clusters, \(C\) \\ \end{tabular} & \begin{tabular}{c} Mean cluster \\ size, \(\bar{n}\) \\ \end{tabular} & \begin{tabular}{c} Total sample \\ size, \(N\) \\ \end{tabular} \\ \hline Power & NA & 50 & 10 & 500 \\ (classical approach) & & 40 & 18 & 720 \\ \hline Power & NA & 50 & 9 & 450 \\ (based on posterior median) & & 40 & 12 & 480 \\ \hline Assurance & \(\rho\) & 50 & 11 & 550 \\ & & 40 & 17 & 680 \\ \hline Assurance & \(\psi=(\sigma,\rho,\nu)\) & 50 & 12 & 600 \\ & & 40 & 18 & 720 \\ \hline \end{tabular} \end{table} Table 1: Summary of sample sizes obtained for the ICONS CRT based on power and assurance calculations.

### Sensitivity analysis for the ICC prior In the above, we consider the ICC prior distribution based on all eight reviewers and all 16 relevant studies. In this section, we investigate the sensitivity of the assurance-based sample size (with priors on \(\psi\)) to varying assumptions on the reviewers and relevant studies, and compare this to the sensitivity of the sample sizes from power calculations (using the posterior median ICC). To recognise uncertainty in the individual reviewers' responses, and in how these responses were pooled, the mathematical aggregation was refitted with alternative reviewer importance weights: equal weights of 0.125 for all reviewers and using a rank sum approach (see Tishkovskaya et al., 2023). For the rank sum approach, we use Cronbach's alpha score and assign ranks to each reviewer according to this score. In addition, we rerun the Bayesian hierarchical model for only the top 4 (25%), 8 (50%) and 12 (75%) most relevant studies. We refer to the five variations of the original ICC prior distribution as: equal weights, differentiated weights, top 4, top 8 and top 12. The differentiated weights prior (red) and equal weights prior (green) are provided alongside the original prior (black) in the left-hand side plot of Figure 4. The top 4 prior (red), top 8 prior (green) and top 12 prior (blue) are given alongside the original prior (black) in the right-hand side plot of Figure 4. In both plots, the prior medians are given by vertical dashed lines. We see that the ICC prior remains similar to the original prior whether differentiated weights or equal weights are used, although both alternative weightings assign more probability to the ICC taking larger values. There is a larger change when using the top 4, top 8 or top 12 studies. In each case, the alternative prior is more diffuse than the original prior. Relatively large changes in the prior can cause only small changes in the prior median (e.g.
the original prior compared to the top 12 prior). The effects of the alternative ICC priors on the sample sizes are shown in Table 2. We see smaller changes in sample sizes for \(C=50\) than \(C=40\) using assurance. Overall, we observe larger changes in sample size using assurance than power based on the prior median of the ICC.

Figure 4: Left: The densities of the differentiated weights (red), equal weights (green) and original ICC prior (black). Right: The densities of the top 4 (red), top 8 (green), top 12 (blue) and original ICC prior (black). The prior medians are represented by vertical dashed lines.

This illustrates the risk with using just the median; it takes no account of the prior probability that the ICC could be relatively large, so has the potential to systematically underestimate the required sample size. In contrast, the assurance-based sample size is sensitive to the entire ICC prior distribution, particularly the upper tail. To illustrate this point, compare the original ICC prior (black) to the top 12 prior (blue) in the right hand side of Figure 4. They have substantially different priors, resulting in large differences in sample sizes required under assurance (600 versus 750 when \(C=50\), respectively). However, their prior medians are almost identical, resulting in identical sample size requirements under power (450 when \(C=50\)).

\begin{table} \begin{tabular}{|l|l l|l l|l l|l l|} \hline & \multicolumn{4}{c|}{\(C=50\)} & \multicolumn{4}{c|}{\(C=40\)} \\ \hline & \multicolumn{2}{c|}{Assurance} & \multicolumn{2}{c|}{Power} & \multicolumn{2}{c|}{Assurance} & \multicolumn{2}{c|}{Power} \\ ICC Estimate/Prior & \(\bar{n}\) & \(N\) & \(\bar{n}\) & \(N\) & \(\bar{n}\) & \(N\) & \(\bar{n}\) & \(N\) \\ \hline Original & 12 & 600 & 9 & 450 & 18 & 720 & 12 & 480 \\ Differentiated weights & 13 & 650 & 10 & 500 & 20 & 800 & 13 & 520 \\ Equal weights & 13 & 650 & 9 & 450 & 19 & 760 & 13 & 520 \\ Top 4 & 12 & 600 & 8 & 400 & 18 & 720 & 11 & 440 \\ Top 8 & 14 & 700 & 9 & 450 & 23 & 920 & 12 & 480 \\ Top 12 & 15 & 750 & 9 & 450 & 24 & 960 & 12 & 480 \\ \hline \end{tabular} \end{table} Table 2: The average sample size per cluster \(\bar{n}\) and the total sample size \(N\) required for the ICONS CRT using assurance (with priors on \(\boldsymbol{\psi}\)) and power based on the original ICC estimate/prior and five alternative estimates/priors when \(C=50\) and \(C=40\). The power is based on the posterior median of the ICC.

In the Appendix, we further evaluate the properties of the hybrid approach compared to power via a simulation study. ## 4 Conclusions A standard sample size calculation requires pre-specification of parameters that are unknown at the design stage of a trial. Unique to sample size calculations for typical CRTs is the ICC, which requires robust estimation to avoid over- or under-powering the trial. Unnecessarily high ICC values, for example, lead to inefficient trials, increasing the number of clusters and/or participants, and overall trial costs. In practice, parameter uncertainty is typically not considered, which can be problematic given the sensitivity of the sample size to small differences in the ICC. This paper proposes an alternative approach to sample size determination for CRTs using the Bayesian concept of assurance to incorporate parameter uncertainty into the design. The advantage of this approach is that it yields designs that provide adequate power across the likely range of parameter values and is therefore more robust to parameter misspecification.
This is particularly important when there is difficulty obtaining a reliable ICC estimate, such as in the ICONS post-stroke incontinence CRT used to motivate this work. We assign prior distributions to the ICC, overall standard deviation and coefficient of variation of the cluster size, whilst setting the treatment effect equal to the MCID in line with standard practice. We consider a joint prior for the ICC and standard deviation to model the dependency between these parameters. In the motivating case-study, we use the entire ICC prior distribution elicited from expert opinion and data from previous studies to inform the sample size. Further work could consider using a commensurate prior to synthesise multiple sources of pre-trial information on the ICC, as in Zheng et al. (2023). Sensitivity analyses of the assurance-based sample size to different ICC priors showed that different behaviour of the prior, particularly in the upper tail, can have quite a strong effect on the required sample size. Using a point estimate from this prior, for example the median, can miss this overall behaviour and result in sample sizes which are systematically too small, based on current knowledge about the ICC. Additional sensitivity analyses conducted on the overall standard deviation showed that the greater the uncertainty expressed in the prior, the more robust the assurance-based sample size is (see Appendix). Uncertainty in the treatment effect can also be incorporated into the assurance calculation in a similar way. This may be appropriate for non-inferiority trials, for example, where the non-inferiority margin is fixed in advance and the treatment difference can be considered a nuisance parameter. In line with regulatory requirements, we have maintained a frequentist analysis to present a hybrid framework. Further work could consider a fully Bayesian approach by using assurance when the success criterion is based on the posterior distribution of the treatment effect (e.g. Spiegelhalter, 2001). The hybrid approach presented in this paper can be applied to avoid incorrectly powered studies resulting from ill-estimated model parameters, to mitigate the impact of uncertainty in the ICC and other nuisance parameters, and to incorporate expert opinion or historical data when designing a CRT.
2303.04261
Direct pulse-level compilation of arbitrary quantum logic gates on superconducting qutrits
Advanced simulations and calculations on quantum computers require high-fidelity implementations of quantum operations. The universal gateset approach builds complex unitaries from a small set of primitive gates, often resulting in a long gate sequence which is typically a leading factor in the total accumulated error. Compiling a complex unitary for processors with higher-dimensional logical elements, such as qutrits, exacerbates the accumulated error per unitary, since an even longer gate sequence is required. Optimal control methods promise time and resource efficient compact gate sequences and, therefore, higher fidelity. These methods generate pulses that can directly implement any complex unitary on a quantum device. In this work, we demonstrate any arbitrary qubit and qutrit gate can be realized with high-fidelity, which can significantly reduce the length of a gate sequence. We generate and test pulses for a large set of randomly selected arbitrary unitaries on several quantum processing units (QPUs): the LLNL Quantum Device and Integration Testbed (QuDIT) standard QPU and three of Rigetti QPUs: Ankaa-2, Ankaa-9Q-1, and Aspen-M-3. On the QuDIT platform's standard QPU, the average fidelity of random qutrit gates is 97.9+-0.5% measured with conventional QPT and 98.8+-0.6% from QPT with gate folding. Rigetti's Ankaa-2 achieves random qubit gates with an average fidelity of 98.4+-0.5% (conventional QPT) and 99.7+-0.1% (QPT with gate folding). On Ankaa-9Q-1 and Aspen-M-3, the average fidelities with conventional qubit QPT measurements were higher than 99%. We show that optimal control gates are robust to drift for at least three hours and that the same calibration parameters can be used for all implemented gates. Our work promises the calibration overheads for optimal control gates can be made small enough to enable efficient quantum circuits based on this technique.
Yujin Cho, Kristin M. Beck, Alessandro R. Castelli, Kyle A. Wendt, Bram Evert, Matthew J. Reagor, Jonathan L DuBois
2023-03-07T22:15:43Z
http://arxiv.org/abs/2303.04261v3
# Direct pulse-level compilation of arbitrary quantum logic gates on superconducting qutrits ###### Abstract Advanced simulations and calculations on quantum computers require high fidelity implementations of quantum circuits. The universal gateset approach builds complex unitaries from many gates drawn from a small set of calibrated high-fidelity primitive gates, which results in a lower combined fidelity. Compiling a complex unitary for processors with higher-dimensional logical elements, such as qutrits, exacerbates the accumulated error per unitary because a longer gate sequence is needed. Optimal control methods promise time- and resource- efficient compact gate sequences and, therefore, higher fidelity. These methods generate pulses that can, in principle, directly implement any complex unitary on a quantum device. In this work, we demonstrate that any arbitrary qutrit gate can be realized with high fidelity. We generated and tested pulses for a large set of randomly selected arbitrary unitaries on two separate qutrit-compatible processors, LLNL Quantum Device and Integration Testbed (QuDIT)'s standard QPU and Rigetti Aspen-11, achieving an average fidelity around 99 %. We show that the optimal control gates do not require recalibration for at least three days and the same calibration parameters can be used for all implemented gates. Our work shows that the calibration overheads for optimal control gates can be made small enough to enable efficient quantum circuits based on this technique. ## I Introduction Superconductor based quantum processors have improved significantly over the past few decades [1; 2]. Recently, several experiments have utilized quantum computers for simulations in quantum chemistry [3; 4] and quantum physics [5]. Although these small scale simulations show the potential of using quantum computers in large and complex simulations, the current usage is still limited by the coherence times of superconducting qubits and errors accumulated during the operations. Building efficient quantum circuits with compact and fast gates can overcome these difficulties. In general, quantum circuits are constructed with primitive gates that have high fidelities, but those circuits could result in a lengthy gate sequence and require significant error mitigation. Reducing the duration of the coherent control allows operations well within the coherence times of the quantum processor. A gate sequence with a small number of gates lowers the control errors introduced by each gate operation. The direct implementation of a unitary can reduce the number of required gates at a lower error rate to complete a simulation. Optimal control algorithms find pulses for a target unitary by solving time-dependent Hamiltonian of a given system [6; 7]. Recently, optimal control algorithms have been developed for closed and open quantum systems [8; 9; 10; 11; 12] to find accurate pulses efficiently. Experimental demonstrations with single qubit gates [13; 14], 0-2 swap qutrit gate [15], and two qubit gates [16] show that optimal control can be used for universal control of a quantum system. This pulse-level control can also improve the fidelity of simulations, as demonstrated with small problems in nuclear physics [17] and plasma physics [18]. In this work, we demonstrate high-fidelity arbitrary quantum logic gates prepared with optimal control technique on two qutrits that have different hardware architectures. 
Our work shows that practically any random unitaries on qutrits can be prepared with 99 % average fidelity with minimal calibration. Moreover, the calibration process is transferrable to different hardware platforms. The required gate length for a random gate can be shorter than the same operation with primitive gate sets. ## II Method We tested randomly generated optimal control pulses on two superconducting transmon quantum processors, LLNL Quantum Device and Integration Testbed (QuDIT)'s standard QPU and Rigetti Aspen-11. The QuDIT device has a single transmon made of tantalum on a sapphire substrate that has a long energy decay time [19; 20]. Rigetti Aspen-11 has 47 transmons [21], and we chose one transmon among them that has good readout distinguishability up to \(|2\rangle\). The hardware parameters on the two systems are listed in Table 1. The \(T_{2}^{*}\) times were measured with Ramsey oscillation. The Hamiltonian \(H\) of a superconducting transmon in the rotating frame is approximated by: \[H=0.5\alpha\,aaa^{\dagger}a^{\dagger}+p(t)(a+a^{\dagger})+q(t)(a-a^{\dagger}), \tag{1}\] up to \(\mathcal{O}(a^{\dagger}a)^{2}\), where \(\alpha=\omega_{12}-\omega_{01}\) is the anharmonicity, \(a\) is the lowering operator, and \(p(t)\) and \(q(t)\) are the control pulses given as time-dependent functions that we optimize. \(\omega_{ij}\) indicates the transition frequency between \(|i\rangle\) and \(|j\rangle\). For each target unitary, pulses were obtained using TensorOptimalControl, an LLNL-developed GPU-accelerated suite for implementing gradient-based quantum-control fitting protocols. This suite uses a short-time Trotter expansion similar to the well-established GRAPE [22; 23] algorithm; however, the underlying TensorFlow library enables construction of exact gradients for arbitrary objective functions, and efficient use of GPUs allows the run-time to scale logarithmically with the pulse length so long as the entire problem fits within the GPU memory. Using this approach, the typical time to solution for an arbitrary \(SU(3)\) unitary is \(\lesssim 15\) s. We fixed the pulse lengths to be 240 ns for the QuDIT and 120 ns for the Aspen-11. With this duration, the pulses converge with infidelity less than \(10^{-4}\) for any chosen gate while the amplitudes of the pulses remain below the hardware limit of the arbitrary waveform generators on the two hardware platforms. For comparison, the standard \(X\) gate on QuDIT is 152 ns long and 60 ns on Aspen-11. The pulses were calculated with a resolution of 32 points per nanosecond and then down-sampled to 1 point per nanosecond to perform them on the hardware. To achieve the best performance of the optimal control gates, we scale the pulse amplitudes by a constant, denoted by \(\gamma\), and apply different weights, \(\sigma\), to the \(\omega_{01}\) and \(\omega_{12}\) frequency components. The calibrated pulses, \(\mathcal{C}(f)\), can be written as: \[\mathcal{C}(f)=\gamma\big{[}\mathcal{X}(f<\omega_{c})+\sigma\cdot\mathcal{X}(f>\omega_{c})\big{]}, \tag{2}\] where \(f\) is either \(p(t)\) or \(q(t)\), \(\mathcal{X}(\Delta f)\) is the spectral component in the frequency range \(\Delta f\), and \(\omega_{c}\) is the average frequency of \(\omega_{01}\) and \(\omega_{12}\). The amplitude scaling by a constant converts the MHz units of the calculated pulses to voltages on the arbitrary waveform generators.
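A schematic NumPy version of the two-point calibration in equation (2) is given below. Whether the spectral weighting is applied to the envelopes \(p(t),q(t)\) or to the up-converted waveform depends on hardware details we do not reproduce here, and all numerical values (sampling step, frequencies, \(\gamma\), \(\sigma\)) are placeholders rather than the settings used on the devices.

```python
import numpy as np

def calibrate_pulse(f, dt, w01, w12, gamma, sigma_w):
    """Schematic eq. (2): scale the whole waveform by gamma and additionally
    weight the spectral content above w_c = (w01 + w12)/2 by sigma_w.
    f is a real-valued waveform sampled every dt ns; frequencies are in GHz."""
    w_c = 0.5 * (w01 + w12)
    spec = np.fft.rfft(f)
    freq = np.fft.rfftfreq(len(f), d=dt)      # GHz, since dt is in ns
    spec[freq > w_c] *= sigma_w
    return gamma * np.fft.irfft(spec, n=len(f))

# Placeholder usage: a 240 ns waveform at 32 samples per ns, carrier near omega_01.
t = np.arange(0, 240, 1 / 32)
pulse = np.cos(2 * np.pi * 3.446 * t) * np.exp(-((t - 120) / 50) ** 2)
calibrated = calibrate_pulse(pulse, dt=1 / 32, w01=3.446, w12=3.237, gamma=0.8, sigma_w=1.05)
```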
To fine-tune \(\gamma\), we repeat a gate up to 10 times, compare it to the predicted trajectories from the quantum master equation, and optimize the constant to minimize the difference between the measured and the predicted evolution. This process allows us to optimize \(\gamma\) for any random gates. Adjusting the weight between \(\omega_{01}\) and \(\omega_{12}\) components is to compensate frequency dependence of signal chain from room temperature electronics to the device at 10 mK. Previously, the weight calibration was done by constructing a spectral filter around the transition frequencies [15], which could take well over 10 minutes to construct the filter. In this work, since most of the spectral components are centered at \(\omega_{01}\) and \(\omega_{12}\) frequencies, we performed two-point calibration; instead of measuring Rabi at detuned frequencies, we applied a constant weight, \(\sigma\), to the two transition frequencies. We used 0-2 swap gate for calibration, which swaps the state population on \(|0\rangle\) and \(|2\rangle\)[15]. After a few iterations of adjusting the amplitude and the weights, the measured fidelity of the optimized 0-2 swap gate was 99.95 %. These calibration parameters were applied to all random gates tested in this work. Over a hundred random gates were tested on LLNL QuDIT and six gates on Rigetti Aspen-11 due to time constraints, nonetheless finding equivalent results. We used quantum process tomography to find the process matrix and evaluate the fidelity of each optimal control random gate. We prepared the initial states to 9 different states [24]: \[|0\rangle,|1\rangle,|2\rangle\] \[(|0\rangle+|1\rangle)/\sqrt{2},(|1\rangle+|2\rangle)/\sqrt{2},(| 0\rangle+|2\rangle)/\sqrt{2} \tag{3}\] \[(|0\rangle+i|1\rangle)/\sqrt{2},(|1\rangle+i|2\rangle)/\sqrt{2},( |0\rangle+i|2\rangle)/\sqrt{2}\] Then we applied an optimal control gate to the initial states and applied measurement operators in the complete gate set, \(A_{m}\), before readout [15]: \[A_{m}=\{I,Z_{01},Z_{12},X_{01},X_{12},Y_{01},Y_{12},X_{01}X_{12},X_{12}X_{01}\} \tag{4}\] The process map, \(\epsilon\), is defined as \[\rho_{out}=\epsilon(\rho_{in})=\sum_{mn}A_{m}\rho_{in}A_{n}^{\dagger}\chi_{mn} \tag{5}\] where \(\rho_{in}\) (\(\rho_{out}\)) is the input (output) density matrix, \(A_{m}\) is \(m-\)th operator in the complete gate set, and \(\chi_{mn}\) is the \((m,n)\) element in the process matrix \(\chi\). The process matrix can be decomposed into \(\chi=W^{\dagger}W\)[25], where the general form of \(W\) can be constructed with 81 parameters; 45 parameters for the real and 36 for the imaginary parts of the components. To assist the convergence of the 81 parameters, we set the initial parameters to the ones obtained from the ideal \(\chi\) matrix. Then we find the best-fit \(\chi\) matrix to the experimental data. We evaluate the entanglement fidelity as follows: [24] \[F_{e}=\sum_{mn}\chi_{mn}\mathrm{Tr}(U_{targ}^{\dagger}A_{m}\rho)\mathrm{Tr}( \rho A_{n}^{\dagger}U_{targ}) \tag{6}\] where \(\chi_{mn}\) is the measured process matrix, \(U_{targ}\) is the target propagator, \(\rho\) is an initial state. We sampled 1000 random initial states to find the distribution of the entanglement fidelities for each gate. 
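Given a fitted process matrix, equation (6) is a direct double sum. A minimal NumPy sketch is shown below; the operator list, target unitary and initial state are supplied by the caller, and the names are ours.

```python
import numpy as np

def entanglement_fidelity(chi, ops, U_target, rho):
    """Equation (6): F_e = sum_{mn} chi_{mn} Tr(U^dag A_m rho) Tr(rho A_n^dag U)."""
    t = np.array([np.trace(U_target.conj().T @ A @ rho) for A in ops])
    s = np.array([np.trace(rho @ A.conj().T @ U_target) for A in ops])
    return float(np.real(np.einsum('mn,m,n->', chi, t, s)))

# Sanity check: for an ideal process (single unit chi entry on the identity
# operator) the fidelity of the identity target is 1.
ops = [np.eye(3)]
chi = np.array([[1.0]])
rho = np.diag([1.0, 0.0, 0.0])
print(entanglement_fidelity(chi, ops, np.eye(3), rho))   # -> 1.0
```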
\begin{table} \begin{tabular}{c c c} \hline Parameters & LLNL QuDIT & Rigetti Aspen-11 \\ \hline \(\omega_{01}\) & 3.446 GHz & 5.145 GHz \\ \(\omega_{12}\) & 3.237 GHz & 4.930 GHz \\ \(T_{1}^{01}\) & 220 μs & 24 μs \\ \(T_{2}^{*,01}\) & 22 μs & 17 μs \\ \(T_{1}^{12}\) & 145 μs & not measured \\ \(T_{2}^{*,12}\) & 25 μs & not measured \\ \hline \end{tabular} \end{table} Table 1: LLNL QuDIT’s device and Rigetti Aspen-11 parameters. \(\omega_{ij}\) indicates the transition frequency from \(|i\rangle\) to \(|j\rangle\). \(T_{1}^{ij}\) is the energy decay time and \(T_{2}^{ij}\) is the decoherence time in \(i-j\) manifold. ## III Result One example of a randomly generated qutrit unitary is: \[T=\begin{pmatrix}0.44+0.61i&-0.57+0.05i&0.03+0.32i\\ 0.30+0.49i&0.34-0.07i&0.12-0.73i\\ 0.19-0.25i&-0.16-0.72i&0.59+0.00i\end{pmatrix} \tag{7}\] Figure 1(a) shows the generated pulses for gate \(T\). Most of the frequency components of the pulses are centered around \(\omega_{01}\) and \(\omega_{12}\), as shown in Figure 1(b). The rest of the frequency components, including 2-3 transition frequency \(\omega_{23}\), are at least two orders of magnitude smaller than that of \(\omega_{01}\) and \(\omega_{12}\) and therefore, most of the transitions occur on the 0-1 and 1-2 manifolds. For other random gates, the spectral components are also concentrated at \(\omega_{01}\) and \(\omega_{12}\), which validates our two-point calibration method for frequency dependent components in the measurement chain. Figure 2 shows the time dynamics of the quantum states during the gate operation on QuDIT. The measurement sequence was as follows: i) prepare the states to \(|0\rangle\), \(|1\rangle\), or \(|2\rangle\), ii) apply \(p(t)\) and \(q(t)\) for time step (\(t\leq\) gate time). The three panels correspond to different initial states \(|\psi_{\text{init}}\rangle\): \(|0\rangle\) in panel (a), \(|1\rangle\) in panel (b), and \(|2\rangle\) in panel (c). The solid lines (expected) and the open circles (as measured) indicate the state population on \(|0\rangle\) (black), \(|1\rangle\) (red), and \(|2\rangle\) (blue). The expected trajectories were calculated using Python QuTiP package [26; 27]. The measured trajectories match with the expected ones within \(\sim 5\%\) for the three initial states, which shows the control error. We did not correct the state preparation and measurement error. This time evolution measurement provides a quick way of validating the calibration parameters obtained from 0-2 swap gate in random gates. To amplify small errors that may not have been captured with one gate application, we applied the gate \(T\) repeatedly up to 50 times. Figure 3(a) shows the population change on the three states up to 50 times of the gate application. Black, red, and blue open circles (experiment) and solid lines (calculated) correspond to \(|0\rangle\), \(|1\rangle\), and \(|2\rangle\), respectively. The initial state was on the ground state. As we applied the gate repeatedly, the measured population on the three states agree well with the predicted population from the master equation within \(\sim 1\,\%\). At higher repetitions, the measured and the predicted populations differ by \(\sim 6\,\%\) because of the accumulated state preparation and measurement error and the dephasing of the transmon. To measure the fidelities of the gates and observe potential errors, we performed quantum process tomography as described in the methods section. 
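The model trajectories used for comparison above (computed in the paper with QuTiP) can be sketched in a few lines. Decoherence is omitted for brevity, and the Gaussian envelopes, drive amplitudes, the normal-ordered Kerr term and the factor of \(i\) on the second quadrature are our assumptions standing in for the actual optimised pulses and the exact form of equation (1).

```python
import numpy as np
from qutip import destroy, basis, mesolve

# Three-level transmon in the rotating frame (angular units of rad/ns, time in ns).
alpha = 2 * np.pi * (3.237 - 3.446)              # omega_12 - omega_01 from Table 1
a = destroy(3)
H0 = 0.5 * alpha * a.dag() * a.dag() * a * a     # Kerr term (normal order assumed)
Hp = a + a.dag()
Hq = 1j * (a - a.dag())                          # factor i assumed so H is Hermitian

def p(t, args=None):                             # placeholder envelopes
    return 0.05 * np.exp(-((t - 120.0) / 40.0) ** 2)

def q(t, args=None):
    return 0.02 * np.exp(-((t - 120.0) / 40.0) ** 2)

tlist = np.linspace(0.0, 240.0, 241)             # 240 ns gate, 1 ns steps
proj = [basis(3, k) * basis(3, k).dag() for k in range(3)]
result = mesolve([H0, [Hp, p], [Hq, q]], basis(3, 0), tlist, e_ops=proj)
# result.expect[k] is the population of |k> versus time, to be compared with
# the measured trajectories; decoherence can be added through c_ops.
```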
We obtained the fidelity by comparing the expected (left) and the measured (right) \(\chi\) matrices in Fig. 3(b). The entanglement fidelity of the gate shown in Fig. 3(a) with 1000 random initial states is \(98.7\pm 0.8\,\%\).

Figure 1: (a) Pulses in the qubit 0-1 rotating frame for a specific unitary \(T\) that is randomly generated. (b) Frequency components of the pulse in panel (a) are concentrated at the qubit 0-1 and 1-2 transition frequencies.

Figure 2: Time evolution of quantum states when the random gate \(T\) is being played. Panel (a), (b), and (c) correspond to different initial states at \(|0\rangle\), \(|1\rangle\), and \(|2\rangle\), respectively.

We repeated the same analysis for 109 other randomly generated gates to obtain statistics on QuDIT. Figure 3(c) shows the distribution of the entanglement fidelities of 110 random optimal control gates and the solid curve indicates a Gaussian fit of the histogram. The mean fidelity is \(98.6\,\%\) while the maximum fidelity is \(99.4\,\%\) and the minimum fidelity is \(96.0\,\%\). We applied the same calibration procedure to Rigetti Aspen-11 and tested six random gates, indicated by dark-red triangles in Fig. 3(c). The average fidelity on Aspen-11 is \(99.1\,\%\). The data on QuDIT was taken over a time span of three days and the gate fidelities could change due to small fluctuations in the quantum system. To find the range of fidelity over time, we measured the fidelity of the same gate every hour for 80 hours. We did not recalibrate the pulses during this measurement. In Fig. 3(d), the open squares show the fidelity at each time point. The fidelity of a gate ranges from \(98.2\,\%\) to \(99.3\,\%\) with the time-average fidelity at \(99.1\pm 0.3\,\%\), indicated by the blue opaque horizontal line and the shaded area. From the time-dependent measurement, we found that the fidelity of a gate could fluctuate by about \(1\,\%\). Another potential reason for relatively low fidelities for some gates is control errors, which will be discussed later. We extended the single gate measurement to multiple different concatenated gates. On LLNL QuDIT, the total gate length is \((240\times m)\,\mathrm{ns}\) where \(m\) is the number of concatenated gates. In Fig. 4, we randomly chose \(2^{n}\) gates, where \(n=1,2,\cdots,7\), from the 110 gates measured above, allowing for gates to be selected multiple times. The data points in Fig. 4 are the measured fidelities with resampled random gates each time. As one concatenated gate becomes longer and the gate length gets closer to the coherence time of the qutrit, the measured process matrix substantially deviates from the expected one, which makes the convergence of a fit difficult and time-consuming. In this part of the fidelity measurement, we constructed a unitary with 9 parameters as \(U=B^{\dagger}B\), where \(B\) is given below: \[B=\begin{pmatrix}x_{1}&x_{2}+ix_{3}&x_{4}+ix_{5}\\ 0&x_{6}&x_{7}+ix_{8}\\ 0&0&x_{9}\end{pmatrix} \tag{8}\] The predicted trajectories were calculated with the quantum master equation with fixed Lindbladian terms, obtained from the coherence times of the qutrits. For accurate fidelity measurement, we repeated the concatenated gate sequence up to 10 times and found the best fit between the measured and the expected trajectories.

Figure 3: Characterization of optimal control random gates. (a) Population change on \(\ket{0}\), \(\ket{1}\), and \(\ket{2}\) states as the random gate \(T\) is applied repeatedly up to 50 times. The open circles are the measured state populations and the solid lines are the expected evolution. (b) The expected (left) and the measured (right) process matrices are shown in Hinton plots. The X and Y labels are the complete gate set operators. The size and the color of the squares indicate the magnitude and the phase of each element in the unit of \(\pi\), respectively. (c) Fidelity distribution of 110 random gates measured on QuDIT shows that the mean fidelity is \(98.6\,\%\) with the maximum fidelity at \(99.4\,\%\). The dark-red triangles are the fidelities for six different gates measured on Aspen-11. The average fidelity on Aspen-11 is \(99.1\,\%\). (d) We monitored fidelity of the random gate \(T\) over 80 hours without recalibrating the parameters. The time-averaged gate fidelity is \(99.1\,\%\). The highest fidelity is \(99.3\,\%\) while the lowest is \(98.2\,\%\).

Figure 4: Average fidelity of the concatenated random gates up to 128 is \(97.4\,\%\) on QuDIT. The measured fidelities are shown in black squares. The blue solid line is a fit with \(F^{d}\) where \(F\) is the averaged fidelity and \(d\) is the gate depth. The black dashed line indicates \(T_{2}^{*}\) on QuDIT in 0-1 manifold.

As a reference, for the same gates, the fidelity measured from the 9-parameter unitary model is \(99.4\,\%\), which is \(0.8\pm 0.2\,\%\) higher than the generalized \(\chi\) matrix fit with 81 parameters. The difference between the two fidelity metrics indicates that the control error on the QuDIT is about \(0.8\,\%\). To find the average fidelity of concatenated gates, \(F\), up to 128 gates, we fit the data with \(F^{d}\) where \(d\) is the number of concatenated gates. The obtained average fidelity \(F\) is \(97.4\,\%\). ## IV Discussion The average fidelity of the randomly generated optimal control gates is lower than that of the 0-2 swap gate that we used as a reference, whose fidelity is \(99.95\,\%\). In the 0-2 swap gate operation, the state goes back and forth between \(|0\rangle\) and \(|2\rangle\), which is not sensitive to phase errors that can arise from imperfect control. Random gates have more complex unitaries. The resulting state after such a gate application is often in a superposition of the \(|0\rangle\), \(|1\rangle\), and \(|2\rangle\) states. Therefore, phase errors accumulated over time can be more pronounced in this type of arbitrary gate. As shown in Fig. 3(c), the fidelities of different gates vary by \(\sim 3\,\%\), which is larger than the time fluctuation that we measured for a single gate in Fig. 3(d). This variation could be induced by a small error in the weight factor between \(\omega_{01}\) and \(\omega_{12}\). In this work, we empirically found the weight that generates the best 0-2 swap operations. The 0-2 swap gate was insensitive to a \(\pm 1\,\%\) change of the weight \(\sigma\), but it is possible that such a small change becomes more noticeable in arbitrary gates. To see the range of optimized weights for different types of gates, we tested \(\sqrt{0\text{-}2\;\text{swap}}\) and \(\sqrt{1\text{-}2\;\text{swap}}\), which are equivalent to 0-2 swap and 1-2 swap upon two gate applications, respectively. The weights were optimized for the two gates individually at three different initial states, \(|0\rangle\), \(|1\rangle\), and \(|2\rangle\). We found that the optimal weight factors indeed vary within \(3\,\%\), which could affect the fidelities by \(1-2\,\%\).
Although individual optimization would be ideal to achieve the best gate performance for each gate, it would be time-consuming as one optimization could take a few minutes to one hour. Our work shows that we can still achieve \(99\,\%\) fidelity for most of the gates with universal calibration parameters. Using a phase-sensitive gate, such as \(\sqrt{0\text{-}2\;\text{swap}}\) as a reference could provide more precise calibration parameters for any random gates. In this study, we found that the calibration parameters that we found are reusable for at least three days. When we measured the fidelity of the same gate after two weeks, we fine-tuned the amplitude constant by \(\sim 1\,\%\), which we attribute to fluctuations in the lab environment and the quantum system. One advantage of using optimal control gates is to shorten the operation time on a quantum computer. Minimizing the number of gates and the total operation time can significantly reduce the errors accumulating per gate and over time. General qutrit gates can be decomposed into \(Z^{(01)}Z^{(12)}Y^{(12)}Y^{(01)}Z^{(01)}Z^{(12)}Y^{(01)}Z^{(12)}\), where each \(Y\) and \(Z\) gates have different rotation angles. The \(Y\) gates can be further decomposed into \(Z_{\theta_{1}}X_{\pi/2}Z_{\theta_{2}}X_{\pi/2}Z_{\theta_{3}}\)[28], where \(\theta_{1}\), \(\theta_{2}\), and \(\theta_{3}\) are rotation angles. \(X_{\pi/2}\) is \(\pi/2\) pulse that brings the pure state to the superposition state in the chosen subspace. In most quantum processors, \(Z\) gates are implemented virtually so the gate time for \(Z\) is practically zero. However, due to \(X_{\pi/2}\) gate, the total gate time could take six times longer than a standard \(X_{\pi/2}\) gate. More importantly, optimal control gates can implicitly correct relative phase accumulation error between different frames in the system, such as qubit 0-1 and 1-2 manifolds. Standard gates are mostly in the form of a square, gaussian, or DRAG gaussian [29; 30]. When such pulse is played on one frame, the phase of the idling frame naturally evolves with \(\exp(-i\delta t)\), assuming no ac Stark shift, where \(\delta\) is the difference in frequencies between two frames and \(t\) is the idling time. To implement an arbitrary gate with primitive gate sets, this phase accumulation needs to be tracked and corrected. This optimal control technique could operate quantum algorithms faster with higher fidelity, such as Quantum Fourier Transform [31; 32] or variational quantum eigensolver (VQE) [33; 34; 35; 36]. We can replace a sequence of fixed gates in one of these algorithms with an optimal control pulse to reduce the operation time and the overall gate count in the circuit. Similar ideas have been suggested to use parametrized pulses as an ansatz for higher fidelity VQE calculation [37]. To minimize control errors and generate more robust pulses for a target unitary, it would be useful to build Hamiltonian model that includes the time-dependence of the system and calibration parameters. One way to achieve more accurate pulses is to compute pulses in open quantum system that includes \(T_{1}\) and \(T_{2}\) times in 0-1 and 1-2 manifolds. In Fig. 2, the populations on \(|0\rangle\) and \(|2\rangle\) states exchange through \(|1\rangle\) state as an intermediary, because the direct interaction between \(|0\rangle\) and \(|2\rangle\) is weak [38]. Our data suggests that decoherence or decay in 0-2 frame does not play a significant role in optimal control in qutrits. 
Another direction would be to explore a systematic way to achieve fast control of an arbitrary gate. To achieve the shortest gate time, there are a few challenges to overcome. When a gate becomes shorter, it tends to have higher amplitudes, which can unintentionally drive higher-energy excitations. In addition, there could be a loss of precision when the pulses are down-sampled to match the time resolution of the arbitrary waveform generators. ## V Conclusion In this work, we experimentally demonstrated that the optimal control technique can prepare any random qutrit gate with minimal calibration. Our calibration procedure is applicable to different hardware architectures, showing that optimal control is a practical and promising direction for optimized quantum circuits. ## VI Acknowledgement This work was supported by the U.S. Department of Energy under grant number SC-FES SCW1736. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
2308.06704
Diamagnetic property and optical absorption in conventional superconductors with magnetic impurities
By solving the renormalization of the $s$-$d$ interaction from magnetic impurities embeded in conventional superconductors at low concentration, we derive the macroscopic superconducting phase fluctuation and electromagnetic properties within the path-integral approach. It is found that there exist two superconducting phase modes, both exhibiting similar behaviors of the Nambu-Goldstone mode. The existence of two phase modes suggests that in addition to the conventional free Cooper pairs as in the BCS case, there emerges a small part of the localized Cooper pairs around magnetic impurities due to the quantum correlation by the $s$-$d$ interaction, acting as Josephson islands. The emerging impurity Shiba bands inside the superconducting gap then correspond to the excitations of the ground state of the localized Cooper pairs, associated with the breaking of these Cooper pairs. In the diamagnetic response, the state of the free Cooper pairs gives rise to the conventional real contribution in the generated supercurrent, whereas the one of the localized Cooper pairs results in an imaginary contribution, leading to the superconducting Friedel oscillation, i.e., oscillation in the decay of the vector potential in the Meissner effect. As for the optical absorption of a conventional superconductor lying in the anomalous-skin-effect region, it is found that besides the conventional interband transition of Bogoliubov quasiparticles as revealed by Mattis-Bardeen theory, there also exist the interband transition between the impurity Shiba bands as well as all interband transitions between Bogoliubov quasiparticle and impurity Shiba bands. These transitions exhibit clear and separate resonance characters, providing a feasible scheme for the experimental detection.
F. Yang, M. W. Wu
2023-08-13T07:29:53Z
http://arxiv.org/abs/2308.06704v2
Diamagnetic property and optical absorption in conventional superconductors with magnetic impurities ###### Abstract By solving the renormalization of the \(s\)-\(d\) interaction from magnetic impurities embeded in conventional superconductors at low concentration, we derive the macroscopic superconducting phase fluctuation and electromagnetic properties within the path-integral approach. It is found that there exist two superconducting phase modes, both exhibiting similar behaviors of the Nambu-Goldstone mode. The existence of two phase modes suggests that in addition to the conventional free Cooper pairs as in the BCS case, there emerges a small part of the localized Cooper pairs around magnetic impurities due to the quantum correlation by the \(s\)-\(d\) interaction, acting as Josephson islands. The emerging impurity Shiba bands inside the superconducting gap then correspond to the excitations of the ground state of the localized Cooper pairs, associated with the breaking of these Cooper pairs. In the diamagnetic response, the state of the free Cooper pairs gives rise to the conventional real contribution in the generated supercurrent, whereas the one of the localized Cooper pairs results in an imaginary contribution, leading to the superconducting Friedel oscillation, i.e., oscillation in the decay of the vector potential in the Meissner effect. As for the optical absorption of a conventional superconductor lying in the anomalous-skin-effect region, it is found that besides the conventional interband transition of Bogoliubov quasiparticles as revealed by Mattis-Bardeen theory, there also exist the interband transition between the impurity Shiba bands as well as all interband transitions between Bogoliubov quasiparticle and impurity Shiba bands. These transitions exhibit clear and separate resonance characters, providing a feasible scheme for the experimental detection. pacs: 74.40.+k, 74.25.Gz, 74.25.F- ## I Introduction In the past few decades, the in-gap excitations in superconducting systems have attracted much attention as they share robust gap protection from superconductors and exhibit a long-range phase coherence, allowing for the desired manipulation in potential application. Various proposals have therefore been put forward in the literature, such as the vortex bound state [1; 2] and Yu-Shiba-Rusinov state (YSR) [3; 4; 5] in conventional BCS superconductors, Andreev bound state confined in normal region of short Josephson junction [6; 7; 8; 9; 10; 11] as well as Majorana bound state localized at the boundaries of topological superconductors [12; 13; 14; 15; 16; 17; 18]. Among them, the YSR state, which was first analytically revealed by Yu [3], and later by Shiba [4] as well as Rusinov [5] in 1960s considering a classical local spin in a conventional BCS superconductor, has recently attracted the renewed and growing interest. This type of state is characterized as a pair of in-gap bound states that appear around single magnetic impurity embedded in an \(s\)-wave superconductor, with the particle-hole-symmetric excitation energies \(\pm\eta\Delta_{0}\) associated with the local Cooper-pair breaking by magnetism. Here, \(0<\eta<1\) and \(\Delta_{0}\) denotes the superconducting gap. Induced by local magnetism, the YSR state as the in-gap excitation is expected to enable the investigation in the Andreev tunneling process [19] as well as the study of the magnetic phenomena in superconductors [20; 21; 22; 23] with high energy resolution. 
As for the case at finite impurity concentration, Shiba predicted a pair of impurity bands inside the superconducting gap formed by hybridization of the YSR states from individual magnetic impurities [4], via numerically calculating the self-consistent self-energy within the random phase approximation. Nowadays, thanks to the advanced fabrication technique that can tailor and control down to each individual atom, the band formed by hybridization of the YSR states is predicted to achieve the topological superconductivity [24; 25; 26; 27; 28; 29; 30; 31; 32] as potential platform for quantum computational architectures. Inspired by the renewed attention, a great deal of experimental efforts have been devoted to the search for the existence of the YSR state. The scanning tunneling microscopy and spectroscopy (STM/STS) techniques have been widely applied in the literature to identify a pair of the in-gap resonance peaks that are symmetrically located around zero-bias. Such observation was first reported on the surface of superconducting Nb sample with Mn and Gd adatoms [33], and have now been observed in a variety of systems, ranging from different magnetic adatoms [34; 35; 36; 37; 38; 39; 40; 41], magnetic molecules [42; 43; 44; 45; 46], magnetic nanostructures (such as ferromagnetic nanowires [27] or artificial atomic chains [29]) and magnetic islands [47; 48] embedded in conventional superconductors, over magnetic molecular junctions with proximity-induced superconductivity [49], to iron-based unconventional superconductors [50; 51; 52]. In comparison with the tremendous experimental progress for the YSR state around single magnetic impurity, the scheme to detect the impurity Shiba band at finite impurity concentration is still absent in the literature, since such detection requires the macroscopic measurements concerning the non-equilibrium properties, which are beyond the STM/STS technique. For elucidating the physics of superconductivity and exploring the novel properties, the electromagnetic responses have played a significant role in the past. On one hand, the diamagnetic effect caused by induced supercurrent in magnetic response, referred to as Meissner effect [53; 54], is known as one of the fundamental phenomena of superconductors. On the other hand, the optical spectroscopy in superconductors has been proved as powerful tool to access the properties of superconductivity. Particularly, for superconductors lying in the anomalous-skin-effect region with smaller skin depth compared with the mean free path [55; 56], by measuring the optical conductivity \(\sigma_{s}(\Omega)=\sigma_{1s}(\Omega)+i\sigma_{2s}(\Omega)\) in linear optical response, fitting \(1/\Omega\)-like divergent behavior in the imaginary part \(\sigma_{2s}(\Omega)\) at low-frequency regime gives rise to the density of the superfluid [57; 58; 59; 60; 61; 62; 63]. The real part \(\sigma_{1s}(\Omega)\) (i.e., optical absorption) around terahertz-frequency regime at \(T=0\) K is attributed to the interband transition of Bogoliubov quasiparticles, according to the Mattis-Bardeen theory [64; 65]. Thus, in \(s\)-wave superconductors at \(T=0\) K, \(\sigma_{1s}(\Omega)\) vanishes at \(\Omega<2\Delta_{0}\) but becomes finite for \(\Omega\) above \(2\Delta_{0}\)[64], providing a clear feature to measure the value and symmetry of the superconducting gap [66; 67; 68; 69; 70; 71; 72; 73]. 
Consequently, as the YSR state is associated with the local Cooper-pair breaking, finite-concentration magnetic impurities embedded in conventional superconductors are expected to influence the superfluid density as well as optical absorption. By studying this influence, one can therefore reveal the feasible scheme to detect the impurity Shiba bands, and gain a deeper understanding of the competition/coexistence of the superconductivity and local magnetism, which has been one of the central and intriguing topics in the field. In this work, in conventional superconductors with magnetic impurities at low concentration, by analytically solving the renormalization of the \(s\)-\(d\) interaction, we derive the superconducting phase fluctuation and electromagnetic properties within the path-integral approach. Specifically, to elucidate the macroscopic physical picture of the ground state behind the emerging impurity Shiba bands, we calculate the superconducting phase fluctuation within the path-integral approach. It is found that there exist two superconducting phase modes, and both become inactive after the coupling with the long-range Coulomb interaction, which lifts the original gapless spectra up to high frequency as a consequence of the Anderson-Higgs mechanism [76], similar to the conventional Nambu-Goldstone mode [77; 78; 79; 80; 82; 83]. As the existence of the collective phase mode is a direct consequence of the formation of the superconducting state due to the spontaneous breaking of the continuous \(U(1)\) symmetry [77; 78; 79; 80; 81; 82; 83], the two phase modes in superconductors with magnetic impurities suggest that there exist two types of states of the Cooper pairs, forming the ground state through the direct product: a small part of the Cooper pairs become localized around individual magnetic impurities due to the quantum correlation by the \(s\)-\(d\) interaction, acting as Josephson islands, similar to the case of the granular superconductors [84; 85]; the remaining part of the Cooper pairs remains the conventional free one as in the BCS case. The Bogoliubov quasiparticle continuum and emerging impurity Shiba bands then correspond to the excitations of the ground states of the free and localized Cooper pairs, associated with the corresponding pair breaking, respectively. The proposed picture of the ground state with free and localized Cooper pairs can well capture the derived electromagnetic properties in the linear response. On one hand, in the diamagnetic response, the state of the free Cooper pairs gives rise to the conventional real contribution in the generated supercurrent as in the BCS case, whereas the one of the localized Cooper pairs results in an imaginary contribution, which can be understood by the \(\pi/2\)-phase difference between wave-vectors of the free and localized Cooper pairs. Consequently, in the diamagnetic response, in contrast to the exponential decay in the conventional Meissner effect [54; 74], the imaginary contribution in the supercurrent due to the \(s\)-\(d\) interaction from magnetic impurities leads to an oscillation in the decay of the vector potential from the surface to the interior of superconductors, similar to the Friedel oscillation in normal metals [75] due to the local modulation of the charge density by a defect. We therefore refer to this oscillation in superconductors with magnetic impurities as superconducting Friedel oscillation. 
On the other hand, it is noted that the impurity Shiba bands and Bogoliubov quasiparticle continuum, as corresponding to the ground states of the localized and free Cooper pairs, are similar to each other. Figure 1: Interband transitions in conventional superconductors with magnetic impurities. In the figure, the blue and gray shadow regions denote the impurity Shiba bands and Bogoliubov quasiparticle continuum, respectively; \(E_{t}\) and \(E_{b}\) stand for the top and bottom edges of the electron-type (e-type) impurity Shiba band, respectively; the dashed arrow denotes the interband transition from electron type to electron type or from hole type to hole type that occurs at non-zero temperature; the solid arrow represents the interband transition from hole type to electron type that can occur at zero temperature. Because of this similarity, in the optical absorption of a conventional \(s\)-wave superconductor lying in the anomalous-skin-effect region [55; 56], at zero temperature, besides the conventional interband transition (channel I in Fig. 1) of Bogoliubov quasiparticles as revealed by Mattis-Bardeen theory [64], there also exist the interband transitions (from hole type to electron type) between the impurity Shiba bands (channel II in Fig. 1) as well as between Bogoliubov quasiparticle and impurity Shiba bands (channels III and IV in Fig. 1). The channel II leads to a resonance peak centered around \(\Omega=2\eta\Delta_{0}\) in optical spectroscopy, whereas channels III and IV cause a crossover at \(\Omega=\Delta_{0}+E_{b}\), with \(E_{b}\) being the bottom edge of the impurity Shiba band. Interestingly, with increase of temperature from zero, between Bogoliubov quasiparticle and impurity Shiba bands, there gradually emerge the interband transitions from electron (hole) type to electron (hole) type, as shown by channel V (VI) in Fig. 1, leading to a crossover at \(\Omega=\Delta_{0}-E_{t}\), with \(E_{t}\) being the top edge of the impurity Shiba band. Consequently, a feasible scheme for experimental detection of the impurity Shiba band is proposed, by measuring the emerging characters in the diamagnetic property and/or optical spectroscopy. ## II Model In this section, we first introduce the Hamiltonian of superconductors in the presence of the \(s\)-\(d\) interaction between electrons and magnetic impurities, and show the corresponding renormalized Green function revealed by Shiba at finite impurity concentration within the random phase approximation. In contrast to the numerical formulation by Shiba, we present the analytical solution of the complex renormalization by the \(s\)-\(d\) interaction in the Green function at a low concentration of the magnetic impurities. A finite density of states which is centered around \(\eta\Delta_{0}\) with bandwidth proportional to the square root of the impurity concentration emerges inside the superconducting gap, suggesting the emergence of the impurity Shiba band as revealed by the numerical calculation of Shiba. Then, using the analytically obtained Green function, we present the diagrammatic formalism within the path-integral approach to investigate the electromagnetic properties of superconductors in the linear regime. 
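As a numerical companion to the analysis that follows, the sketch below evaluates the low-concentration renormalization of Sec. II.2 and the resulting density of states. It is not from the original paper: it uses the closed-form solutions quoted below [Eqs. (13)-(14) and (17)-(19)] with the branch of positive imaginary part, rewrites the density of states of Eq. (10) as \(\mathrm{Re}[u/\sqrt{u^{2}-1}]\) (a branch-handling choice made so that the result is non-negative), approximates \(\tilde{\Delta}_{0}\approx\Delta_{0}\), and takes illustrative parameter values.

```python
import numpy as np

def u_of_x(x, r, eta):
    """Renormalized ratio u = omega_tilde/Delta_tilde at 0 < x = omega/Delta_0,
    from the low-concentration solutions (Eqs. (13)-(14) inside the gap,
    Eqs. (17)-(18) above it, Eq. (19) elsewhere); positive-Im branch assumed."""
    if x < 1.0:
        B = eta**2 + x**2 - 0.5 * r / np.sqrt(1 - x**2)
        W = np.sqrt(1 - x**2) * (eta**2 + x**2) - x**2 * (eta**2 - x**2) / np.sqrt(1 - x**2)
        m2 = -B + np.sqrt(max(B**2 + r * W - (eta**2 - x**2) ** 2, 0.0))
        if m2 > 0.0:                     # inside the impurity Shiba band
            m = np.sqrt(m2)
            dx = r * x * np.sqrt(1 - x**2) * (eta**2 - x**2 + m2) / (
                (eta**2 - x**2 + m2) ** 2 + 4 * m2 * x**2)
        else:                            # in-gap but outside the band
            m, dx = 0.0, r * x * np.sqrt(1 - x**2) / (eta**2 - x**2)
    else:                                # Bogoliubov quasiparticle continuum
        m2 = (r**2 * x**2 * (x**2 - 1) / (eta**2 - x**2) ** 2) / (
            1 + r**2 * x**2 / (eta**2 - x**2) ** 2)
        m = np.sqrt(m2)
        dx = r * x * np.sqrt(abs(1 - x**2 + m2 - 2j * m * x) + 1 - x**2 + m2) / (
            np.sqrt(2) * (eta**2 - x**2))
    return x + dx + 1j * m

def dos(x, r, eta):
    """rho(omega)/(pi*D), Eq. (10) written with an explicit principal-branch choice."""
    u = u_of_x(x, r, eta)
    return np.real(u / np.sqrt(u**2 - 1 + 0j))

# Illustrative parameters: eta = 0.6, dimensionless rate r = gamma_s/Delta_0 = 0.02
xs = np.linspace(0.05, 1.6, 400)
rho = np.array([dos(x, r=0.02, eta=0.6) for x in xs])
print("in-gap spectral weight around eta*Delta_0:", rho[(xs > 0.5) & (xs < 0.7)].max())
```

With these values, the in-gap weight is confined to a narrow window around \(x=\eta\), consistent with the band edges quoted in Eq. (15).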
### Hamiltonian and renormalized Green function In conventional \(s\)-wave superconductors, the total Hamiltonian with the \(s\)-\(d\) interaction between electrons and magnetic impurities in Nambu\(\otimes\)spin space reads [4] \[H = \frac{1}{2}\sum_{\bf k}\psi^{\dagger}_{\bf k}(\xi_{\bf k}\tau_{3} \!-\!\Delta_{0}\tau_{2}\sigma_{2})\psi_{\bf k}\!-\!\frac{1}{2}J\sum_{\bf kk^{ \prime}}\psi^{\dagger}_{\bf k}\vec{\sigma}\psi_{\bf k^{\prime}}\!\cdot\!{\bf S}, \tag{1}\] where the field operator \(\psi_{\bf k}=(\psi_{\bf k\uparrow},\psi_{\bf k\downarrow},\psi^{\dagger}_{- \bf k\uparrow},\psi^{\dagger}_{-\bf k\downarrow})^{T}\); \(\xi_{\bf k}=k^{2}/(2m)-\mu\) with \(m\) denoting the effective mass and \(\mu\) being the chemical potential; \(\sigma_{i}\) and \(\tau_{i}\) are the Pauli matrices in spin and Nambu particle-hole space, respectively; \(\vec{\sigma}=\vec{\sigma}(1+\tau_{3})/2+\sigma_{2}\vec{\sigma}\sigma_{2}(1- \tau_{3})/2\); \({\bf S}\) and \(J\) denote the local spin and exchange interaction in the \(s\)-\(d\) interaction, respectively. Particularly, in consideration of a classical spin, one has \(S_{x}^{2}=S_{y}^{2}=S_{z}^{2}=S^{2}/3\) and \(S_{x}S_{y}=S_{x}S_{z}=S_{y}S_{z}=0\). It is established that the Green-function formalism provides an efficient approach to elucidate the single-particle excitation spectrum. In general, the Green function is defined as \(G_{\bf k}(\omega)=-i\langle\psi_{\bf k}(\omega)\psi^{\dagger}_{\bf k}(\omega)\rangle\), which can be solved through the Dyson equation [74; 75]: \[G_{\bf k}(\omega)=G_{0\bf k}(\omega)+G_{0\bf k}(\omega)\Sigma(\omega)G_{\bf k} (\omega). \tag{2}\] Here, \(G_{\bf k}^{(0)}(\omega)\) represents the bare Green function and \(\Sigma(\omega)\) denotes the self-energy due to the external interaction. The bare Green function of the conventional BCS superconductors is established as [74] \[G_{0\bf k}(\omega)=\frac{\omega\!+\!\xi_{\bf k}\tau_{3}\!-\!\Delta_{0}\tau_{2} \sigma_{2}}{\omega^{2}-\xi_{\bf k}^{2}-\Delta_{0}^{2}}, \tag{3}\] and within the random phase approximation to take random spatial distribution and random orientation of individual local spins, the self-energy due to the \(s\)-\(d\) interaction between electrons and magnetic impurities is given by [4] \[\Sigma(\omega)=n_{i}({\bf S}\!\cdot\!\vec{\sigma})Z(\omega)({\bf S}\!\cdot\! \vec{\sigma})\!+\!({\bf S}\!\cdot\!\vec{\sigma})Z(\omega)\Sigma(\omega)Z( \omega)({\bf S}\!\cdot\!\vec{\sigma}), \tag{4}\] where \(Z(\omega)=\sum_{\bf k}G_{\bf k}(\omega)\) and \(n_{i}\) represents the impurity concentration. To self-consistently calculate the Green function from Eqs. (2) and (4), based on the bare one in Eq. (3), one can consider a renormalized Green function as [4; 74] \[G_{\bf k}(\omega)=\frac{\tilde{\omega}\!+\!\xi_{\bf k}\tau_{3}\!-\!\tilde{ \Delta}_{0}\tau_{2}\sigma_{2}}{\tilde{\omega}^{2}-\xi_{\bf k}^{2}-\tilde{ \Delta}_{0}^{2}}, \tag{5}\] and the self-energy in Eq. (4) becomes \[\Sigma(\omega)=\frac{n_{i}(JS/2)^{2}Z(\omega)}{1-[JSZ(\omega)/2]^{2}}. \tag{6}\] Then, substituting Eqs. (5)-(6) into Eq. (2), one arrives at the renormalization equations revealed by Shiba [4]: \[\frac{\omega}{\Delta_{0}}=\frac{\tilde{\omega}}{\tilde{\Delta}_{0}}\bigg{[}1\! 
-\!\frac{\gamma_{s}}{\Delta_{0}}\frac{\sqrt{1\!-\!(\frac{\tilde{\omega}}{ \tilde{\Delta}_{0}})^{2}}}{\eta^{2}\!-\!(\frac{\tilde{\omega}}{\tilde{\Delta}_{ 0}})^{2}}\bigg{]}, \tag{7}\] and \[\tilde{\Delta}_{0}=\Big{[}1-\frac{1-(JSD\pi/2)^{2}}{2}\Big{(}1-\frac{\omega/ \Delta_{0}}{\tilde{\omega}/\tilde{\Delta}_{0}}\Big{)}\Big{]}\Delta_{0}. \tag{8}\] Here, \(\gamma_{s}=2n_{i}D\pi(JS/2)^{2}/[1+(JSD\pi/2)^{2}]^{2}\) denotes the relaxation rate due to the \(s\)-\(d\) interaction; \(D\) denotes the density of states at the Fermi level in the normal state; the coefficient \(\eta\) is written as \[\eta=\frac{1-(JSD\pi/2)^{2}}{1+(JSD\pi/2)^{2}}, \tag{9}\] which is related to the energies \(\pm\eta\Delta_{0}\) of the pair of the YSR state around single magnetic impurity in a conventional \(s\)-wave superconductor [3; 4; 5]. As the imaginary part of the \(\sigma_{0}\tau_{0}\) component of the retarded Green function corresponds to the spectra function, one can calculate the density of states as \[\rho(\omega)=\mathrm{Im}\mathrm{Tr}[Z(\omega+i0^{+})/4]=-\mathrm{Im}\Big{[} \frac{\pi D\tilde{\omega}}{\sqrt{\tilde{\Delta}_{0}^{2}-\tilde{\omega}^{2}}} \Big{]}. \tag{10}\] Without magnetic impurities (\(\gamma_{s}=0\)) and hence the renormalization according to Eqs. (7) and (8), the density of states \(\rho(\omega)\) from Eq. (10) becomes finite at \(\omega\geq\Delta_{0}\) but vanishes for \(0<\omega<\Delta_{0}\) as it should be, since the continuum of the Bogoliubov quasiparticle lies above the superconducting gap. In the presence of the magnetic impurities, as seen from Eq. (7), there exist the complex solutions of the renormalization \(\tilde{\omega}/\tilde{\Delta}_{0}\) when \(\omega>\Delta_{0}\), suggesting the existence of the interaction between Bogoliubov quasiparticles and magnetic impurities. Particularly, a further numerical calculation of Eq. (7) by Shiba [4] revealed that there exists additional complex solutions of \(\tilde{\omega}/\tilde{\Delta}_{0}\) when \(0<\omega<\Delta_{0}\), and this complex renormalization leads to a finite density of states \(\rho(\omega)\) [Eq. (10)] inside the superconducting gap, suggesting the emergence of the impurity Shiba band. It is also revealed [4] that the density of states of the emerging electron-type impurity Shiba band is centered around \(\eta\Delta_{0}\) with bandwidth proportional to the square root of the impurity concentration, whereas in consideration of the particle-hole symmetry, a corresponding hole-type impurity Shiba band emerges symmetrically at \(\omega<0\). ### Solution of renormalization at low concentration For the formulation of the electromagnetic properties within the Green-function formalism, the numerical results of the impurity Shiba bands are hard to handle for the practical calculation. In this part, we present the analytical solution of Eq. (7) at low impurity concentration with small dimensionless ratio \(r=\gamma_{s}/\Delta_{0}\). By defining \(x=\omega/\Delta_{0}\), we consider a complex solution of the renormalization: \[\tilde{\omega}/\tilde{\Delta}_{0}=x+\delta x+im, \tag{11}\] in which the real parameters \(\delta x\) and \(m\) are small quantities for weak renormalization at low impurity concentration. For the branch of the solutions of the impurity Shiba bands at \(\omega>0\), considering the fact that the narrow impurity Shiba band is away from the edge of the continuum of Bogoliubov quasiparticle, Eq. 
(7) can be written as \[\delta x+im\approx r\frac{(x+\delta x+im)(\sqrt{1-x^{2}}-\frac{imx}{\sqrt{1-x^ {2}}})}{\eta^{2}-(x+\delta x+im)^{2}}. \tag{12}\] From above equation, one can analytically derive the solutions of the renormalization (refer to Appendix A): \[m^{2}=-B(x)+\sqrt{[B(x)]^{2}+rW(x)-(\eta^{2}-x^{2})^{2}}, \tag{13}\] and \[\delta x=\frac{rx\sqrt{1-x^{2}}(\eta^{2}-x^{2}+m^{2})}{(\eta^{2}-x^{2}+m^{2})^ {2}+4m^{2}x^{2}}. \tag{14}\] Here, \(B(x)=\eta^{2}+x^{2}-\frac{r/2}{\sqrt{1-x^{2}}}\) and \(W(x)=\sqrt{1-x^{2}}(\eta^{2}+x^{2})-\frac{x^{2}(\eta^{2}-x^{2})}{\sqrt{1-x^{2}}}\). It is noted from Eq. (13) that the imaginary part of the renormalization has defined solutions only in the regime with \(m^{2}\geq 0\), which limits the energy regime \([E_{b},E_{t}]\) of the emerging density of states inside the superconducting gap and hence the impurity Shiba band. Mathematically, since the condition of \(m^{2}\geq 0\) prefers a low factor \((\eta^{2}-x^{2})^{2}\) in Eq. (13), the solution of \(m\) is centered around \(\eta\Delta_{0}\), whereas the factor \(rW(x)\) determines the bandwidth \(\Delta E=E_{t}-E_{b}\) of the solution, as one requires \(rW(x)\geq(\eta^{2}-x^{2})^{2}\) for condition of \(m^{2}\geq 0\) in Eq. (13). Particularly, from Eq. (13), by solving \(m^{2}=0\) and hence \(rW(x)=(\eta^{2}-x^{2})^{2}\) for \(x>0\), at low concentration of the magnetic impurities, one has the solutions: \[x_{m=0} = [\eta^{2}\pm\sqrt{rW(x)}]^{1/2}\approx[\eta^{2}\pm\sqrt{rW( \eta)}]^{1/2} \tag{15}\] \[= \eta\pm\sqrt{2r\sqrt{1-\eta^{2}}}/2,\] which correspond to the top and bottom edges of the energy spectrum of the impurity Shiba band (i.e., \(E_{t}/\Delta_{0}\) and \(E_{b}/\Delta_{0}\)). Therefore, with \((E_{t}+E_{b})/2=\eta\Delta_{0}\) and \(\Delta E=\Delta_{0}\sqrt{2r\sqrt{1-\eta^{2}}}\), the density of states of the impurity Shiba band is centered around \(\eta\Delta_{0}\) with bandwidth proportional to the square root of the impurity concentration as well as to the factor \((1-\eta^{2})^{1/4}\). Particularly, with \(n_{i}\to 0^{+}\) and hence the vanishing hybridization of the YSR state, for \(m^{2}\geq 0\), one finds the sole solution of \(x=\eta\) in Eq. (13), which corresponds to the YSR state around single magnetic impurity. All these characters from our analytical derivation agree well with the ones from the calculation by Shiba [4]. As for the branch of the solutions of the continuum of the Bogoliubov quasiparticle, which is away from the narrow impurity Shiba bands at low impurity concentration, Eq. (7) can be written as \[\delta x+im\approx r\frac{(x+\delta x+im)\sqrt{1-(x+\delta x+im)^{2}}}{\eta^{2} -x^{2}}, \tag{16}\] and then, considering the weak renormalization, the solutions of the renormalization read (refer to Appendix A) \[m^{2}=\frac{r^{2}x^{2}(x^{2}-1)/[(\eta^{2}-x^{2})^{2}]}{1+r^{2}x^{2}/(\eta^{2}-x^ {2})^{2}}, \tag{17}\] and \[\delta x=\frac{rx\sqrt{|1-x^{2}+m^{2}-2imx|+1-x^{2}+m^{2}}}{\sqrt{2}(\eta^{2}- x^{2})}. \tag{18}\] In this situation, the imaginary part has defined solutions only in the regime with \(x\geq 1\) and hence \(m^{2}\geq 0\) as it should be, since the continuum of the Bogoliubov quasiparticle lies above the superconducting gap. In other regimes (\(x<E_{b}/\Delta_{0}\) and \(E_{t}/\Delta_{0}<x<1\)), one has the vanishing imaginary part (i.e., \(m=0\)), and in this situation, at low impurity concentration with small \(r\), the solution of the renormalization reads \[\delta x=\frac{rx\sqrt{1-x^{2}}}{(\eta^{2}-x^{2})}. 
\tag{19}\] It is noted that the solution of the real part \(\delta x\) of the renormalization in Eqs. (14), (18) and (19) is analytically continuous at the boundaries with \(m=0\), guaranteeing the analytic continuity of the derived solution in the entire energy regime for the practical calculation. Consequently, in contrast to the numerical formulation by Shiba, we obtain the analytical solutions of the complex renormalization \(\tilde{\omega}/\tilde{\Delta}_{0}\) [Eq. (11)], and then, the renormalized gap \(\tilde{\Delta}_{0}\) [Eq. (8)] as well as the renormalized Green function in Eq. (5) and density of states in Eq. (10) can be obtained for the practical formulation. ### Diagrammatic formalism for electromagnetic properties in linear regime In this part, in the presence of the \(s\)-\(d\) interaction from magnetic impurities at finite concentration, we present the diagrammatic formalism for the electromagnetic properties of superconductors in the linear response. Specifically, considering the presence of the vector potential \({\bf A}\) and long-range Coulomb interaction for the formulation of the electromagnetic properties, with the \(s\)-\(d\) interaction between electrons and magnetic impurities, the action of an \(s\)-wave superconductor after the Hubbard-Stratonovich transformation is written as [74; 79] \[S=\int dx\,\Big\{\sum_{s=\uparrow,\downarrow}\cdots\Big\},\] from which the non-equilibrium action \(\delta S\) of Eq. (25), at the second order of the external fields, is obtained, where the first-order correlation due to the direct density-vertex contribution by pump effect reads: \[\chi_{v}=\eta_{f}-\sum_{n=0}^{\infty}i\bar{\rm Tr}[G_{0}\tau_{3}(G_{0}J{\bf S}\cdot\tilde{\mathbf{\sigma}})^{2n}]=\eta_{f}-i\bar{\rm Tr}(G\tau_{3}), \tag{26}\] and the second-order current-current correlation because of the drive effect: \[\chi_{jj}=-i\bar{\rm Tr}\Big\{\frac{k^{2}}{3m}(G_{0}\tau_{0})^{2}[1+2(G_{0}J{\bf S}\cdot\tilde{\mathbf{\sigma}})^{2}+O(J^{n>2})]\Big\}\approx-i\bar{\rm Tr}\Big[\frac{k^{2}}{3m}(G\tau_{0})^{2}\Big], \tag{27}\] as well as the second-order density-density correlation due to the effective field \(\mu_{\rm eff}\): \[\chi_{\rho\rho}=\frac{i\bar{\rm Tr}}{4}\Big\{(G_{0}\tau_{3})^{2}[1+2(G_{0}J{\bf S}\cdot\tilde{\mathbf{\sigma}})^{2}+O(J^{n>2})]\Big\}\approx\frac{i\bar{\rm Tr}}{4}\big[(G\tau_{3})^{2}\big]. \tag{28}\] Here, to consider the influence of the impurity Shiba bands, we have neglected the non-equilibrium vertex and cross-diagram corrections [96; 74; 75] by the \(s\)-\(d\) interaction in both current-current and density-density correlations, and only kept the self-consistent Born corrections (i.e., the self-consistent complex renormalization by the \(s\)-\(d\) interaction) as the renormalized equilibrium Green function in the Dyson equation [Eq. (2)] [74; 75]. As for the direct density-vertex contribution in Eq. 
(26), the \(s\)-\(d\) interaction provides the renormalization to the fermion bubble (i.e., bare Green function), exactly same as the one by the equilibrium self-energy \(\Sigma(\omega)\) in Eq. (2). Moreover, in the second order of the external field, the coupling term between current-vertex related \({\bf p}_{s}\!\cdot\!\hat{\bf p}/m\) and density-vertex-related \(\mu_{\rm eff}\tau_{3}\) vanishes as a consequence of the particle-hole symmetry. Then, with the solved renormalized Green function in Sec. II.2, by calculating the non-equilibrium action \(\delta S\) in Eq. (25), one can derive the electromagnetic properties of superconductor in the linear response to study the influence of the \(s\)-\(d\) interaction from magnetic impurities. ## III Results In this section, to elucidate the macroscopic physical picture behind the emerging impurity Shiba bands, we first derive the equation of motion of the superconducting phase fluctuation in the presence of the magnetic impurities. Then, by formulating the non-equilibrium action in consideration of the stationary magnetic and optical responses, we calculate the diamagnetic property and optical absorption, respectively, and study the influence of the impurity Shiba bands on these properties. ### Superconducting phase fluctuation and physical picture Using the analytically obtained renormalized Green function in Sec. II.2, we derive the superconducting phase fluctuation in the presence of the magnetic impurities. Specifically, by taking the superconducting momentum \({\bf p}_{s}=\nabla\delta\theta/2\), in the center-of-mass momentum space, after the integration over the Hartree field, the non-equilibrium action in Eq. (25) directly becomes the effective one of the phase fluctuation: \[\delta S_{\delta\theta}\!=\!\int\!\frac{dtd{\bf q}}{\varepsilon_{\bf q}}\Big{[} \chi_{\rho\rho}\Big{(}\frac{\partial_{t}\delta\theta}{2}\Big{)}^{2}\!-\!\frac {(1\!+\!2V_{q}\chi_{\rho\rho})\eta_{s}n}{m}\Big{(}\frac{i{\bf q}\delta\theta} {2}\Big{)}^{2}\Big{]}, \tag{29}\] where \(\varepsilon_{\bf q}=1+2V_{\bf q}\chi_{\rho\rho}\) denotes the dielectric function; \(\eta_{s}=(\chi_{v}+\chi_{jj})/(2n)\) stands for the ratio of the superfluid density \(n_{s}\) to the electron density \(n\). To consider the case at nonzero temperature, we preform the formulation within the Matsubara representation [\(\omega\to i\omega_{l}=(2l+1)\pi T\), \(-i\bar{\rm Tr}\to\bar{\rm Tr}\)], and then, the related correlation coefficients are given by \[\chi_{\rho\rho} = -T\sum_{{\bf k}l}\frac{\rm Tr}{4}[G_{\bf k}(i\omega_{l})\tau_{3}G _{\bf k}(i\omega_{l})\tau_{3}]=-T\sum_{{\bf k}l}\frac{\rm Tr}{4} \tag{30}\] \[\times\left[\tau_{3}\partial_{\xi_{\bf k}}G_{\bf k}(i\omega_{l}) \right]=\sum_{l}\frac{2DT\omega_{D}}{\tilde{\omega}_{l}^{2}\!+\!\omega_{D}^{2} \!+\!\tilde{\Delta}_{0}^{2}},\] and \[\eta_{s}n = \tag{31}\] \[=\] \[= \sum_{{\bf k}l}\frac{4k_{F}^{2}T\tilde{\Delta}_{0}^{2}/(3m)}{[(i \tilde{\omega}_{l})^{2}\!-\!\xi_{k}^{2}\!-\!\tilde{\Delta}_{0}^{2}]^{2}}=\sum_ {l}\frac{n\pi T\tilde{\Delta}_{0}^{2}}{(\tilde{\Delta}_{0}^{2}\!+\!\tilde{\omega }_{l}^{2})^{3/2}},\] where \(\omega_{D}\) denotes the Debye frequency. Figure 2: Diagrammatic formalism of the corresponding correlation coefficients in non-equilibrium action in Eq. (25). In the figure, the dashed line with cross represents the \(s\)-\(d\) interaction; the wavy line is associated with the external field; the thin and thick solid lines denote the bare and renormalized Green functions, respectively. 
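As a quick numerical check of Eq. (30), the following minimal sketch (not from the paper) performs the Matsubara sum in the clean limit; the closed form quoted in the comment follows from turning the sum into a frequency integral at \(T\to 0\).

```python
import numpy as np

def chi_rho_rho(T, D=1.0, omega_D=10.0, Delta0=1.0, l_max=200000):
    """Matsubara sum of Eq. (30) in the clean limit (omega_l_tilde -> omega_l,
    Delta_tilde -> Delta0): chi = sum_l 2*D*T*omega_D/(omega_l^2 + omega_D^2 + Delta0^2),
    with omega_l = (2l+1)*pi*T."""
    w = (2 * np.arange(-l_max, l_max) + 1) * np.pi * T
    return np.sum(2 * D * T * omega_D / (w**2 + omega_D**2 + Delta0**2))

# At T -> 0 the sum approaches D*omega_D/sqrt(omega_D^2 + Delta0^2)
D, omega_D, Delta0 = 1.0, 10.0, 1.0
print(chi_rho_rho(T=0.01), D * omega_D / np.sqrt(omega_D**2 + Delta0**2))
```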
Due to the complex renormalization by the _s-d_ interaction from magnetic impurities (the complex solution of the renormalization within the Matsubara-frequency representation refers to Appendix B), from the effective action of the phase fluctuation in Eq. (29), there emerge two separate equations of motion of the phase modes: \[\left[\partial_{t}^{2}+\frac{\text{Re}(\eta_{s})ne^{2}}{\epsilon_{ 0}m}+\frac{\text{Re}(\eta_{s})n}{\text{Re}(\chi_{\rho\rho})m}q^{2}\right]\frac{ \delta\theta}{2}=0, \tag{32}\] \[\left[\partial_{t}^{2}+\frac{\text{Im}(\chi_{\rho\rho}\eta_{s})ne ^{2}}{\text{Im}(\chi_{\rho\rho})\epsilon_{0}m}+\frac{\text{Im}(\eta_{s})n}{ \text{Im}(\chi_{\rho\rho})m}q^{2}\right]\frac{\delta\theta}{2}=0. \tag{33}\] It is noted that both phase modes in Eqs. (32) and (33) exhibit gapless linear energy spectrum at free case (i.e., without long-range Coulomb interaction), and show gapped energy spectrum at long-wavelength limit after the coupling to the long-range Coulomb interaction as a consequence of the Anderson-Higgs mechanism [76]. Hence, both become inactive, and the original global and rigid phase coherence for achieving robust superconductivity in conventional superconductors [97] remains even in the presence of the magnetic impurities. Clearly, the phase mode in Eq. (32) corresponds to the conventional Nambu-Goldstone mode [77; 78; 79; 80; 81], as this equation of motion in the case without magnetic impurities exactly recovers the one [78; 81; 97] as in the BCS case, whereas the one in Eq. (33) emerges totally due to the complex renormalization by _s-d_ interaction from magnetic impurities. According to the Goldstone theorem [82; 83], as the existence of the collective gapless phase mode is a direct consequence of the formation of the macroscopic superconducting state due to the spontaneous breaking of the continuous \(U(1)\) symmetry [77; 78; 79; 80; 81], the emerging two phase modes here suggests that there exist two types of states of the Cooper pairs, forming the ground state in superconductors with magnetic impurities through the direct product. Specifically, with magnetic impurities, a small part of the Cooper pairs become localized around individual magnetic impurities due to the quantum correlation by the _s-d_ interaction, acting as Josephson islands and hence leading to the phase mode in Eq. (33), similar to the case of the granular superconductors [84; 85]. However, within the random phase approximation that takes random spatial distribution [4; 75], the emerging Josephson islands by localized Cooper pairs does not manifest themselves explicitly. The remaining part of the Cooper pairs is still conventional free type, resulting in the phase mode in Eq. (32). The impurity Shiba bands and Bogoliubov quasiparticle continuum then correspond to the excitations of the ground state of the localized and free Cooper pairs, respectively. Based on this picture with localized and conventional free Cooper pairs, one can understand the properties of the single-particle energy spectra in superconductors with magnetic impurities. On one hand, due to the small proportion of the localized Cooper pair compared with the free ones, the YSR state around single magnetic impurity exhibits a small (i.e., in-gap) excitation energy \(\eta\Delta\) associated with the breaking of the localized Cooper pair, whereas the enhancement of the exchange interaction profiting the pair breaking suppresses the excitation energy \(\eta\Delta_{0}\). 
The hybridization of the YSR states in ensembles of magnetic impurities at finite concentration then leads to the impurity Shiba band [4], showing the finite density of states centered around \(\eta\Delta_{0}\) with bandwidth proportional to the square root of the impurity density. On the other hand, with the increase of the magnetic impurities, the loss of the free Cooper pairs leads to a suppressed energy gap \(\Delta_{0}\) of the Bogoliubov quasiparticle as revealed by Shiba by self-consistently solving the gap equation [4]. Moreover, one can also understand the similarities in the behaviors of the phase modes in Eqs. (32) and (33), as the revealed phase mode on Josephson islands in describing the granular superconductors [84] exhibits similar behavior of the Nambu-Goldstone phase mode [77; 78; 79; 80; 81]. Particularly, it is noted that for the phase mode on the state of localized Cooper pair in Eq. (33), the energy gap \(\sqrt{\frac{\text{Im}(\chi_{\rho\rho}\eta_{s})ne^{2}}{\text{Im}(\chi_{\rho\rho} )\epsilon_{0}m}}=\sqrt{\frac{\text{Im}(\chi_{\rho\rho})Re(\eta_{s})+\text{Re }(\chi_{\rho\rho})\text{Im}(\eta_{s})ne^{2}}{\text{Im}(\chi_{\rho\rho}) \epsilon_{0}m}}\) at long-wavelength limit involves not only the contribution of the superfluid density \(\text{Im}(\eta_{s})n\) in the localized state, but also the one \(\text{Re}(\eta_{s})n\) in the free state. This is because that the electric long-range Coulomb interaction between the free and localized states is inevitable. Furthermore, we show in the following sections that the proposed picture of the ground state with free and localized Cooper pairs can well capture the obtained electromagnetic properties in conventional superconductors with magnetic impurities. ### Diamagnetic property In this part, by considering a stationary and transverse vector potential, we derive the diamagnetic response of conventional superconductors with _s-d_ interaction from magnetic impurities. In this situation, one has \(\mu_{\text{eff}}=0\) and hence \(\mu_{H}=0\) as well as \(\mathbf{p}_{s}=-e\mathbf{A}\) in Eq. (25). The generated diamagnetic supercurrent from the non-equilibrium action in Eq. (25) then reads \[\mathbf{j}_{s}=-e\partial_{\mathbf{p}_{s}}\delta S=-\frac{\chi_{v}+\chi_{jj}} {2}\frac{e^{2}\mathbf{A}}{m}=-\frac{\eta_{s}ne^{2}\mathbf{A}}{m}, \tag{34}\] with the ratio of the superfluid density to the electron density from Eq. (31) written as \[\eta_{s}=\sum_{l}\frac{\pi T\tilde{\Delta}_{0}^{2}}{(\tilde{\Delta}_{0}^{2}+ \tilde{\omega}_{l}^{2})^{3/2}}. \tag{35}\] As a self-consistent check, with the vanishing renormalization (i.e., \(\tilde{\omega}\rightarrow\omega\) and \(\tilde{\Delta}_{0}\rightarrow\Delta_{0}\)) in the absence of the magnetic impurities, one has \(\eta_{s}=1\) at \(T=0\) K and \(\eta_{s}=\frac{7\Delta_{0}^{2}\zeta(3)}{4(\pi T)^{2}}\) near \(T_{c}\) from Eq. (35), which are exactly same as the established superfluid density in the literature by various approaches [74; 97; 98; 99]. Moreover, with the magnetic impurities, at the case above \(T_{c}\), one has \(\Delta_{0}=0\) and hence \(\tilde{\Delta}_{0}=0\) from Eq. (8), and then, the supercurrent in Eq. (35) vanishes, i.e., the drive current exactly cancels the pump current one in normal metals as it should be [74], since the stationary magnetic vector potential can not drive the normal-state current even with the magnetic impurities. 
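The two limits quoted above, \(\eta_{s}\to 1\) at \(T=0\) and \(\eta_{s}\simeq 7\Delta_{0}^{2}\zeta(3)/(4\pi^{2}T^{2})\) when \(\Delta_{0}\ll T\), can be verified by summing Eq. (35) without renormalization; the last lines also illustrate the penetration depth and oscillation length of Eqs. (36)-(38) below, using a hypothetical complex \(\eta_{s}\) whose small imaginary part stands in for the effect of the magnetic impurities. A minimal sketch, not from the original paper:

```python
import numpy as np
from scipy.special import zeta

def eta_s_clean(T, Delta0=1.0, l_max=100000):
    """Matsubara sum of Eq. (35) with omega_l_tilde -> omega_l, Delta_tilde -> Delta0."""
    w = (2 * np.arange(-l_max, l_max) + 1) * np.pi * T
    return np.sum(np.pi * T * Delta0**2 / (Delta0**2 + w**2) ** 1.5)

Delta0 = 1.0
print(eta_s_clean(0.01, Delta0))                       # -> 1 in the low-temperature limit
T = 5.0                                                # Delta0 << T regime (near T_c)
print(eta_s_clean(T, Delta0), 7 * Delta0**2 * zeta(3) / (4 * (np.pi * T) ** 2))

# Eqs. (36)-(38): 1/lambda_d + i/lambda_o = sqrt(eta_s)/lambda_c (principal root)
eta_s, lambda_c = 0.95 + 0.02j, 1.0                    # hypothetical complex superfluid fraction
k = np.sqrt(eta_s) / lambda_c
print("lambda_d:", 1 / k.real, "approx:", lambda_c / np.sqrt(eta_s.real))
print("lambda_o:", 1 / k.imag, "approx:", 2 * lambda_c * np.sqrt(eta_s.real) / eta_s.imag)
```

The exact root of Eq. (36) and the low-concentration approximations (37)-(38) agree to the leading order in the imaginary part, and the vector potential then decays as \(A(z)\sim e^{-z/\lambda_{d}}\cos(z/\lambda_{o})\).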
With the magnetic impurities in superconductors, due to the complex renormalization by the _s-d_ interaction, there emerges an imaginary part in the superfluid-density ratio \(\eta_{s}\), i.e., the presence of the magnetic impurities leads to a finite imaginary part in the generated supercurrent. This imaginary part can be understood as follows, based on the proposed picture of the ground state with free and localized Cooper pairs in Sec. III.1. Specifically, between the states of the free and localized Cooper pairs, the wave-vectors exhibit a \(\pi/2\)-phase difference, and hence, the induced center-of-mass momenta by the vector potential, which are related to the generation of the supercurrent, also have a \(\pi/2\)-phase difference. Therefore, in comparison with the state of the free Cooper pairs that contributes to the real part in the supercurrent, the state of the localized Cooper pairs leads to an imaginary part in the supercurrent. It is noted that the real part in the supercurrent guarantees the diamagnetic effect in the magnetic response, whereas in contrast to the conventional exponential decay at the case without magnetic impurities [54], the induced imaginary part in the supercurrent due to the magnetic impurities is incapable of causing the relaxation of the supercurrent, but leads to an oscillation in the decay of the vector potential from the surface to the interior of superconductors in the diamagnetic response, similar to the Friedel oscillation in normal metals due to the local modulation of the charge density by a defect [75]. Therefore, we refer to this oscillation as superconducting Friedel oscillation. Particularly, from Eq. (34), together with the Maxwell equation, one can obtain the equation of the vector potential, and then, solve the penetration depth \(\lambda_{d}\) as well as the characteristic length \(\lambda_{o}\) of the oscillation through the following equation: \[\left(\frac{1}{\lambda_{d}}+\frac{i}{\lambda_{o}}\right)^{2}=\frac{4\pi\eta_{s}ne^{2}}{m}. \tag{36}\] At low concentration of the magnetic impurities, one has \[\lambda_{d} = \lambda_{c}/\sqrt{\text{Re}\eta_{s}}, \tag{37}\] \[\lambda_{o} = 2\lambda_{c}\sqrt{\text{Re}\eta_{s}}/\text{Im}\eta_{s}, \tag{38}\] where \(\lambda_{c}=\sqrt{m/(4\pi ne^{2})}\) denotes the London clean-limit penetration depth at zero temperature. The oscillatory decay of the vector potential provides a feasible detection scheme for the involved _s-d_ interaction and, in particular, the impurity Shiba bands in superconductors with magnetic impurities, via the muon spin relaxation (\(\mu\)SR) measurements [100]. It is also noted that the oscillatory decay in superconductors with magnetic impurities has a totally different origin from the observed one in the superconducting proximity structure with triplet Cooper pairs induced by magnetism [100]. In that case, the emerging oscillation comes from the paramagnetic Meissner effect [i.e., \(\text{Re}(\eta_{s})<0\), directly leading to an imaginary \(\lambda_{d}\) in Eq. (37)] by triplet Cooper pairs [101], and the decay is due to the suppressed gap during the diffusion in the proximity structure [102]. ### Optical absorption We next derive the optical absorption of conventional superconductors with magnetic impurities to present a more determined detection scheme for the impurity Shiba bands. 
Following the Mattis-Bardeen theory [64; 65], we also consider a conventional _s_-wave superconductor lying in the anomalous-skin-effect region with a mean free path \(L\) larger compared with the skin depth \(\lambda\)[55; 56], where the excited current at one space point depends not only on the electric field at that point but also on the ones nearby. This non-local effect in fact provides an effective dipole in the optical response, leading to the emergence of the optical absorption. The excited current in this situation reads [64; 65]: \[\mathbf{j}(\mathbf{r})=\int\frac{\mathbf{R}[\mathbf{R}\cdot\mathbf{A}( \mathbf{r}^{\prime})]I(\Omega,\mathbf{R})e^{-R/L}}{R^{4}}d\mathbf{r}^{\prime}, \tag{39}\] where \(\mathbf{R}=\mathbf{r}-\mathbf{r}^{\prime}\); the normalized linear-response coefficient \(I(\Omega,\mathbf{R})=\Pi(\Omega,\mathbf{R})/(k_{F}^{2}/3)\) with \(\Pi(\Omega,\mathbf{R})\) denoting the linear-response coefficient and \(\Omega\) representing the optical frequency. At dirty limit with a larger coherence length \(\xi\) compared with the mean free path \(L\) (i.e., \(\xi>L>\lambda\)), by the mean value theorem of integrals, one has \[\mathbf{j}(\mathbf{r}){\approx}I(\Omega,\mathbf{R}=0)\mathbf{A}(\mathbf{r}){ \int}\frac{e^{-R/L}}{3R^{2}}d\mathbf{r}^{\prime}, \tag{40}\] which leads to the optical conductivity: \[\sigma_{s}=\sigma_{1s}+i\sigma_{2s}=\frac{4\pi L}{3i\Omega}\sum_{\mathbf{q}}I( \Omega,\mathbf{q}). \tag{41}\] The artificial scheme of taking the external optical frequency as imaginary bosonic Matsubara frequency within the Matsubara representation makes it hard to directly distinguish the influence (complex renormalization) of the _s-d_ interaction. We therefore perform the formulation within the Keldysh formalism [96]. Specifically, it is noted that the direct density-vertex contribution by pump effect in the non-equilibrium action as an unphysical non-gauge-invariant current makes no contribution to the optical absorption, and only the current-current correlation contributes to \(\sigma_{1s}\) as a connected diagram in Fig. 2. Therefore, within the Keldysh space, substituting Eq. (27) into Eq. (41), one has \[\sigma_{1s}=-\mbox{Re}\Big{[}\frac{2e^{2}\pi L}{i\Omega mk_{F}^{2}} \sum_{\bf q}\chi_{jj}(\Omega,{\bf q})\Big{]}\] \[= \frac{2e^{2}\pi L}{3\Omega m^{2}}\!\int\!\frac{dE}{2\pi}\!\sum_{ \bf kq}\frac{\mbox{TrRe}}{4}\{[\hat{G}_{{\bf k}^{+}}(E^{+})\hat{G}_{\bf k}(E)]_ {K}\}, \tag{42}\] where \({\bf k}^{+}={\bf k}+{\bf q}\) and \(E^{+}=E+\Omega\); the subscript "K" denotes the Keldysh component; the Green function matrices \(\hat{G}_{\bf k}(E)\) is defined as [96] \[\hat{G}_{\bf k}(E)=\left(\begin{array}{cc}G_{\bf k}^{R}&G_{\bf k}^{K}\\ 0&G_{\bf k}^{A}\end{array}\right), \tag{43}\] and it is established in the literature [95; 96; 103] that the retarded (R), advanced (A) and Keldysh (K) Green functions can be obtained by \(G_{\bf k}^{R}(E)=G_{\bf k}(E+i0^{+})\), \(G_{\bf k}^{A}(E)=G_{\bf k}(E-i0^{+})\) and \(G_{\bf k}^{K}(E)=h(E)[G_{\bf k}^{R}(E)-G_{\bf k}^{A}(E)]\), respectively, with the distribution function \(h(E)=\tanh(\beta E/2)\). Here, \(\beta=1/(k_{B}T)\). 
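A small bookkeeping sketch of the Keldysh structure used in Eqs. (42)-(43), with scalar placeholders in place of the full \({\bf k}\)-resolved Green functions (illustrative only; taking \(G^{A}\) as the complex conjugate of \(G^{R}\) is an assumption valid for scalar placeholders):

```python
import numpy as np

def keldysh_matrix(G_R, h):
    """Upper-triangular Keldysh-space matrix of Eq. (43) for a scalar placeholder:
    G^A = conj(G^R), G^K = h*(G^R - G^A), with h(E) = tanh(E/(2*k_B*T))."""
    G_A = np.conj(G_R)
    return np.array([[G_R, h * (G_R - G_A)], [0.0, G_A]])

T, E, Omega = 0.1, 0.3, 0.5
h = lambda e: np.tanh(e / (2 * T))
A = keldysh_matrix(0.2 - 0.05j, h(E + Omega))   # placeholder for G^R(E + Omega)
B = keldysh_matrix(0.4 - 0.10j, h(E))           # placeholder for G^R(E)
# Keldysh (upper-right) component of the product, the object traced in Eq. (42):
# (AB)_K = A_R B_K + A_K B_A follows automatically from the triangular structure.
print((A @ B)[0, 1])
```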
Using the facts that \(\mbox{Re}G_{\bf k}^{A}(E)=\mbox{Re}G_{\bf k}^{R}(E)\) and \(\mbox{Im}G_{\bf k}^{A}(E)=-\mbox{Im}G_{\bf k}^{R}(E)\), the optical absorption becomes \[\sigma_{1s}=\frac{2e^{2}\pi L}{3\Omega m^{2}}\int\frac{dE}{2\pi}\sum_{\bf kq}\mbox{Tr}[\mbox{Im}G_{\bf k^{+}}^{R}(E^{+})\mbox{Im}G_{\bf k}^{R}(E)]\times\frac{h(E^{+})-h(E)}{2}, \tag{44}\] and through the replacement \(\sum_{\bf kq}\rightarrow\sum_{\bf kk^{+}}\), one obtains \[\sigma_{1s}=\sigma_{n}\int dE\frac{f(E)-f(E^{+})}{\Omega}\frac{m(E)\rho(E^{+})\rho(E)}{\pi^{2}D^{2}}, \tag{45}\] where \(\sigma_{n}=\frac{ne^{2}\tau}{m}\) represents the electrical conductivity in normal metals with \(\tau\) being the momentum-relaxation time; \(m(E)=1+\tilde{\Delta}_{0}(E)\tilde{\Delta}_{0}(E^{+})/(\bar{E}\bar{E}^{+})\) and \(m(E)L\) behaves as an effective dipole mediated by the scattering; \(f(E)\) denotes the Fermi distribution function. As a self-consistent check, with the vanishing renormalization in the absence of the magnetic impurities, the density of states \(\rho(E)\) becomes finite only when the energy \(|E|\) lies above the superconducting gap, and one has \(\rho(E)=\pi D\frac{E\,{\rm sgn}(E)}{\sqrt{E^{2}-\Delta_{0}^{2}}}\theta(|E|-\Delta_{0})\) from Eq. (10), with \(\theta(x)\) being the step function. Then, substituting \(\rho(E)\) into Eq. (45), the optical absorption becomes \[\frac{\sigma_{1s}}{\sigma_{n}}=\Big[\Big(\int_{\Delta_{0}}^{\infty}+\int_{-\infty}^{-\Delta_{0}-\Omega}\Big)-\theta(\Omega-2\Delta_{0})\int_{\Delta_{0}-\Omega}^{-\Delta_{0}}\Big]\frac{f(E)-f(E^{+})}{\Omega}\frac{(EE^{+}+\Delta_{0}^{2})\,dE}{\sqrt{E^{2}-\Delta_{0}^{2}}\sqrt{(E^{+})^{2}-\Delta_{0}^{2}}} \tag{46}\] \[=\Big[2\int_{\Delta_{0}}^{\infty}\frac{f(E)-f(E^{+})}{\Omega}-\theta(\Omega-2\Delta_{0})\int_{\Delta_{0}-\Omega}^{-\Delta_{0}}\frac{1-2f(E^{+})}{\Omega}\Big]\frac{(EE^{+}+\Delta_{0}^{2})\,dE}{\sqrt{E^{2}-\Delta_{0}^{2}}\sqrt{(E^{+})^{2}-\Delta_{0}^{2}}}, \tag{47}\] which exactly recovers the one from the Mattis-Bardeen theory [64; 65]. It is noted that the first and second terms in Eq. (46) correspond to the intraband and interband transitions of the Bogoliubov quasiparticles, respectively. As mentioned in the introduction, at \(T=0\) K with only the contribution of the interband transition, the optical absorption \(\sigma_{1s}(\Omega)\) vanishes when \(\Omega<2\Delta_{0}\) but becomes finite above \(2\Delta_{0}\), leading to a crossover point at \(2\Delta_{0}\). At finite temperature, an additional quasiparticle contribution appears below \(2\Delta_{0}\) due to the intraband transition. As for the case with magnetic impurities at finite concentration, according to the proposed picture of the ground state with free and localized Cooper pairs in Sec. III.1, the impurity Shiba bands and Bogoliubov quasiparticle continuum correspond to the excitations of the ground states of the localized and free Cooper pairs, respectively, and hence, are similar to each other. Then, based on the revealed inter- and intraband transitions of the Bogoliubov quasiparticle by the Mattis-Bardeen theory [64], one expects the inter- and intraband transitions of the impurity Shiba bands as well as all interband transitions between Bogoliubov quasiparticle and impurity Shiba bands. 
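To make these transitions concrete, the zero-temperature interband term of Eq. (47) can be integrated numerically, and the positions of the Shiba-related features follow from the band edges of Eq. (15). A minimal sketch with illustrative parameters (the inverse-square-root endpoint singularities are integrable, and `scipy.integrate.quad` handles them directly, possibly with accuracy warnings):

```python
import numpy as np
from scipy.integrate import quad

def mattis_bardeen_T0(Omega, Delta0=1.0):
    """Interband (second) term of Eq. (47) at T = 0, after substituting E = -u:
    sigma_1s/sigma_n = (1/Omega) * int_{Delta0}^{Omega-Delta0} du
      [u*(Omega-u) - Delta0^2] / [sqrt(u^2 - Delta0^2) * sqrt((Omega-u)^2 - Delta0^2)]."""
    if Omega <= 2.0 * Delta0:
        return 0.0
    f = lambda u: (u * (Omega - u) - Delta0**2) / (
        np.sqrt(u**2 - Delta0**2) * np.sqrt((Omega - u) ** 2 - Delta0**2))
    val, _ = quad(f, Delta0, Omega - Delta0, limit=400)
    return val / Omega

def shiba_feature_positions(Delta0=1.0, eta=0.6, r=0.02):
    """Characteristic frequencies of the Shiba-related transitions (cf. Fig. 1),
    using the band edges of Eq. (15): E_{t,b} = Delta0*(eta +/- sqrt(2*r*sqrt(1-eta^2))/2)."""
    half = np.sqrt(2.0 * r * np.sqrt(1.0 - eta**2)) / 2.0
    E_t, E_b = Delta0 * (eta + half), Delta0 * (eta - half)
    return {"Shiba-Shiba resonance window": (2 * E_b, 2 * E_t),
            "Shiba-Bogoliubov crossover": Delta0 + E_b,
            "Bogoliubov-Bogoliubov crossover": 2 * Delta0,
            "finite-T crossover": Delta0 - E_t}

for Om in (1.5, 2.5, 4.0, 8.0):
    print(f"Omega = {Om:.1f} Delta_0 : sigma_1s/sigma_n = {mattis_bardeen_T0(Om):.3f}")
print(shiba_feature_positions())
```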
Specifically, with magnetic impurities, due to the emergence of the impurity Shiba bands inside the superconducting gap, the density of states becomes finite not only above the superconducting gap but also in the Shiba-band regime of \(E_{b}<|E|<E_{t}\). In this situation, considering the case at zero temperature with only the interband transition, the optical absorption becomes \[\frac{\sigma_{1s}}{\sigma_{n}}=\Big\{\theta(\Omega-2\Delta_{0})\int_{\Delta_{0}-\Omega}^{-\Delta_{0}}+\theta(\Omega-\Delta_{0}-E_{b})\int_{E_{b}-\Omega}^{\min(-\Delta_{0},E_{t}-\Omega)}+\theta(\Omega-E_{b}-\Delta_{0})\int_{\max(\Delta_{0}-\Omega,-E_{t})}^{-E_{b}}+\theta(2E_{t}-\Omega)\theta(\Omega-2E_{b})\int_{\max(-E_{t},E_{b}-\Omega)}^{\min(-E_{b},E_{t}-\Omega)}\Big\}\frac{m(E)\rho(E^{+})\rho(E)}{\Omega\pi^{2}D^{2}}dE. \tag{48}\] It is noted in the above equation that the first term corresponds to the interband transition (channel I in Fig. 1) from the Bogoliubov quasiholes to quasielectrons, which is finite at \(\Omega>2\Delta_{0}\), leading to a crossover at \(\Omega=2\Delta_{0}\). The second term denotes the interband transition (channel III in Fig. 1) from the Bogoliubov quasiholes to the electron-type impurity Shiba band, and the third one represents the interband transition (channel IV in Fig. 1) from the hole-type impurity Shiba band to the Bogoliubov quasielectrons. The second and third terms are symmetric and hence both are finite at \(\Omega>\Delta_{0}+E_{b}\), causing a crossover at \(\Omega=\Delta_{0}+E_{b}\). The fourth term stands for the interband transition (channel II in Fig. 1) from the hole- to electron-type impurity Shiba bands, which is finite at \(2E_{t}>\Omega>2E_{b}\) and hence leads to a resonance peak from \(\Omega=2E_{b}\) to \(\Omega=2E_{t}\), centered around \(2\eta\Delta_{0}\). Consequently, in addition to the conventional interband transition of Bogoliubov quasiparticles as revealed by Mattis-Bardeen theory [64], due to the emergence of the impurity Shiba bands by the \(s\)-\(d\) interaction from the magnetic impurities, at zero temperature, there also exist the interband transitions (from hole type to electron type) between the impurity Shiba bands as well as between Bogoliubov quasiparticle and impurity Shiba bands, causing a resonance peak centered around \(2\eta\Delta_{0}\) and a crossover at \(\Delta_{0}+E_{b}\) in the optical absorption, respectively, providing clear features for the detection of the impurity Shiba bands in the optical spectroscopy. With the increase of temperature from zero, there gradually emerge the intraband transitions inside the Bogoliubov quasiparticle continuum and inside the impurity Shiba band. Interestingly, two additional interband transitions also emerge at nonzero temperature: from the electron-type impurity Shiba band to the Bogoliubov quasielectrons (channel V in Fig. 1); from the Bogoliubov quasiholes to the hole-type impurity Shiba band (channel VI in Fig. 1), leading to the contribution: \[\frac{\sigma_{1s}}{\sigma_{n}}\Big|_{hB\to hS}^{eS\to eB}=\theta(\Omega+E_{t}-\Delta_{0})\Big[\int_{\max(\Delta_{0}-\Omega,E_{b})}^{E_{t}}+\int_{-E_{t}-\Omega}^{\min(-\Delta_{0},-E_{b}-\Omega)}\Big]m(E)\frac{[f(E)-f(E^{+})]\rho(E^{+})\rho(E)}{\Omega\pi^{2}D^{2}}dE. 
\tag{49}\] This contribution becomes finite at \(\Omega>\Delta_{0}-E_{t}\), and hence, a crossover at \(\Omega=\Delta_{0}-E_{t}\) in the optical absorption gradually emerges with the increase of temperature from \(T=0\) K, also providing a clear feature for the detection of the impurity Shiba bands in the optical spectroscopy. ## IV Summary and discussion In summary, in conventional superconductors with magnetic impurities, via analytically solving the renormalized Green function by the \(s\)-\(d\) interaction at low impurity concentration, we have derived the macroscopic superconducting phase fluctuation, and found that there exist two superconducting phase modes. Consequently, by the Goldstone theorem [82; 83] of the collective phase mode in superconductors [77; 78; 79; 80; 81], the two phase modes suggest that the ground state consists of the conventional free Cooper pairs together with a small part of Cooper pairs localized around the magnetic impurities by the \(s\)-\(d\) interaction, acting as Josephson islands, with the impurity Shiba bands corresponding to the excitations of the localized-pair state. In the diamagnetic response, the localized-pair state contributes an imaginary part to the supercurrent and hence the superconducting Friedel oscillation in the decay of the vector potential, whereas in the optical absorption the transitions involving the impurity Shiba bands lead to a resonance peak centered around \(2\eta\Delta_{0}\) and crossovers at \(\Delta_{0}+E_{b}\) and \(\Delta_{0}-E_{t}\), providing feasible schemes for the experimental detection. ## Appendix A Derivation of solution of the renormalization within real-frequency representation In this part, we present the derivation of the 
solution of the renormalization of \(\tilde{\omega}/\tilde{\Delta}\). At low concentration of magnetic impurities, the narrow impurity Shiba band is away from the edge of the Bogoliubov quasiparticle continuum. For the branch of the solutions of the impurity Shiba bands at \(\omega>0\), the real and imaginary parts of Eq. (12) are written as \[\delta x = r\frac{[(x+\delta x)\sqrt{1-x^{2}}+m^{2}x/\sqrt{1-x^{2}}][\eta^{ 2}-(x+\delta x)^{2}+m^{2}]+2m^{2}(x+\delta x)^{2}x/\sqrt{1-x^{2}}-2m^{2}(x+ \delta x)\sqrt{1-x^{2}}}{[\eta^{2}-(x+\delta x)^{2}+m^{2}]^{2}+4m^{2}(x+ \delta x)^{2}}, \tag{14}\] \[m = mr\frac{[\sqrt{1-x^{2}}-(x+\delta x)x/\sqrt{1-x^{2}}][\eta^{2}-( x+\delta x)^{2}+m^{2}]+2(x+\delta x)^{2}\sqrt{1-x^{2}}+2m^{2}(x+\delta x)x/ \sqrt{1-x^{2}}}{[\eta^{2}-(x+\delta x)^{2}+m^{2}]^{2}+4m^{2}(x+\delta x)^{2}}. \tag{15}\] Considering the fact that the real part \(\delta x\) of the renormalization is a small quantity compared to \(x\), keeping the lowest order of \(r\), the solution of \(\delta x\) is directly given by Eq. (14). Moreover, one can also neglect \(\delta x\) in the equation of the imaginary part, and then, Eq. (15) becomes \[(\eta^{2}-x^{2}+m^{2})^{2}+4m^{2}x^{2}=r\sqrt{1-x^{2}}(\eta^{2}-x^{2}+m^{2})- rx^{2}/\sqrt{1-x^{2}}(\eta^{2}-x^{2}+m^{2})+2rx^{2}\sqrt{1-x^{2}}+2rm^{2}x^{2}/ \sqrt{1-x^{2}}, \tag{16}\] which can be re-written as \[m^{4}+2B(x)m^{2}+(\eta^{2}-x^{2})^{2}-rW(x)=0, \tag{17}\] leading to the solution in Eq. (13). Similarly, for the branch of the solutions of the continuum of the Bogoliubov quasiparticle, considering the fact that the real part \(\delta x\) of the renormalization is a small quantity compared to \(x\), the real part of Eq. (16) directly becomes the solution of \(\delta x\) in Eq. (18), whereas the imaginary part reads \[m=\frac{rx}{\eta^{2}-x^{2}}\sqrt{\frac{\sqrt{(x^{2}-1-m^{2})^{2}+4m^{2}x^{2}} +x^{2}-1-m^{2}}{2}}, \tag{18}\] and can be re-written as \[\Big{[}1+\frac{r^{2}x^{2}}{2(\eta^{2}-x^{2})^{2}}\Big{]}m^{2}-\frac{r^{2}x^{ 2}(x^{2}-1)}{2(\eta^{2}-x^{2})^{2}}=\frac{r^{2}x^{2}}{2(\eta^{2}-x^{2})^{2}} \sqrt{(x^{2}-1-m^{2})^{2}+4m^{2}x^{2}}\approx\frac{r^{2}x^{2}(x^{2}-1)}{2(\eta ^{2}-x^{2})^{2}}, \tag{19}\] where we have kept the lowest order of \(r\). Consequently, the solution of \(m\) in Eq. (17) is obtained. ## Appendix B Derivation of solution of the renormalization within Matsubara-frequency representation In this part, we present the derivation of the solution of the renormalization within the Matsubara-frequency representation. For Matsubara frequency \(\omega_{l}\), by defining \(x_{l}=\omega_{l}/\Delta_{0}\), we consider a complex solution of the renormalization: \[\tilde{\omega}_{l}/\tilde{\Delta}_{0}=x_{l}+\delta x_{l}+im_{l}, \tag{20}\] in which the parameters \(\delta x_{l}\) and \(m_{l}\) are small quantities for weak renormalization at low impurity concentration. It is noted that with \(\omega\to i\omega_{l}=(2l+1)\pi T\), Eq. (8) is unchanged, whereas Eq. (7) becomes different and is written as \[\frac{\omega_{l}}{\Delta_{0}}=\frac{\tilde{\omega}_{l}}{\tilde{\Delta}_{0}} \bigg{[}1-r\frac{\sqrt{1+(\frac{\tilde{\omega}_{l}}{\tilde{\Delta}_{0}})^{2}} }{\eta^{2}+(\frac{\tilde{\omega}_{l}}{\tilde{\Delta}_{0}})^{2}}\bigg{]}, \tag{21}\] which can be re-written as \[\delta x_{l}+im_{l}=r\frac{(x_{l}+\delta x_{l}+im_{l})\sqrt{1+(x_{l}+\delta x _{l}+im_{l})^{2}}}{\eta^{2}+(x_{l}+\delta x_{l}+im_{l})^{2}}. 
\tag{22}\] Considering the facts that \(\delta x_{l}\) is a small quantity compared to \(x_{l}\) and \(m_{l}^{2}\ll 1+x_{l}^{2}\) for weak renormalization at low impurity concentration, one approximately has \[\delta x_{l}+im_{l}=r(x_{l}+im_{l})\frac{\sqrt{1+x_{l}^{2}}+im_{l}x_{l}/\sqrt{1+x_{l}^{2}}}{[\eta^{2}+(x_{l}+im_{l})^{2}]}, \tag{10}\] which can be separated into two equations: \[(\eta^{2}+x_{l}^{2}-m_{l}^{2})\delta x_{l}-2x_{l}m_{l}^{2}=rx_{l}\sqrt{1+x_{l}^{2}}-rm_{l}^{2}x_{l}\sqrt{1+x_{l}^{2}}, \tag{11}\] \[2x_{l}\delta x_{l}=r\sqrt{1+x_{l}^{2}}+rx_{l}^{2}/\sqrt{1+x_{l}^{2}}+m_{l}^{2}-\eta^{2}-x_{l}^{2}. \tag{12}\] By keeping the lowest two orders of \(r\) and solving Eqs. (11) and (12), at \(\omega_{l}>0\), one finds the solutions: \[2x_{l}\delta x_{l}=\frac{r/2}{\sqrt{1+x_{l}^{2}}}+\frac{2rx_{l}^{2}}{\sqrt{1+x_{l}^{2}}}-2x_{l}^{2}+2ix_{l}\eta\Big(1-\frac{r/4}{\sqrt{1+x_{l}^{2}}}\Big) \tag{13}\] \[m_{l}=-\eta\Big(1-\frac{r}{4}\frac{\sqrt{1+x_{l}^{2}}}{\eta^{2}+x_{l}^{2}}\Big)-ix_{l}\Big[1+\frac{r}{4}\frac{1-\eta^{2}}{\sqrt{1+x_{l}^{2}}(\eta^{2}+x_{l}^{2})}\Big] \tag{14}\] and hence, \[\Big(\frac{\tilde{\omega}_{l}}{\tilde{\Delta}_{0}}\Big)^{2}=x_{l}^{2}+2x_{l}(\delta x_{l}+im_{l})=\frac{r}{2}\Big[\frac{1+4x_{l}^{2}}{\sqrt{1+x_{l}^{2}}}+\frac{x_{l}^{2}(1-\eta^{2})}{\sqrt{1+x_{l}^{2}}(\eta^{2}+x_{l}^{2})}\Big]+\frac{i\eta r}{2}\frac{x_{l}}{\eta^{2}+x_{l}^{2}}\frac{1-\eta^{2}}{\sqrt{1+x_{l}^{2}}}. \tag{15}\] Consequently, differing from the solution in the real-frequency representation as obtained in Sec. A, the solution of the renormalization by the \(s\)-\(d\) interaction in the Matsubara-frequency representation is always complex. As a self-consistent check, in the case without magnetic impurities (\(r\to 0\)), the renormalization in Eq. (15) vanishes as it should. Moreover, due to the factor \(x_{l}/(\eta^{2}+x_{l}^{2})\), the imaginary part of the renormalization in Eq. (15), which is related to the contribution from the state of the localized Cooper pairs, achieves its maximum at \(x_{l}=\eta\). Consequently, as the minimum of \(x_{l}\) is \(\pi T/\Delta_{0}\), the increase of temperature at \(\pi T>\eta\Delta_{0}\) leads to the suppression of the imaginary part of the renormalization and hence of the imaginary part of the superfluid-density ratio \(\eta_{s}\) [Eq. (31)], suggesting the breaking of the localized Cooper pairs by the excitation of the YSR state. By further increasing temperature until \(x_{l}\gg\eta\) at all \(l\), the imaginary part of the renormalization nearly vanishes due to the vanishing localized Cooper pairs.
2310.03436
Generalized unistochastic matrices
We study a class of bistochastic matrices generalizing unistochastic matrices. Given a complex bipartite unitary operator, we construct a bistochastic matrix whose entries are the normalized squared Frobenius norms of its blocks. We show that the closure of the set of generalized unistochastic matrices is the whole Birkhoff polytope. We characterize the points on the edges of the Birkhoff polytope that belong to a given level of our family of sets, proving that the different (non-convex) levels have a rich inclusion structure. We also study the corresponding generalization of orthostochastic matrices. Finally, we introduce and study the natural probability measures induced on our sets by the Haar measure of the unitary group. These probability measures interpolate between the natural measure on the set of unistochastic matrices and the Dirac measure supported on the van der Waerden matrix.
Ion Nechita, Zikun Ouyang, Anna Szczepanek
2023-10-05T10:21:54Z
http://arxiv.org/abs/2310.03436v2
# Generalized unistochastic matrices ###### Abstract. We study a class of bistochastic matrices generalizing unistochastic matrices. Given a complex bipartite unitary operator, we construct a bistochastic matrix whose entries are the normalized squared Frobenius norms of its blocks. We show that the closure of the set of generalized unistochastic matrices is the whole Birkhoff polytope. We characterize the points on the edges of the Birkhoff polytope that belong to a given level of our family of sets, proving that the different (non-convex) levels have a rich inclusion structure. We also study the corresponding generalization of orthostochastic matrices. Finally, we introduce and study the natural probability measures induced on our sets by the Haar measure of the unitary group. These probability measures interpolate between the natural measure on the set of unistochastic matrices and the Dirac measure supported on the van der Waerden matrix. ###### Contents * 1 Introduction * 2 Generalized unistochastic matrices * 3 Generalizing the bracelet framework * 3.1 Generalized bracelet conditions * 3.2 Generalized bracelet matrices * 4 Generalized orthostochastic matrices * 5 Random generalized unistochastic matrices ## 1. Introduction Bistochastic and unistochastic matrices form a classical topic that comes up repeatedly in various domains of mathematics and mathematical physics. To set the stage and introduce notation, let us recall that a square real matrix of size \(d\), the set of which we shall denote by \(\mathcal{M}_{d}(\mathbb{R})\), is called _bistochastic_ (or _doubly stochastic_) if it has non-negative entries that add up to one in every row and column. That is, \(B=(B_{ij})_{i,j=1}^{d}\) is bistochastic when its entries satisfy the following conditions: \[\forall i,j\quad B_{ij}\geq 0\qquad\forall i\quad\sum_{j=1}^{d}B_{ij}=1\quad \text{ and }\quad\forall j\quad\sum_{i=1}^{d}B_{ij}=1.\] We shall denote by \(\mathsf{B}_{d}\) the set of all bistochastic matrices of order \(d\). A bistochastic matrix is called _unistochastic_ when its entries are the squared absolute values of some unitary matrix of the same size. More formally, we consider the map \[\varphi_{d}\colon\mathcal{U}(d)\ni(U_{ij})_{i,j=1}^{d}\longmapsto\left(|U_{ij}|^{2}\right)_{i,j=1}^{d}\in\mathcal{M}_{d}(\mathbb{R}).\] The image of \(\mathcal{U}(d)\) under \(\varphi_{d}\) constitutes the set \(\mathsf{U}_{d}\) of unistochastic matrices of order \(d\). Alternatively, using the Hadamard (entrywise) product \(\circ\) of matrices, we can write \[\mathsf{U}_{d}:=\varphi_{d}(\mathcal{U}(d))=\{U\circ\bar{U}\mid U\in\mathcal{U}(d)\}.\] By unitarity, we have \(\mathsf{U}_{d}\subseteq\mathsf{B}_{d}\). One of the reasons behind the prominence of bistochastic matrices is the fact that their entries can be regarded as the probabilities that some (classical) physical system evolves from one state to another. If the bistochastic matrix is also unistochastic, then the system under consideration can be quantized. There are many references related to the applications of unistochastic matrices in various areas, e.g., in quantum information theory and in particle physics, see [1] and references therein. It is well known that the _Birkhoff polytope_ \(\mathsf{B}_{d}\) is convex and compact. The extreme points of \(\mathsf{B}_{d}\) are permutation matrices, so the Birkhoff polytope has \(d!\) vertices. A bistochastic matrix lies at the boundary of the Birkhoff polytope iff it has a zero entry.
There are \(d^{2}\) faces and they correspond to the inequalities that the matrix entries must satisfy. For instance, consider \(d=3\) and \[B=\begin{bmatrix}a&b&*\\ c&d&*\\ *&*&*\end{bmatrix}\] with \(a,b,c,d\in\mathbb{R}\). The 9 inequalities corresponding to the faces of the Birkhoff polytope are \[\begin{cases}a,b,c,d\geq 0\\ a+b\leq 1,\quad c+d\leq 1,\quad a+c\leq 1,\quad b+d\leq 1\\ (1-a-c)+(1-b-d)\leq 1\end{cases}\] An obvious example of a unistochastic matrix is a permutation matrix. Another well-known example is the van der Waerden (flat) matrix, i.e., the matrix whose entries are all equal to \(1/d\). It is worth noting that the unitary matrices that induce the van der Waerden matrix are precisely the renowned complex Hadamard matrices, for instance the Fourier matrix \(\frac{1}{\sqrt{d}}\bigl{(}\exp\bigl{(}\frac{2\pi\mathrm{i}}{d}jk\bigr{)} \bigr{)}_{j,k=0}^{d-1}\). One immediately sees that for \(d=2\) every bistochastic matrix is unistochastic, i.e., we have \(\mathsf{U}_{2}=\mathsf{B}_{2}\). This, however, is a sole exception as for every dimension \(d\) higher than two we have \(\mathsf{U}_{d}\subsetneq\mathsf{B}_{d}\) and \(\mathsf{U}_{d}\) is known to be non-convex. A lot of effort has been put into characterizing unistochastic matrices and one of the key tools turned out to be the _bracelet condition_. It allows us to distinguish the set of _bracelet matrices_, which is a superset of unistochastic matrices. Namely, let \(\alpha=(\alpha_{1},\ldots,\alpha_{d})\) and \(\beta=(\beta_{1},\ldots,\beta_{d})\) be probability vectors (i.e., in each vector the entries are non-negative and sum up to one). We say that \((\alpha,\beta)\) satisfies the bracelet condition if \[2\max_{j=1,\ldots,d}\sqrt{\alpha_{j}\beta_{j}}\leq\sum_{j=1}^{d}\sqrt{\alpha_ {j}\beta_{j}}. \tag{1}\] Now, a bistochastic matrix is said to be a _bracelet matrix_ if every pair of its rows and every pair of its columns, regarded as pairs of probability vectors, satisfies the bracelet condition; we shall denote the set of bracelet matrices of order \(d\) by \(\mathsf{L}_{d}\). The bracelet condition plays an instrumental role in the study of unistochastic matrices because it characterizes the first non-trivial case \(d=3\)[1], i.e., \[\mathsf{U}_{3}=\mathsf{L}_{3}\subsetneq\mathsf{B}_{3}\] and, as we already mentioned, it provides a necessary (but not sufficient) condition for unistochasticity in higher dimensions (see [11]): \[\forall d\geq 4,\qquad\mathsf{U}_{d}\subsetneq\mathsf{L}_{d}\subsetneq\mathsf{ B}_{d}.\] In the present paper we introduce the notion of _generalized unistochastic matrices_, denoted by \(\mathsf{U}_{d,s}\); here, \(s\) is an integer parameter. The idea is to replace \(\mathcal{U}(d)\) with \(\mathcal{U}(ds)\) and regard a unitary matrix from \(\mathcal{U}(ds)\) as a \(d\times d\) matrix consisting of \(s\times s\) submatrices (blocks). Then it suffices to replace the absolute values of entries by the normalized squares of Frobenius (or Schatten-2) norms of blocks to arrive at a \(d\times d\) bistochastic matrix again. Formally, we consider the map \[\varphi_{d,s}\colon\mathcal{U}(ds)\ni\big{(}U_{ij}(k,l)\big{)}_{\begin{subarray}{ c}1\leq i,j\leq d\\ 1\leq k,l\leq s\end{subarray}}\longmapsto\big{(}\tfrac{1}{s}||U_{ij}||_{F}^{2} \big{)}_{1\leq i,j\leq d}\in\mathsf{B}_{d},\] where \(U_{ij}(k,l)\) is the \((k,l)\)-th entry in the \((i,j)\)-th block, and we define \(\mathsf{U}_{d,s}\) as the image of \(\varphi_{d,s}\). 
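These definitions are easy to experiment with numerically. The following minimal Python sketch (assuming NumPy and SciPy are available; all helper names are ours) draws a Haar-random unitary, forms the associated unistochastic matrix via \(\varphi_{d}\), checks bistochasticity and the bracelet condition (1) for all pairs of rows and columns, and recovers the van der Waerden matrix from the Fourier matrix.

```python
import numpy as np
from scipy.stats import unitary_group

def phi(U):
    """phi_d: entrywise squared modulus, mapping U(d) onto the unistochastic set U_d."""
    return np.abs(U) ** 2

def is_bistochastic(B, tol=1e-10):
    return (B >= -tol).all() and np.allclose(B.sum(axis=0), 1) and np.allclose(B.sum(axis=1), 1)

def bracelet(alpha, beta, tol=1e-12):
    """Bracelet condition (1): 2 * max_j sqrt(a_j b_j) <= sum_j sqrt(a_j b_j)."""
    w = np.sqrt(alpha * beta)
    return 2 * w.max() <= w.sum() + tol

d = 4
U = unitary_group.rvs(d)                  # Haar-random unitary matrix
B = phi(U)                                # a unistochastic (hence bistochastic) matrix
assert is_bistochastic(B)
assert all(bracelet(B[i], B[j]) for i in range(d) for j in range(i + 1, d))        # pairs of rows
assert all(bracelet(B[:, i], B[:, j]) for i in range(d) for j in range(i + 1, d))  # pairs of columns

# The Fourier (complex Hadamard) matrix induces the van der Waerden matrix J_d / d.
F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)
assert np.allclose(phi(F), np.full((d, d), 1 / d))
```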
Let us recall that the Frobenius norm of a square complex matrix \(X\in\mathcal{M}_{n}(\mathbb{C})\) is given by \[\|X\|_{F}:=\Big{(}\sum_{i,j=1}^{n}|X_{ij}|^{2}\Big{)}^{1/2}=\operatorname{Tr}(XX^{*})^{1/2}.\] In Propositions 2.4 & 2.5 we show that generalized unistochastic matrices do indeed generalize the notion of unistochastic matrices, i.e., for every \(s\) we have \[\mathsf{U}_{d}\subseteq\mathsf{U}_{d,s}\subseteq\mathsf{B}_{d}.\] One of the main results of the present paper is Theorem 2.8, where we show that every bistochastic matrix can be arbitrarily well approximated by a generalized unistochastic matrix of some order. The key ingredient in proving this result is the convexity-type property of generalized unistochastic matrices: \[\forall s,t\quad\tfrac{s}{s+t}\mathsf{U}_{d,s}+\tfrac{t}{s+t}\mathsf{U}_{d,t}\subseteq\mathsf{U}_{d,s+t},\] see Proposition 2.6. Then in Corollary 2.7 we investigate further non-trivial inclusion relations between the sets of generalized unistochastic matrices of different orders. Let us point out that an alternative generalization of unistochastic matrices was proposed by Gutkin in [11]. Unfortunately, as we show at the end of Section 2, the proposed generalization yields only stochastic (and generally not bistochastic) matrices, so it is quite far from the usual unistochastic matrices. Generalized unistochastic matrices were also considered in [1], as classical channels associated to generalized unistochastic channels. More precisely, given a bipartite unitary matrix \(U\in\mathcal{U}(ds)\), Shahbeigi, Amaro-Alcala, Puchala, and Zyczkowski consider the quantum channel \(\Phi:\mathcal{M}_{d}(\mathbb{C})\to\mathcal{M}_{d}(\mathbb{C})\) given by \[\Phi(X)=\operatorname{Tr}_{s}\left[U\left(X\otimes\frac{I_{s}}{s}\right)U^{*}\right].\] The classical transition matrix corresponding to this channel, \(B_{ij}:=\langle i|\Phi(|j\rangle\!\langle j|)|i\rangle\), corresponds precisely to the generalized bistochastic matrices we study. In this work, we further the understanding of these objects, providing new insights on their structure and relation to uni- and bi-stochastic matrices. We shall refer to [1] at different points of this paper, emphasizing the new contributions of our research. Our focus will be on generalized unistochastic matrices, and not on the unistochastic channels, as in [1]. The main tool we develop to investigate \(\mathsf{U}_{d,s}\) is the _generalized bracelet condition_. A pair of probability vectors \(\alpha,\beta\) is said to satisfy the generalized bracelet condition of order \(s\) if they correspond to the normalized squared Frobenius norms of the blocks of some unitary matrix \(U\in\mathcal{U}(ds)\), i.e., if there exist matrices \(A_{1},\dots,A_{d},B_{1},\dots,B_{d}\in\mathcal{M}_{s}(\mathbb{C})\) satisfying \[\forall i\quad\tfrac{1}{s}\|A_{i}\|_{F}^{2}=\alpha_{i}\quad\text{ and }\quad\tfrac{1}{s}\|B_{i}\|_{F}^{2}=\beta_{i}\] as well as \[\sum_{i=1}^{d}A_{i}A_{i}^{*}=\sum_{i=1}^{d}B_{i}B_{i}^{*}=I_{s},\quad\sum_{i=1}^{d}A_{i}B_{i}^{*}=0,\] which means that the \(2\)-row block matrix \[\begin{bmatrix}A_{1}&\cdots&A_{d}\\ B_{1}&\cdots&B_{d}\end{bmatrix}\] (of size \(2s\times ds\)) can be expanded to a unitary matrix of size \(ds\times ds\). See Proposition 3.4 for the proof that these conditions do indeed generalize the standard bracelet condition (1).
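The correspondence with the quantum channels of [1] recalled above can also be checked numerically. The sketch below (our own code, not taken from [1]; the block convention \(U_{ij}(k,l)=U[is+k,\,js+l]\) is the one induced by the Kronecker ordering used for \(X\otimes I_{s}\)) computes \(B\) both block-wise as \(\tfrac{1}{s}\|U_{ij}\|_{F}^{2}\) and as the classical transition matrix \(\langle i|\Phi(|j\rangle\!\langle j|)|i\rangle\), and verifies that the two agree.

```python
import numpy as np
from scipy.stats import unitary_group

d, s = 3, 2
U = unitary_group.rvs(d * s)

# Route 1: normalized squared Frobenius norms of the s x s blocks, i.e. phi_{d,s}(U).
blocks = U.reshape(d, s, d, s)                       # blocks[i, :, j, :] is the (i, j) block U_ij
B_blocks = np.einsum('ikjl,ikjl->ij', blocks, blocks.conj()).real / s

# Route 2: classical transition matrix of the channel Phi(X) = Tr_s[ U (X x I_s/s) U* ].
def channel(X):
    rho = np.kron(X, np.eye(s) / s)
    out = U @ rho @ U.conj().T
    return np.trace(out.reshape(d, s, d, s), axis1=1, axis2=3)   # partial trace over the s factor

B_channel = np.array([[channel(np.outer(np.eye(d)[j], np.eye(d)[j]))[i, i].real
                       for j in range(d)] for i in range(d)])

assert np.allclose(B_blocks, B_channel)
```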
Since the generalized unistochastic matrices \(\mathsf{U}_{d,s}\) are defined as the image of \(\mathcal{U}(ds)\) under \(\varphi_{d,s}\), it is natural to equip \(\mathsf{U}_{d,s}\) with the probability measure obtained by pushing forward the Haar measure from \(\mathcal{U}(ds)\) via \(\varphi_{d,s}\). We compute the first few joint moments of the elements of a random matrix \(B\in\mathsf{U}_{d,s}\). In particular, the expected value of \(B_{ij}\) equals \(1/d\), while its variance decreases as \(s\) grows (and \(d\) is fixed), which means that the probability distribution on \(\mathsf{U}_{d,s}\) tends to concentrate around the van der Waerden matrix. We also draw some conclusions regarding the covariance and correlation of the elements of \(B\). The paper is organized as follows. In Section 2 we introduce generalized unistochastic matrices and present their basic properties. Section 3 contains a suitable generalization of the bracelet conditions for unistochastic matrices; these conditions allow us to showcase the complexity of the different levels of the generalized unistochastic sets. In Section 4 we discuss the corresponding generalizations of orthostochastic matrices. Finally, in Section 5 we explore the properties of the probability measures induced on the set of generalized unistochastic matrices by the Haar distribution on the unitary group. ## 2. Generalized unistochastic matrices The main idea of this work can be summarized in the following table: \begin{tabular}{|c|c|c|} \hline _Source_ & _Operation_ & _Result_ \\ \hline \hline Unitary group & \(B_{ij}=|U_{ij}|^{2}\) & Unistochastic matrix \\ \(U\in\mathcal{U}(d)\) & & \(B\in\mathsf{U}_{d}\) \\ \hline Larger unitary group & \(B_{ij}=\tfrac{1}{s}\|U_{ij}\|_{F}^{2}\) & Generalized unistochastic matrix \\ \(U\in\mathcal{U}(ds)\subseteq\mathcal{M}_{d}(\mathcal{M}_{s}(\mathbb{C}))\) & & \(B\in\mathsf{U}_{d,s}\) \\ \hline \end{tabular} Let \(d\geq 2\) and \(s\geq 1\) be integers. In what follows we regard \(B\in\mathcal{M}_{ds}(\mathbb{C})\) as a \(d\times d\) block matrix consisting of \(s\times s\) blocks. We shall write \(B_{ij}\) for the \((i,j)\)-th block and \(B_{ij}(k,l)\) for the \((k,l)\)-th coefficient inside this block, where \(k,l\in[s]\) and \(i,j\in[d]\). For brevity, we put \([n]:=\{1,2,\ldots,n\}\) and write \(\mathfrak{S}_{n}\) for the group of permutations of \([n]\). We come now to the main definition of this work, that of generalized unistochastic matrices. These objects have previously been considered in [1], in relation to classical actions of quantum channels. **Definition 2.1**.: _Consider the map_ \[\varphi_{d,s}\colon\mathcal{U}(ds) \longrightarrow\mathcal{M}_{d}(\mathbb{R})\] \[\left(U_{ij}(k,l)\right)_{i,j\in[d];k,l\in[s]} \longmapsto\left(\tfrac{1}{s}||U_{ij}||_{F}^{2}\right)_{i,j\in[d]}\] _We define \(\mathsf{U}_{d,s}:=\varphi_{d,s}(\mathcal{U}(ds))\) to be the set of generalized unistochastic matrices. A matrix \(B\) in the range of \(\varphi_{d,s}\) will be called \(s\)-unistochastic [1]._ **Example 2.2**.: For \(d=s=2\), \(P=\left[\begin{array}{cccc}0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\ 1&0&0&0\end{array}\right]\xmapsto{\varphi_{2,2}}\left[\begin{array}{cc}0&1\\ 1&0\end{array}\right]\in\mathsf{B}_{2}\). Importantly, there exist generalized unistochastic matrices which are not unistochastic. This makes the definition above interesting and justifies the study of generalized unistochastic matrices.
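Example 2.2 can be reproduced with a one-line implementation of Definition 2.1 (a minimal sketch; the helper name `phi_ds` and the block-indexing convention are ours):

```python
import numpy as np

def phi_ds(U, d, s):
    """Definition 2.1: B_ij = (1/s) * ||U_ij||_F^2 over the s x s blocks of U."""
    blocks = U.reshape(d, s, d, s)
    return np.einsum('ikjl,ikjl->ij', blocks, blocks.conj()).real / s

# Example 2.2: the anti-diagonal 4 x 4 permutation, read as a 2 x 2 matrix of 2 x 2 blocks.
P = np.fliplr(np.eye(4))
print(phi_ds(P, d=2, s=2))    # [[0., 1.], [1., 0.]], a permutation matrix, hence in B_2
```

The same helper applied to the \(6\times 6\) permutation matrix of Example 2.3 below reproduces the matrix \(B\) displayed there.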
**Example 2.3**.: For \(d=3\), \(P=\left[\begin{array}{cccccc}0&0&0&0&1&0\\ 0&0&0&1&0&0\\ 1&0&0&0&0&0\\ 0&0&0&0&0&1\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\end{array}\right]\)\(\stackrel{{\varphi_{3,2}}}{{\longmapsto}}B=\left[\begin{array}{cccc}0& \frac{1}{2}&\frac{1}{2}\\ \frac{1}{2}&0&\frac{1}{2}\\ \frac{1}{2}&\frac{1}{2}&0\end{array}\right]\in\mathsf{U}_{3,2}\setminus\mathsf{ U}_{3}\). Indeed, \(P\) is a permutation matrix corresponding to the permutation \((1\,3\,6\,4\,2\,5)\in\mathfrak{S}_{6}\), hence \(B\) is \(2\)-unistochastic, i.e., \(B\in\mathsf{U}_{3,2}\). However, \(B\) is not unistochastic, i.e., \(B\notin\mathsf{U}_{3}\), since it does not satisfy the bracelet conditions, see Section 3. In the next two propositions, we show that generalized unistochastic matrices are bistochastic and that they contain, for every value of the parameter \(s\), the set of (usual) unistochastic matrices \(\mathsf{U}_{d}\), which coincides with the generalized family at \(s=1\). The special case \(s=d\) of the latter result, relevant in the study of some class of quantum channels, has been considered in [1, Proposition 21]. **Proposition 2.4**.: _For every \(d\geq 2\) and \(s\geq 1\), we have \(\mathsf{U}_{d,s}\subseteq\mathsf{B}_{d}\)._ Proof.: Let \(d\geq 2\) and \(s\geq 1\). By unitarity, \[\sum_{i=1}^{d}\sum_{k=1}^{s}|U_{ij}(k,l)|^{2}=\sum_{j=1}^{d}\sum_{l=1}^{s}|U_{ij }(k,l)|^{2}=1.\] Therefore, for all \(i\in[d]\) we have \[\sum_{j=1}^{d}B_{ij}=\tfrac{1}{s}\sum_{j=1}^{d}\|U_{ij}\|_{F}^{2}=\tfrac{1}{s} \sum_{k=1}^{s}\sum_{j=1}^{d}\sum_{l=1}^{s}|U_{ij}(k,l)|^{2}=1\] and, analogously, \(\sum_{i=1}^{d}B_{ij}=1\), which concludes the proof. **Proposition 2.5**.: _For every \(d\geq 2\) and \(s\geq 1\), we have \(\mathsf{U}_{d}=\mathsf{U}_{d,1}\subseteq\mathsf{U}_{d,s}\)._ Proof.: Let \(d\geq 2\) and \(s\geq 1\), and let \(B\in\mathsf{U}_{d}\), i.e., there exists \(U\in\mathcal{U}(d)\) such that \(B=\varphi_{d}(U)\). Consider \(V:=U\otimes I_{s}\in\mathcal{U}(ds)\). Then for all \(i,j\in[d]\) we have \(V_{ij}=U_{ij}\otimes I_{s}\), which implies that \[\tfrac{1}{s}\|V_{ij}\|_{F}^{2}=|U_{ij}|^{2}=B_{ij};\] hence, \(B\in\mathsf{U}_{d,s}\), as desired. Next, we show that the sets \(\mathsf{U}_{d,s}\) satisfy a kind of convexity property. This result will be key in showing one of our main results, Theorem 2.8. **Proposition 2.6**.: _For every \(d\geq 2\) and \(s,t\geq 1\), we have \(\frac{s}{s+t}\mathsf{U}_{d,s}+\frac{t}{s+t}\mathsf{U}_{d,t}\subseteq\mathsf{U }_{d,s+t}\)._ Proof.: Fix \(d\geq 2\) and \(s,t\geq 1\), and let \(B\in\mathsf{U}_{d,s}\) and \(C\in\mathsf{U}_{d,t}\). There exist \(V\in\mathcal{U}(ds)\) and \(W\in\mathcal{U}(dt)\) such that \(B_{ij}=\frac{1}{s}\|V_{ij}\|_{F}^{2}\) and \(C_{ij}=\frac{1}{t}\|W_{ij}\|_{F}^{2}\). Consider \(U\in\mathcal{M}_{d(s+t)}(\mathbb{C})\) defined as \[U_{ij}:=\begin{bmatrix}V_{ij}&0\\ 0&W_{ij}\end{bmatrix}.\] In particular, up to a permutation of blocks, \(U\) coincides with \(V\oplus W\). Thus, \(U\in\mathcal{U}(d(s+t))\) and \(\frac{1}{s+t}\|U_{ij}\|_{F}^{2}=\frac{1}{s+t}(\|V_{ij}\|_{F}^{2}+\|W_{ij}\|_{F }^{2})=\frac{1}{s+t}(sB_{ij}+tC_{ij})\). That is, \(\varphi_{d,t+s}(U)=\frac{s}{s+t}B+\frac{t}{s+t}C\in\mathsf{U}_{d,s+t}\), as desired. **Corollary 2.7**.: _Let \(d\geq 2\). From Proposition 2.6 we easily conclude that_ 1. _For all orders_ \(s_{1},\ldots,s_{k}\geq 1\)_, we have_ \[\frac{s_{1}}{s_{1}+\ldots+s_{k}}\mathsf{U}_{d,s_{1}}+\ldots+\frac{s_{k}}{s_{1}+ \ldots+s_{k}}\mathsf{U}_{d,s_{k}}\subseteq\mathsf{U}_{d,s_{1}+\ldots+s_{k}}.\] 2. 
_For all_ \(s,n\geq 1\)_, we have_ \(\mathsf{U}_{d,s}\subseteq\mathsf{U}_{d,ns}\)_._ 3. _For all_ \(s,t\geq 1\)_, we have_ \(\mathsf{U}_{d,s}\cap\mathsf{U}_{d,t}\subseteq\mathsf{U}_{d,s+t}\)_._ In relation to the second point of the corollary above, note that, in general, we do not have \[s\leq t\implies\mathsf{U}_{d,s}\subseteq\mathsf{U}_{d,t},\] see Corollary 3.14 for counterexamples in this direction. We now prove the main theorem of this section: the closed union of all generalized unistochastic matrices constitutes the whole set of bistochastic matrices. **Theorem 2.8**.: _For every dimension \(d\geq 2\), we have_ \[\overline{\bigcup_{s\geq 1}\mathsf{U}_{d,s}}=\mathsf{B}_{d}.\] Proof.: We only need to prove the "\(\supseteq\)" inclusion. Fix \(d\geq 2\) and \(\varepsilon>0\). Let \(B\in\mathsf{B}_{d}\). We shall construct \(N\in\mathbb{N}\) and \(B_{\varepsilon}\in\mathsf{U}_{d,N}\) such that \(\|B-B_{\varepsilon}\|_{F}\leq\varepsilon\). As a bistochastic matrix, \(B\) can be written as a convex combination of permutation matrices, i.e., there exists a family \(\{t_{\sigma}\,|\,\sigma\in\mathfrak{S}_{d}\}\) of non-negative coefficients such that \(\sum_{\sigma\in\mathfrak{S}_{d}}t_{\sigma}=1\) and \(B=\sum_{\sigma\in\mathfrak{S}_{d}}t_{\sigma}P_{\sigma}\), where \(P_{\sigma}\) is the permutation matrix corresponding to \(\sigma\in\mathfrak{S}_{d}\). Take \(\delta:=\varepsilon/[2(d!-1)\sqrt{d}]\) and let \(N\) be large enough so that \[\max_{\sigma\neq id}(t_{\sigma}-\tfrac{k_{\sigma}}{N})\leq\delta,\] where \(k_{\sigma}:=\lfloor Nt_{\sigma}\rfloor\). Define \(k_{id}:=N-\sum_{\sigma\neq id}\;k_{\sigma}\). Then \[\tfrac{k_{id}}{N}-t_{id}=\sum_{\sigma\neq id}(t_{\sigma}-\tfrac{k_{\sigma}}{N })\in[0,\delta(d!-1)].\] Let us now consider \(B_{\varepsilon}:=\sum_{\sigma\in\mathfrak{S}_{d}}\tfrac{k_{\sigma}}{N}P_{\sigma}\). Since \(P_{\sigma}\in\mathsf{U}_{d}\subset\mathsf{U}_{d,k_{\sigma}}\) if \(k_{\sigma}\neq 0\), from Corollary 2.7 it follows that \(B_{\varepsilon}\in\mathsf{U}_{d,N}\). Therefore, \[\|B-B_{\varepsilon}\|_{F}=\Big{\|}\sum_{\sigma\in\mathfrak{S}_{d}}(t_{\sigma} -\tfrac{k_{\sigma}}{N})P_{\sigma}\Big{\|}_{F}\leq\sum_{\sigma\in\mathfrak{S}_ {d}}|t_{\sigma}-\tfrac{k_{\sigma}}{N}|\cdot\sqrt{d}\leq 2(d!-1)\delta \sqrt{d}=\varepsilon,\] as desired. Next, we present some numerical simulations regarding the set \(\mathsf{U}_{3,1}\) of \(3\times 3\) unistochastic matrices and its generalized version \(\mathsf{U}_{3,2}\), see Figure 1. To decide whether a bistochastic matrix \(B\) is an element of \(\mathsf{U}_{d,s}\), we use the NMinimize function of Wolfram Mathematica to try finding a unitary matrix \(U\in\mathcal{U}(ds)\) such that \(\varphi_{d,s}(U)=B\). This method does not guarantee finding the global minimum of non-convex functions, so the results in Figure 1 are empirical; note however that there is a perfect fit with the theory in the case \((d,s)=(3,1)\). We end this section by discussing Gutkin's generalization of unistochastic matrices [11]. 
In that paper, the author generalizes unistochastic matrices starting from an isometry \[V\colon\mathbb{C}^{d}\to\mathbb{C}^{d}\otimes\mathbb{C}^{s}.\] This isometry can be seen as a \(\mathbb{C}^{s}\)-valued \(d\times d\) matrix: \[\mathcal{M}_{ds\times d}(\mathbb{C})\cong\mathcal{M}_{d}(\mathbb{C}^{s}).\] Denoting the vector elements of \(V\) as \(v_{ij}\in\mathbb{C}^{s}\), where \(i,j\in[d]\), Gutkin defines \[B_{ij}:=\|v_{ij}\|^{2}.\] Unfortunately, in general the resulting matrix \(B\) is then only column stochastic, and not row stochastic. This can be seen, for example, by considering the case \(d=s=2\) and the isometry \[V=\begin{bmatrix}1&0\\ 0&1\\ 0&0\\ 0&0\end{bmatrix}\] which corresponds to the vectors \[v_{11}=\begin{bmatrix}1\\ 0\end{bmatrix},\quad v_{12}=\begin{bmatrix}0\\ 1\end{bmatrix},\quad v_{21}=v_{22}=\begin{bmatrix}0\\ 0\end{bmatrix},\] which, in turn, lead to the matrix \[B=\begin{bmatrix}1&1\\ 0&0\end{bmatrix}.\] The error in [1] seems to stem from Lemma 1 being wrong, and thus equations (1) and (2) in that paper not being equivalent. Figure 1. In the simplex defined by the identity, (123), (321) permutation matrices, we plot 1000 uniformly sampled random bistochastic matrices. In the left panel, we plot in blue unistochastic elements, i.e., samples \(B\in\mathsf{U}_{3,1}\), and in red samples outside this set. In the right panel, we use the same colours to plot the elements inside and outside the set \(\mathsf{U}_{3,2}\). The gray curves correspond to the bracelet conditions (4) characterizing unistochastic \(3\times 3\) matrices, see Section 3. ## 3. Generalizing the bracelet framework In this section we generalize, in the same spirit as the main Definition 2.1, the notions of _bracelet conditions_ and _bracelet matrices_, ideas originating in [1]. These notions play an important role in the study of unistochastic matrices, since bracelet conditions fully characterize the first non-trivial case, that of dimension \(d=3\). Indeed, a \(3\times 3\) bistochastic matrix is unistochastic if and only if it satisfies the bracelet conditions [1], i.e. iff it is a bracelet matrix. We first review the standard notion of bracelet condition. The intuitive idea behind it is that the elements of two rows of a unistochastic matrix corresponding to the same column cannot be too large simultaneously with respect to the other row elements. This is because the scalar product of the corresponding rows of the unitary matrix needs to be zero, so each individual term of the sum cannot be too large in magnitude with respect to the others. This intuition is encoded in the following definition: \[\mathsf{Brac}_{d}:=\bigg{\{}(\alpha,\beta)\in\Delta_{d}^{2}\,:\,\forall i\in[d],\,\sqrt{\alpha_{i}\beta_{i}}\leq\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{d}\sqrt{\alpha_{j}\beta_{j}}\bigg{\}}. \tag{2}\] In the formula above, recall that the \((d-1)\)-dimensional probability simplex is the set of all probability vectors in \(\mathbb{R}^{d}\): \[\Delta_{d}:=\Big{\{}\alpha\in\mathbb{R}^{d}\,:\,\forall i\in[d],\,\alpha_{i}\geq 0\text{ and }\sum_{i=1}^{d}\alpha_{i}=1\Big{\}}.\] We have the following important definition; note that the term "bracelet matrix / condition" was introduced in [13]. **Definition 3.1**.: _A bistochastic matrix \(B\in\mathsf{B}_{d}\) is said to be a bracelet matrix if all pairs of rows and all pairs of columns of \(B\) satisfy the bracelet condition from (2).
We introduce the set of bracelet matrices_ \[\mathsf{L}_{d}:=\Big{\{}B\in\mathsf{B}_{d}:\,\forall i_{1}\neq i_{2}\left(B_{ i_{1},.},B_{i_{2},.}\right)\in\mathsf{Brac}_{d}\,\text{ and }\,\forall j_{1}\neq j_{2}\left(B_{.j_{1}},B_{.j_{2}}\right)\in\mathsf{Brac}_{ d}\big{\}}. \tag{3}\] It was observed in [13] that being bracelet is a necessary condition for unistochasticity. We give here the proof of this claim for the sake of completeness. **Proposition 3.2**.: _For all dimensions \(d\geq 2\), we have \(\mathsf{U}_{d}\subseteq\mathsf{L}_{d}\)._ Proof.: Let \(B=(B_{ij})_{i,j}\in\mathsf{U}_{d}\) be such that \(B_{ij}=|U_{ij}|^{2}\), where \(U\in\mathcal{U}(d)\) is a corresponding unitary matrix. Fix two row indices \(i_{1}\neq i_{2}\). By unitarity, for all \(k\in[d]\) we have \[-U_{i_{1}k}\overline{U_{i_{2}k}}=\sum_{\begin{subarray}{c}j=1\\ j\neq k\end{subarray}}^{d}U_{i_{1}j}\overline{U_{i_{2}j}}.\] Taking norm and applying the triangle inequality, for all \(k\in[d]\) we obtain \[|U_{i_{1}k}\overline{U_{i_{2}k}}|\leq\sum_{\begin{subarray}{c}j=1\\ j\neq k\end{subarray}}^{d}|U_{i_{1}j}\overline{U_{i_{2}j}}|,\] which translates into \[\sqrt{B_{i_{1}k}B_{i_{2}k}}\leq\sum_{\begin{subarray}{c}j=1\\ j\neq k\end{subarray}}^{d}\sqrt{B_{i_{1}j}B_{i_{2}j}}.\] A similar computation shows that the columns of \(B\) also satisfy the bracelet condition from Eq. (2); hence, \(B\in\mathsf{L}_{d}\), as claimed. The bracelet conditions characterize unistochasticity for \(3\times 3\) matrices (i.e., \(\mathsf{U}_{3}=\mathsf{L}_{3}\), see [1]), while being only necessary for \(d\geq 4\), see [13]. Note that the complete description of the non-convex set \(\mathsf{U}_{3}\) was obtained, thanks to the characterization in terms of bracelet conditions, in [16]. For example, for the bistochastic matrices studied in Figure 1, which are of the form \[B=\begin{bmatrix}\lambda_{1}&\lambda_{2}&\lambda_{3}\\ \lambda_{3}&\lambda_{1}&\lambda_{2}\\ \lambda_{2}&\lambda_{3}&\lambda_{1}\end{bmatrix},\] the bracelet conditions read \[\sqrt{\lambda_{i}\lambda_{j}}\leq\sqrt{\lambda_{i}\lambda_{k}}+\sqrt{\lambda_{j} \lambda_{k}}, \tag{4}\] for any permutation \((i,j,k)\) of the set \(\{1,2,3\}\). The matrices satisfying these conditions are precisely the unistochastic matrices, and they correspond to the region delimited by the gray curves in Figure 1. ### Generalized bracelet conditions Since the idea behind the bracelet condition from Eq. (2) was to use the orthogonality of the rows/columns of a unitary operator, we generalize this insight to our setting in the following definition, encoding in it the block-orthogonality of block unitary matrices. **Definition 3.3**.: _A pair of probability vectors \(\alpha,\beta\in\Delta_{d}\) is said to satisfy the generalized bracelet condition of order \(s\) if they correspond to the normalized squares of Frobenius norms of the blocks of some unitary matrix \(U\in\mathcal{U}(ds)\):_ \[\mathsf{Brac}_{d,s}:=\Big{\{}(\alpha,\beta)\in\Delta_{d}^{2} :\exists A_{1},\ldots,A_{d},B_{1},\ldots,B_{d}\in\mathcal{M}_{s}( \mathbb{C})\text{ such that }\] \[\sum_{i=1}^{d}A_{i}A_{i}^{*}=\sum_{i=1}^{d}B_{i}B_{i}^{*}=I_{s}, \quad\sum_{i=1}^{d}A_{i}B_{i}^{*}=0_{s}, \tag{5}\] \[\tfrac{1}{s}\|A_{i}\|_{F}^{2}=\alpha_{i}\ \text{ and }\ \tfrac{1}{s}\|B_{i}\|_{F}^{2}=\beta_{i}\quad\forall i\in[d]\Big{\}}.\] The conditions above mean precisely that we can expand the 2-row block matrix \[\begin{bmatrix}A_{1}&\cdots&A_{d}\\ B_{1}&\cdots&B_{d}\end{bmatrix}\] to a full unitary matrix \(U\in\mathcal{U}(ds)\). 
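Before analyzing the generalized conditions, the classical ones are easy to test in code. The sketch below (our helper names) checks Definition 3.1 for a given bistochastic matrix; applied to the matrix of Example 2.3 the test fails, which, since \(\mathsf{U}_{3}=\mathsf{L}_{3}\), confirms that this \(2\)-unistochastic matrix is not unistochastic.

```python
import numpy as np

def bracelet_pair(a, b, tol=1e-12):
    """Condition (2): sqrt(a_i b_i) <= sum_{j != i} sqrt(a_j b_j) for every index i."""
    w = np.sqrt(a * b)
    return np.all(w <= w.sum() - w + tol)

def is_bracelet(B, tol=1e-12):
    """Definition 3.1: every pair of rows and every pair of columns satisfies (2)."""
    d = B.shape[0]
    rows = all(bracelet_pair(B[i], B[j], tol) for i in range(d) for j in range(i + 1, d))
    cols = all(bracelet_pair(B[:, i], B[:, j], tol) for i in range(d) for j in range(i + 1, d))
    return rows and cols

# The matrix from Example 2.3: 2-unistochastic, but it fails the bracelet conditions.
B = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
print(is_bracelet(B))   # False, hence B is not in L_3 = U_3
```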
Note that the case \(d=1\) of the bracelet conditions is empty for every order, i.e., \(\mathsf{Brac}_{1,s}=\emptyset\) for every \(s\geq 1\); indeed, if \(A_{1}A_{1}^{*}=B_{1}B_{1}^{*}=I_{s}\), then \(A_{1},B_{1}\in\mathcal{U}(s)\), which implies that \(A_{1}B_{1}^{*}\neq 0\). We first show that the newly introduced conditions from Eq. (5) do indeed generalize the standard bracelet conditions from Eq. (2). **Proposition 3.4**.: _For all dimensions \(d\geq 2\), the generalized bracelet conditions of order \(s=1\) are precisely the usual bracelet conditions: \(\mathsf{Brac}_{d,1}=\mathsf{Brac}_{d}\)._ Proof.: For \(s=1\) the generalized bracelet condition takes the form \[\mathsf{Brac}_{d,1}=\left\{\begin{aligned} &(\alpha,\beta)\in\Delta_{d}^{2} \,:\,\exists a_{1},\ldots,a_{d},b_{1},\ldots,b_{d}\in\mathbb{C}\text{ such that }\\ &\sum_{i}|a_{i}|^{2}=\sum_{i}|b_{i}|^{2}=1,\,\sum_{i}a_{i}\bar{b} _{i}=0,\ \text{ and }\ |a_{i}|^{2}=\alpha_{i},\,|b_{i}|^{2}=\beta_{i}\ \ \forall i\in[d]\end{aligned}\right\}.\] The inclusion \(\mathsf{Brac}_{d,1}\subseteq\mathsf{Brac}_{d}\) follows by mimicking the proof of Proposition 3.2. The converse inclusion can be thought of as a generalized version of the triangle inequality: for \(l_{1}\geq\ldots\geq l_{d}\geq 0\) satisfying \(l_{1}\leq\sum_{j=2}^{d}l_{j}\), there exist \(\theta_{1},\ldots,\theta_{d}\in[0,2\pi)\) such that \(\sum_{j=1}^{d}l_{j}e^{\mathsf{i}\theta_{j}}=0\). Therefore, for any \((\alpha,\beta)\in\mathsf{Brac}_{d}\) we can choose the phases \(\theta_{1},\ldots,\theta_{d}\) so that \(a_{j}:=\sqrt{\alpha_{j}}e^{\mathsf{i}\theta_{j}}\) and \(b_{j}:=\sqrt{\beta_{j}}\) satisfy \(\sum_{j=1}^{d}a_{j}\bar{b}_{j}=0\). The other conditions follow trivially, and so \((\alpha,\beta)\in\mathsf{Brac}_{d,1}\), as claimed. As it is the case for the sets \(\mathsf{U}_{d,s}\) (see Proposition 2.6), the sets \(\mathsf{Brac}_{d,s}\) satisfy the following "convexity" relation: **Proposition 3.5**.: _For every \(d\geq 2\) and \(s,t\geq 1\), we have \(\tfrac{s}{s+t}\mathsf{Brac}_{d,s}+\tfrac{t}{s+t}\mathsf{Brac}_{d,t}\subseteq \mathsf{Brac}_{d,s+t}\)._ Proof.: Consider pairs of probability vectors \((\alpha,\beta)\in\mathsf{Brac}_{d,s}\) and \((\mu,\nu)\in\mathsf{Brac}_{d,t}\) together with the generating matrices \(A_{1},\ldots,A_{d},B_{1},\ldots,B_{d}\in\mathcal{M}_{s}(\mathbb{C})\) and \(C_{1},\ldots,C_{d},D_{1},\ldots,D_{d}\in\mathcal{M}_{t}(\mathbb{C})\), i.e., \[\begin{bmatrix}A_{1}&\cdots&A_{d}\\ B_{1}&\cdots&B_{d}\end{bmatrix}\xrightarrow[s,t]{\frac{1}{s}\|\cdot\|_{F}^{2} }\begin{bmatrix}\alpha\\ \beta\end{bmatrix}=\begin{bmatrix}\alpha_{1}&\cdots&\alpha_{d}\\ \beta_{1}&\cdots&\beta_{d}\end{bmatrix}\] \[\begin{bmatrix}C_{1}&\cdots&C_{d}\\ D_{1}&\cdots&D_{d}\end{bmatrix}\overset{\frac{1}{s+t}\|\cdot\|_{F_{\mathcal{F}}}^{ 2}}{\longmapsto}\begin{bmatrix}\mu\\ \nu\end{bmatrix}=\begin{bmatrix}\mu_{1}&\cdots&\mu_{d}\\ \nu_{1}&\cdots&\nu_{d}\end{bmatrix}\] Then, using a direct sum construction, we have: \[\begin{bmatrix}A_{1}\oplus C_{1}&\cdots&A_{d}\oplus C_{d}\\ B_{1}\oplus D_{1}&\cdots&B_{d}\oplus D_{d}\end{bmatrix}\overset{\frac{1}{s+ t}\|\cdot\|_{F_{\mathcal{F}}}^{2}}{\longmapsto}\begin{bmatrix}\frac{s\alpha_{1}+t\mu_{1}}{s+ t}&\cdots&\frac{s\alpha_{d}+t\mu_{d}}{s+t}\\ \frac{s\beta_{1}+t\nu_{1}}{s+t}&\cdots&\frac{s\beta_{d}+t\nu_{d}}{s+t}\end{bmatrix} =\begin{bmatrix}\frac{s\alpha+t\mu}{s+t}\\ \frac{s\beta+t\nu}{s+t}\end{bmatrix}.\] Hence, \(\frac{s}{s+t}(\alpha,\beta)+\frac{t}{s+t}(\mu,\nu)\in\mathsf{Brac}_{d,s+t}\), as desired. 
The result above easily generalizes to more than two summands. **Corollary 3.6**.: _For all dimensions \(d\geq 2\) and all orders \(s_{1},\ldots,s_{k}\geq 1\), we have_ \[\frac{s_{1}}{s_{1}+\ldots+s_{k}}\mathsf{Brac}_{d,s_{1}}+\ldots+\frac{s_{k}}{s_ {1}+\ldots+s_{k}}\mathsf{Brac}_{d,s_{k}}\subseteq\mathsf{Brac}_{d,s_{1}+ \ldots+s_{k}}.\] _In particular, for all \(s\geq 1\) we have \(\mathsf{Brac}_{d,1}=\mathsf{Brac}_{d}\subseteq\mathsf{Brac}_{d,s}\)._ The generalized bracelet conditions on probability vectors introduced in Definition 3.3 are not easy to check in general, due to the fact that one needs to solve a quadratic problem in \(s\times s\) matrices. We present next a necessary condition for a pair of probability vectors to satisfy the generalized bracelet conditions that is easily verifiable. **Proposition 3.7**.: _For any pair of probability vectors satisfying the generalized bracelet condition \((\alpha,\beta)\in\mathsf{Brac}_{d,s}\), it holds that, for all \(i\in[d]\) such that \(\beta_{i}\geq 1-1/s\),_ \[\sqrt{\alpha_{i}}\sqrt{s\beta_{i}-(s-1)}\leq s\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{d}\sqrt{\alpha_{j}\beta_{j}}.\] Proof.: The inequality is a simple consequence of the conditions on the matrices \(A_{i},B_{j}\) from Definition 3.3. Start from \[-A_{i}B_{i}^{*}=\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{d}A_{j}B_{j}^{*},\] where \(i\in[d]\), and take the operator norm of both sides. For the left-hand side, we have the following lower bound (\(\sigma_{k}(\cdot)\) denote below the singular values of a matrix, ordered decreasingly): \[\|-A_{i}B_{i}^{*}\|=\sigma_{1}(A_{i}B_{i}^{*})\geq\sigma_{1}(A_{i})\sigma_{s}( B_{i}),\] where we apply [1, Eq. (III.20)]. Clearly, \(\sigma_{1}(A_{i})=\|A_{i}\|\geq s^{-1/2}\|A_{i}\|_{F}=\sqrt{\alpha_{i}}\). We also have \[\sigma_{s}(B_{i})^{2}=\|B_{i}\|_{F}^{2}-\sum_{k=1}^{s-1}\sigma_{k}(B_{i})^{2} \geq s\beta_{i}-(s-1),\] where we have used the fact that that all the singular values of \(B_{i}\) do not exceed \(1\), which follows from \(\sum_{k}B_{k}B_{k}^{*}=I_{s}\). Moving now to the right-hand side, we have: \[\Big{\|}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{d}A_{j}B_{j}^{*}\Big{\|}\leq\sum_{\begin{subarray}{c}j=1 \\ j\neq i\end{subarray}}^{d}\|A_{j}\|\|B_{j}\|\leq\sum_{\begin{subarray}{c}j=1 \\ j\neq i\end{subarray}}^{d}\|A_{j}\|_{F}\|B_{j}\|_{F}=s\sum_{\begin{subarray}{c}j=1 \\ j\neq i\end{subarray}}^{d}\sqrt{\alpha_{j}\beta_{j}},\] concluding the proof. Note that in the case \(s=1\), the necessary conditions given in the Proposition above reduce to the usual bracelet conditions from Eq. (2), exactly as the generalized bracelet conditions. In what follows, we analyze the sets \(\mathsf{Brac}_{d,s}\) for different values of the dimension \(d\) and of the generalization parameter \(s\). We will show that for \(d=2\) nothing new happens when the value of \(s\) changes, i.e., \(\mathsf{Brac}_{2,s}=\mathsf{Brac}_{2}\) for every \(s\geq 1\); however, for all dimensions \(d\geq 3\) we will obtain strict inclusion \(\mathsf{Brac}_{d,s}\supsetneq\mathsf{Brac}_{d}\) for every \(s>1\). Let us start with an auxiliary lemma. **Lemma 3.8**.: _Consider two real diagonal matrices \(X=\operatorname{diag}(x_{1},\cdots,x_{s})\) and \(Y=\operatorname{diag}(y_{1},\cdots,y_{s})\). The following conditions are equivalent:_ 1. _There exist_ \(U,V\in\mathcal{U}(s)\) _such that_ \(XUY=V\)_._ 2. 
_There exists_ \(\sigma\in\mathfrak{S}_{s}\) _such that_ \(|x_{i}y_{\sigma(i)}|=1\) _for every_ \(i\in[s]\)_._ Proof.: We shall prove the double implication. \((i)\Leftarrow(ii)\): Let \(\sigma\in\mathfrak{S}_{s}\) be such that \(|x_{i}y_{\sigma(i)}|=1\) for all \(i\in[s]\). Taking \(U\) to be the phase-permutation matrix corresponding to \(\sigma\) and phases \(\arg(x_{i}y_{\sigma(i)})\), we have \(XUY=I_{s}\). \((i)\Rightarrow(ii)\): If \(XUY=V\) for some unitary matrices \(U\) and \(V\), then the real diagonal matrices \(X,Y\) are both invertible; furthermore, we have \[I_{s}=V^{*}V=Y^{*}U^{*}X^{*}XUY=YU^{*}X^{2}UY,\] which implies that \[X^{2}U=UY^{-2}.\] The latter equation means that the \(s\) linearly independent column vectors of the unitary matrix \(U\) form \(s\) eigenvectors of \(X^{2}\), and so the \(s\) eigenvalues of \(X^{2}\) are exactly the \(s\) diagonal elements of \(Y^{-2}\). Therefore, the set of \(s\) diagonal elements of \(X^{2}\) (counting multiplicities) is equal to the set of \(s\) diagonal elements of \(Y^{-2}\). Hence, there exists \(\sigma\in\mathfrak{S}_{s}\) such that \(x_{i}^{2}=1/y_{\sigma(i)}^{2}\) for all \(i\in[s]\), which concludes the proof. **Proposition 3.9**.: _For \(d=2\) and all orders \(s\geq 1\), we have_ \[\mathsf{Brac}_{2,s}=\mathsf{Brac}_{2}=\left\{\left((p,1-p),(1-p,p)\right)\,: \,p\in[0,1]\right\}.\] Proof.: Fix \(s\geq 2\) and let \(\left((\alpha_{1},1-\alpha_{1}),(\beta_{1},1-\beta_{1})\right)\in\mathsf{Brac }_{2,s}\). It suffices to show that \(\beta_{1}=1-\alpha_{1}\). Let \(A_{1},A_{2},B_{1},B_{2}\in\mathcal{M}_{s}(\mathbb{C})\) be such that \(\frac{1}{s}\operatorname{Tr}(A_{1}A_{1}^{*})=\alpha_{1}\) and \(\frac{1}{s}\operatorname{Tr}(B_{1}B_{1}^{*})=\beta_{1}\) as well as \[A_{1}A_{1}^{*}+A_{2}A_{2}^{*} =I_{s}\] \[B_{1}B_{1}^{*}+B_{2}B_{2}^{*} =I_{s}\] \[A_{1}B_{1}^{*}+A_{2}B_{2}^{*} =0_{s}.\] Since \(A_{1}A_{1}^{*}\) and \(A_{2}A_{2}^{*}\) commute, there exists \(U\in\mathcal{U}(s)\) such that \[A_{1}A_{1}^{*} =U\operatorname{diag}(a_{1},\ldots,a_{s})U^{*}\] \[A_{2}A_{2}^{*} =U\operatorname{diag}(1-a_{1},\ldots,1-a_{s})U^{*}\] for some real numbers \(a_{1},\ldots,a_{s}\in[0,1]\). Moreover, using the singular value decomposition, there exist \(U_{1},U_{2}\in\mathcal{U}(s)\) such that \[A_{1} =U\operatorname{diag}(\sqrt{a_{1}},\ldots,\sqrt{a_{s}})U_{1}^{*}\] \[A_{2} =U\operatorname{diag}(\sqrt{1-a_{1}},\ldots,\sqrt{1-a_{s}})U_{2} ^{*}.\] Similarly, there exist \(V,V_{1},V_{2}\in\mathcal{U}(s)\) such that \[B_{1} =V\operatorname{diag}(\sqrt{b_{1}},\ldots,\sqrt{b_{s}})V_{1}^{*}\] \[B_{2} =V\operatorname{diag}(\sqrt{1-b_{1}},\ldots,\sqrt{1-b_{s}})V_{2} ^{*}\] with \(b_{1},\ldots,b_{s}\in[0,1]\). 
Denoting \(W_{1}:=U_{1}^{*}V_{1}\) and \(W_{2}:=U_{2}^{*}V_{2}\), we have \(W_{1},W_{2}\in\mathcal{U}(s)\) and the condition \(A_{1}B_{1}^{*}+A_{2}B_{2}^{*}=0\) now reads \[\operatorname{diag}(\sqrt{a_{1}},\ldots,\sqrt{a_{s}})W_{1} \operatorname{diag}(\sqrt{b_{1}},\ldots,\sqrt{b_{s}})\] \[=-\operatorname{diag}(\sqrt{1-a_{1}},\ldots,\sqrt{1-a_{s}})W_{2} \operatorname{diag}(\sqrt{1-b_{1}},\ldots,\sqrt{1-b_{s}}).\] For non-degenerate \(A_{1},A_{2},B_{1},B_{2}\) (i.e., if none of \(\alpha_{i}\)'s or \(\beta_{i}\)'s equals \(0\) or \(1\)), the condition \(A_{1}B_{1}^{*}+A_{2}B_{2}^{*}=0\) can be written as \[\operatorname{diag}\left(\sqrt{\frac{a_{1}}{1-a_{1}}},\ldots,\sqrt{\frac{a_{s }}{1-a_{s}}}\right)W_{1}\operatorname{diag}\left(\sqrt{\frac{b_{1}}{1-b_{1}}},\ldots,\sqrt{\frac{b_{s}}{1-b_{s}}}\right)=-W_{2}.\] Using Lemma 3.8, there exists a permutation \(\sigma\in\mathfrak{S}_{s}\) such that \(\frac{a_{i}}{1-a_{i}}\frac{b_{\sigma(i)}}{1-b_{\sigma(i)}}=1\), that is, \(a_{i}=1-b_{\sigma(i)}\), for every \(i\in[s]\). Therefore, \[\alpha_{1}=\tfrac{1}{s}\operatorname{Tr}(A_{1}A_{1}^{*})=\tfrac{1}{s}\sum_{i= 1}^{s}a_{i}=\tfrac{1}{s}\sum_{i=1}^{s}(1-b_{\sigma(i)})=1-\beta_{1};\] thus, for \(p:=\alpha_{1}\in[0,1]\) we have \[\begin{bmatrix}A_{1}&A_{2}\\ B_{1}&B_{2}\end{bmatrix}^{\frac{1}{s}\|\cdot\|_{F}^{2}}\begin{bmatrix}p&1-p\\ 1-p&p\end{bmatrix}.\] For general \(A_{1},A_{2},B_{1},B_{2}\), because the general linear group \(\mathcal{GL}_{s}(\mathbb{C})\) is dense in \(\mathcal{M}_{s}(\mathbb{C})\) and the map \(\frac{1}{s}\|\cdot\|_{F}^{2}\) is continuous, we again obtain \(\alpha_{1}=1-\beta_{1}\), which finishes the proof. We now move on to the case \(d=3\) and we show that \(\mathsf{Brac}_{3,s}\supsetneq\mathsf{Brac}_{3,1}=\mathsf{Brac}_{3}\) for every \(s\geq 2\). This fact will actually imply that \[\mathsf{Brac}_{d,s}\supsetneq\mathsf{Brac}_{d,1}=\mathsf{Brac}_{d}\ \text{ for all }\ d\geq 3,\,s\geq 2\] since for all \(d,n,s\) we have \(\mathsf{Brac}_{d+n,s}\cap\{\alpha_{d+1}=\cdots=\alpha_{d+n}=\beta_{d+1}= \cdots=\beta_{d+n}=0\}\cong\mathsf{Brac}_{d,s}\). Back to the claim about \(d=3\), we shall focus on the slice \[\mathsf{Brac}_{3,s}\cap\{\alpha_{3}=\beta_{2}=0\} =\big{\{}\big{(}(\alpha_{1},1-\alpha_{1},0),(\beta_{1},0,1-\beta_{ 1})\big{)}\in\Delta_{3}^{2}:\exists A_{1},A_{2},B_{1},B_{3}\in\mathcal{M}_{s}( \mathbb{C})\text{ s.t. }\] \[\quad A_{1}A_{1}^{*}+A_{2}A_{2}^{*}=B_{1}B_{1}^{*}+B_{3}B_{3}^{*}= I_{s},\,A_{1}B_{1}^{*}=0_{s}, \tag{6}\] \[\tfrac{1}{s}\mathrm{Tr}(A_{1}A_{1}^{*})=\alpha_{1},\,\tfrac{1}{s }\mathrm{Tr}(A_{2}A_{2}^{*})=1-\alpha_{1},\] \[\tfrac{1}{s}\mathrm{Tr}(B_{1}B_{1}^{*})=\beta_{1},\,\tfrac{1}{s} \mathrm{Tr}(B_{3}B_{3}^{*})=1-\beta_{1}\,\big{\}}.\] **Proposition 3.10**.: _For \(d=3\) and every \(s\geq 1\), we have_ \[\mathsf{Brac}_{3,s}\cap\{\alpha_{3}=\beta_{2}=0\}=\big{\{}\big{(}(\alpha_{1},1 -\alpha_{1},0),(\beta_{1},0,1-\beta_{1})\big{)}\in\Delta_{3}^{2}\,:\,\lceil \alpha_{1}s\rceil+\lceil\beta_{1}s\rceil\leq s\big{\}}\,.\] Proof.: Let \(s\geq 1\). We shall prove the double inclusion. "\(\subseteq\)": Consider matrices \(A_{1},A_{2},B_{1},B_{3}\) as in Eq. (6) and let \(\sqrt{a_{1}}\geq\ldots\geq\sqrt{a_{s}}\) be the singular values of \(A_{1}\). The eigenvalues of \(A_{1}A_{1}^{*}\) are therefore \(a_{1}\geq\ldots\geq a_{s}\geq 0\), and \(A_{1}A_{1}^{*}+A_{2}A_{2}^{*}=I_{s}\) guarantees that \(a_{1}\leq 1\). 
Since \(\mathrm{rk}(A_{1})=\mathrm{rk}(A_{1}A_{1}^{*})=\,\) the number of nonzero \(a_{i}\)'s, it follows that \[\alpha_{1}s=\mathrm{Tr}(A_{1}A_{1}^{*})=\sum_{i=1}^{s}a_{i}=\sum_{i=1}^{\mathrm{ rk}(A_{1})}a_{i}\leq\mathrm{rk}(A_{1})a_{1}\leq\mathrm{rk}(A_{1}).\] In consequence, \(\lceil\alpha_{1}s\rceil\leq\mathrm{rk}(A_{1})\) and, similarly, \(\lceil\beta_{1}s\rceil\leq\mathrm{rk}(B_{1})\). Finally, from \(A_{1}B_{1}^{*}=0\) we obtain \(\mathrm{rk}\,B_{1}^{*}\leq\dim\ker A_{1}=s-\mathrm{rk}\,A_{1}\), and so \(\mathrm{rk}(A_{1})+\mathrm{rk}(B_{1})\leq s\), which proves the first inclusion. "\(\supseteq\)": Conversely, given \(\alpha_{1},\beta_{1}\in[0,1]\) satisfying \(\lceil\alpha_{1}s\rceil+\lceil\beta_{1}s\rceil\leq s\), let us consider \[A_{1} :=\operatorname{diag}\left(\sqrt{a_{1}},\ldots,\sqrt{a_{\lceil \alpha_{1}s\rceil}},0,\ldots,0\right)\] \[A_{2} :=\operatorname{diag}\left(\sqrt{1-a_{1}},\ldots,\sqrt{1-a_{ \lceil\alpha_{1}s\rceil}},1,\ldots,1\right)\] \[B_{1} :=\operatorname{diag}\left(0,\ldots,0,\sqrt{b_{1}},\ldots,\sqrt{ b_{\lceil\beta_{1}s\rceil}}\right)\] \[B_{3} :=\operatorname{diag}\left(1,\ldots,1,\sqrt{1-b_{1}},\ldots,\sqrt {1-b_{\lceil\beta_{1}s\rceil}}\right)\!,\] where \[a_{1}=\cdots=a_{\lfloor\alpha_{1}s\rfloor}=1\] and, in the case when \(\alpha_{1}s\) is not an integer, \[a_{\lceil\alpha_{1}s\rceil}=a_{\lfloor\alpha_{1}s\rfloor+1}=\alpha_{1}s- \lfloor\alpha_{1}s\rfloor\in(0,1).\] The \(b_{i}\)'s are defined analogously, using \(\beta_{1}s\). The assumption \(\lceil\alpha_{1}s\rceil+\lceil\beta_{1}s\rceil\leq s\) guarantees that the non-zero elements of \(A_{1}\) and \(B_{1}\) do not overlap, which implies that \(A_{1}B_{1}^{*}=0_{s}\). One can easily verify that all the other conditions from Eq. (6) hold as well, and so the proof is finished. Let us consider \[E(s):=\left\{(\alpha_{1},\beta_{1})\in[0,1]^{2}\colon\lceil\alpha_{1}s\rceil +\lceil\beta_{1}s\rceil\leq s\right\}.\] The sets \(E(2),E(3),E(4),E(5)\) are displayed in Figure 2. Note that every \(E(s)\) contains the axes: \[E(1)=\left\{(\alpha_{1},0)\,:\,\alpha_{1}\in[0,1]\right\}\cup\left\{(0,\beta_ {1})\,:\,\beta_{1}\in[0,1]\right\}\subseteq E(s)\ \text{ for every }s\geq 1.\] Moreover, we have \[\left\{(\alpha_{1},\beta_{1})\,:\,\alpha_{1}+\beta_{1}\leq 1-2/s\right\} \subseteq E(s)\subseteq\left\{(\alpha_{1},\beta_{1})\,:\,\alpha_{1}+\beta_{1} \leq 1\right\}\ \text{ for every }s\geq 1;\] hence, \[\overline{\lim_{s\to\infty}E(s)}=\left\{(\alpha_{1},\beta_{1})\in[0,1]^{2}\,: \,\alpha_{1}+\beta_{1}\leq 1\right\}.\] This shows that, after taking the limit \(s\to\infty\) and the closure, the bracelet conditions corresponding to the slice considered in Eq. (6) become trivial; this result is in the spirit of Theorem 2.8. Figure 2. The (interior of the) sets \(E(s)\) for \(s=2,3,4,5\). ### Generalized bracelet matrices Similarly to Definition 3.1, we introduce the set of generalized bracelet matrices. **Definition 3.11**.: _A bistochastic matrix \(B\in\mathsf{B}_{d}\) is called a generalized bracelet matrix of order \(s\) if all pairs of rows and all pairs of columns of \(B\) satisfy the generalized bracelet condition of order \(s\) from Eq. (5). That is, the set of generalized bracelet matrices is defined as_ \[\mathsf{L}_{d,s}:=\big{\{}B\in\mathsf{B}_{d}:\,\forall i_{1}\neq i_{2}\,(B_{i _{1}\cdot},B_{i_{2}\cdot})\in\mathsf{Brac}_{d,s}\text{ and }\forall j_{1}\neq j_{2}\,(B_{j_{1}},B_{j_{2}})\in \mathsf{Brac}_{d,s}\big{\}}. 
\tag{7}\] The following result is a generalization of Proposition 3.2: **Proposition 3.12**.: _For all dimension \(d\geq 2\) and all order \(s\geq 1\) we have_ \[\mathsf{U}_{d,s}\subseteq\mathsf{L}_{d,s}\subseteq\mathsf{B}_{d}.\] _In particular, the closure of the set of all generalized bracelet matrices is the full Birkhoff polytope:_ \[\overline{\bigcup_{s\geq 1}\mathsf{L}_{d,s}}=\mathsf{B}_{d}.\] Proof.: The only fact that needs checking is \(\mathsf{U}_{d,s}\subseteq\mathsf{L}_{d,s}\), as then the claim about \(\bigcup_{s\geq 1}\mathsf{L}_{d,s}\) will follow from Theorem 2.8. Fix \(B\in\mathsf{U}_{d,s}\) and let \(U\in\mathcal{U}(ds)\) be the corresponding unitary matrix, i.e., \(\frac{1}{s}\operatorname{Tr}(U_{ij}U_{ij}^{*})=B_{ij}\) for all \(i,j\in[d]\). Using the unitarity of \(U\), one easily verifies that the pair of different rows \((B_{i_{1}\cdot},B_{i_{2}\cdot})\) satisfies the generalized bracelet condition \(\mathsf{Brac}_{d,s}\) with matrices \(U_{i_{1}1},\ldots,U_{i_{1}d};U_{i_{2}1},\ldots,U_{i_{2}d}\in\mathcal{M}_{s}( \mathbb{C})\). Analogously for columns; hence \(B\in\mathsf{L}_{d,s}\), as claimed. We show next that the (non-convex) sets of generalized bracelet matrices \(\mathsf{L}_{d,s}\) have a very intricate inclusion structure. To this end, we study the intersections of \(\mathsf{L}_{d,s}\) and of \(\mathsf{U}_{d,s}\) with segments connecting two extremal points of the Birkhoff polytope. **Proposition 3.13**.: _Let \(d\geq 3\) and consider permutations \(\pi,\sigma\in\mathfrak{S}_{d}\) such that \(\pi^{-1}\sigma\) has a \(p\)-cycle for some \(p\geq 3\). Then the intersection of the segment \([\pi,\sigma]\) with \(\mathsf{U}_{d,s}\) is equal to the intersection of \([\pi,\sigma]\) with \(\mathsf{L}_{d,s}\), and coincides with the discrete set of convex mixtures of \(\pi\) and \(\sigma\) with rational weights with denominator \(s\):_ \[\{\lambda\in[0,1]\,:\,(1-\lambda)\pi+\lambda\sigma\in\mathsf{U}_{d,s}\}=\{ \lambda\in[0,1]\,:\,(1-\lambda)\pi+\lambda\sigma\in\mathsf{L}_{d,s}\}=\{k/s \colon k=0,\ldots,s\}.\] _In particular, this holds for the \(1\)-faces (i.e. edges) of the Birkhoff polytope \(\mathsf{B}_{d}\) (for which \(\pi^{-1}\sigma\) is a \(d\)-cycle)._ Proof.: The inclusion of the first set in the second one is trivial. We shall prove that the second set is contained in the third, and then that the third set is contained in the first. For the first inclusion, without loss of generality we may assume that the decomposition of \(\pi^{-1}\sigma\) contains the cycle \((1,2,\ldots,p)\) for some \(p\geq 3\). Furthermore, we may also assume that \(\pi\) is the identity permutation; hence, \((1-\lambda)\pi+\lambda\sigma\) has the form: \[\begin{bmatrix}1-\lambda&\lambda&0&\cdots&0&0\\ 0&1-\lambda&\lambda&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&1-\lambda&\lambda\\ \lambda&0&0&\cdots&0&1-\lambda\end{bmatrix}\quad\bigoplus\ \Big{[}\text{another matrix of size }(d-p)\times(d-p)\Big{]}.\] Considering any two rows of the \(p\times p\) submatrix, we obtain, after permuting the columns, the slice analysed in Prop. 3.10: if \((1-\lambda)\pi+\lambda\sigma\in\mathsf{L}_{d,s}\), there exist \(A_{1},A_{2},B_{1},B_{3}\in\mathcal{M}_{s}(\mathbb{C})\) such that \[\begin{bmatrix}A_{1}&A_{2}&0&0&\cdots\\ B_{1}&0&B_{3}&0&\cdots\end{bmatrix}_{2s\times ps}\xrightarrow{\frac{1}{s}\| \cdot\|_{p}^{2}}\begin{bmatrix}\lambda&1-\lambda&0&0&\cdots\\ 1-\lambda&0&\lambda&0&\cdots\end{bmatrix}_{2\times p}.\] As in the "\(\subseteq\)" part of the proof of Prop. 
3.10, we deduce that \(\operatorname{rk}(A_{1})\geq\lceil\lambda s\rceil\) and \(\operatorname{rk}(B_{1})\geq\lceil(1-\lambda)s\rceil\), and then \(\lceil\lambda s\rceil+\lceil(1-\lambda)s\rceil\leq s\); thus, \(\lambda=k/s\) for some \(k\in\{0,1,\ldots,s\}\), proving the first claim. As for the second inclusion, use Corollary 2.7, together with the fact that permutation matrices are unistochastic, to conclude that \[\frac{s-k}{s}\pi+\frac{k}{s}\sigma\in\mathsf{U}_{d,s}.\] Finally, the claim about the face structure of the Birkhoff polytope can be found in, e.g., [1, Theorem 2.2]. **Corollary 3.14**.: _The family of sets \(\mathsf{U}_{d,s}\) is not increasing in \(s\)._ Note that the result above was the motivation behind the study of the slice of the bracelet condition set considered in Eq. (6). Indeed, if we consider the segment \([(1)(2)(3),(123)]\) connecting the identity and the full cycle permutations of \(\mathfrak{S}_{3}\), we have \[(1-\lambda)\cdot(1)(2)(3)+\lambda\cdot(123)=\begin{bmatrix}1-\lambda&\lambda& 0\\ 0&1-\lambda&\lambda\\ \lambda&0&1-\lambda\end{bmatrix}.\] We see that any two rows (or columns) of the matrix above have the zero pattern found in Eq. (6). As a final remark, we show that the blue line in Fig. 3, i.e. the set of all bistochastic matrices of the form \[B=\begin{bmatrix}x&y&y\\ y&x&y\\ y&y&x\end{bmatrix},\] is a subset of \(\mathsf{U}_{3,2}\). To this end, we consider a variable \(q\in[0,1]\) and define \[w_{\pm}=\tfrac{1}{2}\big{(}1-q\pm\sqrt{1+2q-3q^{2}}\,\big{)},\] and \[U_{q}:=\begin{bmatrix}q&0&w_{-}&0&w_{+}&0\\ 0&q&0&w_{+}&0&w_{-}\\ w_{+}&0&q&0&w_{-}&0\\ 0&w_{-}&0&q&0&w_{+}\\ w_{-}&0&w_{+}&0&q&0\\ 0&w_{+}&0&w_{-}&0&q\end{bmatrix}\stackrel{{\varphi_{3,2}}}{{ \longrightarrow}}\begin{bmatrix}q^{2}&\tfrac{1}{2}(1-q^{2})&\tfrac{1}{2}(1-q^ {2})\\ \tfrac{1}{2}(1-q^{2})&q^{2}&\tfrac{1}{2}(1-q^{2})\\ \tfrac{1}{2}(1-q^{2})&\tfrac{1}{2}(1-q^{2})&q^{2}\end{bmatrix}\in\mathsf{U}_{3,2}.\] One can easily check by direct computation that the matrix \(U_{q}\) is unitary (actually orthogonal, see next section) and that the associated bistochastic matrices fill in the blue line in Fig. 3. It is interesting to notice that \(U_{q}\) is equal to a direct sum \(U_{q}=U_{135}\oplus U_{246}\) of two circulant matrices acting on odd, resp. even, indices. Note that the blue point at the bottom end of the blue line (i.e., the matrix \(\tfrac{1}{2}(P_{(132)}+P_{(123)})\)), corresponding to \(q=0\), has also been discussed in Example 2.3, where a different \(6\times 6\) orthogonal matrix has been used to show that it is \(2\)-unistochastic. Finally, consider the red point in Fig. 3, corresponding to the convex combination, with weights \(2/3\) and \(1/3\), respectively, of the blue and green points. It corresponds to the bistochastic matrix \[B=\frac{2}{3}\begin{bmatrix}0&1/2&1/2\\ 1/2&0&1/2\\ 1/2&1/2&0\end{bmatrix}+\frac{1}{3}\begin{bmatrix}1/9&4/9&4/9\\ 4/9&1/9&4/9\\ 4/9&4/9&1/9\end{bmatrix}=\begin{bmatrix}1/27&13/27&13/27\\ 13/27&1/27&13/27\\ 13/27&13/27&1/27\end{bmatrix}\] Since the green point lies on the hypocycloid curve: \(\sqrt{4/9\cdot 4/9}=2\sqrt{1/9\cdot 4/9}\), it corresponds to a unistochastic matrix. The blue point corresponds, as shown above, to a \(2\)-unistochastic matrix. Hence, in virtue of Proposition 2.6, \(B\) is \(3\)-unistochastic. This disproves a bistochastic version of [1, 2, 10], which suggested that the set of \(3\)-unistochastic matrices restricted to the simplex in Fig. 
3 coincides with the union of the region delimited by the hypocycloid and the yellow Star of David shape generated by the two triangles. ## 4. Generalized orthostochastic matrices Much of the theory developed in the previous sections for generalized unistochastic matrices can be carried out to the case of _generalized orthostochastic_ matrices, which is what we do in this section. However, since many things are very similar, we shall only present the main definitions and some observations. We leave the detailed study of generalized orthostochastic matrices (and that of their quaternionic counterpart, the _qustochastic_ matrices) to future work. Recall that the function \(\varphi_{d,s}\) from Definition 2.1 maps a \(d\times d\) block-matrix (with blocks of size \(s\times s\)) to the matrix of normalized squares of Frobenius norms of the blocks. We denote by \(\mathcal{O}(n)\) the group of \(n\times n\) orthogonal matrices. **Definition 4.1**.: _We define_ \[\mathsf{O}_{d,s}:=\varphi_{d,s}(\mathcal{O}(ds))\] _to be the set of generalized orthostochastic matrices._ As in the complex case, for \(s=1\) we recover the usual orthostochastic matrices, which have received, along with the unistochastic matrices, a lot of attention in the literature [1, 2, 3, 4, 5]. Clearly, \(\mathsf{O}_{d,s}\subseteq\mathsf{U}_{d,s}\), with the inclusion being strict for \(d\geq 3\). Importantly, the van der Waerden matrix \(J_{d}/d\) is orthostochastic if and only if there exists a real Hadamard matrix of order \(d\)[1, 13, 14]. This can only happen if \(d=2\) or if \(d\) is a multiple of four, and it has long been conjectured that these conditions are also sufficient. In particular, the distance between \(J_{3}/3\) and the set \(\mathsf{O}_{d,1}\) is equal to \(\sqrt{2}/3\)[5, Proposition 3.2]. We prove now the main result of this section. **Proposition 4.2**.: _For all dimensions \(d\geq 2\) and all orders \(s\geq 1\), we have_ \[\mathsf{U}_{d,s}\subseteq\mathsf{O}_{d,2s}.\] Proof.: The result follows from the standard embedding \(\mathcal{U}(n)\subseteq\mathcal{O}(2n)\), obtained by replacing a complex entry \(z_{ij}\) by the \(2\times 2\) block \(\left[\begin{smallmatrix}\operatorname{Re}z_{ij}&\operatorname{Im}z_{ij}\\ -\operatorname{Im}z_{ij}&\operatorname{Re}z_{ij}\end{smallmatrix}\right]\). Figure 3. A slice through the Birkhoff polytope \(\mathsf{B}_{3}\) corresponding to the simplex generated by the permutation matrices \(P_{\mathrm{id}}\), \(P_{(123)}\), and \(P_{(132)}\); cf. Figure 1 and [1, Figure 7(b)]. **Corollary 4.3**.: _For all \(d\notin\{2\}\cup 4\mathbb{N}\), we have_ \[J_{d}/d\in\mathsf{O}_{d,2}\setminus\mathsf{O}_{d,1}.\] Proof.: We have \(J_{d}/d\in\mathsf{U}_{d,1}\subseteq\mathsf{O}_{d,2}\). On the other hand, since there cannot exist a real Hadamard matrix of order \(d\), we have \(J_{d}/d\notin\mathsf{O}_{d,1}\), as claimed. ## 5. Random generalized unistochastic matrices In the previous sections we have introduced and discussed generalized unistochastic matrices, which form the set \[\mathsf{U}_{d,s}=\varphi_{d,s}(\mathcal{U}(ds)),\] where \(\varphi_{d,s}\) is the map from Eq. (2.1). As the unitary group \(\mathcal{U}(ds)\) comes equipped with the (normalized) Haar measure \(\mathfrak{h}_{ds}\), it is natural to introduce and examine its image measure via \(\varphi_{d,s}\). 
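The embedding used in the proof of Proposition 4.2 is easy to write down explicitly. The sketch below (our code; the helper names are illustrative) realifies the \(3\times 3\) Fourier matrix into a \(6\times 6\) orthogonal matrix and recovers the van der Waerden matrix \(J_{3}/3\) from its \(2\times 2\) blocks, which is the content of Corollary 4.3 for \(d=3\).

```python
import numpy as np

def complexify_to_real(U):
    """Standard embedding U(n) -> O(2n): each entry z becomes [[Re z, Im z], [-Im z, Re z]]."""
    n = U.shape[0]
    R = np.zeros((2 * n, 2 * n))
    R[0::2, 0::2] = U.real
    R[0::2, 1::2] = U.imag
    R[1::2, 0::2] = -U.imag
    R[1::2, 1::2] = U.real
    return R

def phi_ds(M, d, s):
    blocks = M.reshape(d, s, d, s)
    return np.einsum('ikjl,ikjl->ij', blocks, blocks.conj()).real / s

d = 3
F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)  # Fourier matrix
O = complexify_to_real(F)                         # an orthogonal matrix of size 2d
assert np.allclose(O @ O.T, np.eye(2 * d))
print(phi_ds(O, d, 2))                            # the van der Waerden matrix J_3/3, so J_3/3 is in O_{3,2}
```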
**Definition 5.1**.: _We endow the set of generalized unistochastic matrices \(\mathsf{U}_{d,s}\) with the probability measure_ \[\mu_{d,s}=(\varphi_{d,s})_{\#}\mathfrak{h}_{ds},\] _that is, the image measure of the normalized Haar distribution \(\mathfrak{h}_{ds}\) on \(\mathcal{U}(ds)\) through the map \(\varphi_{d,s}\). In other words, if \(U\in\mathcal{U}(ds)\) is Haar-distributed, then \(B:=\varphi_{d,s}(U)\in\mathsf{U}_{d,s}\) is \(\mu_{d,s}\)-distributed._ We recall the following result about the first few joint moments of the entries of a Haar-distributed random unitary matrix. **Lemma 5.2** ([10, Proposition 4.2.3]).: _Let \(U=(U_{ij})_{i,j\in[n]}\in\mathcal{U}(n)\) be Haar-distributed. We have_ \[\mathbb{E}\Big{[}|U_{ij}|^{2}\Big{]} =\frac{1}{n}\qquad(1\leq i,j\leq n)\] \[\mathbb{E}\Big{[}|U_{ij}|^{4}\Big{]} =\frac{2}{n(n+1)}\qquad(1\leq i,j\leq n)\] \[\mathbb{E}\Big{[}|U_{ij}|^{2}|U_{i^{\prime}j}|^{2}\Big{]} =\mathbb{E}\Big{[}|U_{ij}|^{2}|U_{i^{\prime}j^{\prime}}|^{2}\Big{]} =\frac{1}{n(n+1)}\qquad(i\neq i^{\prime},j\neq j^{\prime})\] \[\mathbb{E}\Big{[}|U_{ij}|^{2}|U_{i^{\prime}j^{\prime}}|^{2}\Big{]} =\frac{1}{n^{2}-1}\qquad(i\neq i^{\prime},j\neq j^{\prime}).\] We leverage now this result to obtain the first moments of a random generalized unistochastic matrix. **Proposition 5.3**.: _For a Haar-distributed random unitary matrix \(U\in\mathcal{U}(ds)\), consider the corresponding \(\mu_{d,s}\)-distributed bistochastic matrix_ \[B:=\varphi_{d,s}(U)=\left(\frac{1}{s}||U_{ij}||_{F}^{2}\right)_{i,j\in[d]}\in \mathsf{B}_{d}.\] _For all \(1\leq i\neq i^{\prime},j\neq j^{\prime}\leq d\) and \(n:=ds\) we have:_ \[\mathbb{E}\Big{[}B_{ij}\Big{]} =\frac{1}{d}\] \[\mathbb{E}\Big{[}B_{ij}^{2}\Big{]} =\frac{d(s^{2}+1)-2}{d(n^{2}-1)}\] \[\mathbb{E}\Big{[}B_{ij}B_{i^{\prime}j}\Big{]} =\mathbb{E}\Big{[}B_{ij}B_{ij^{\prime}}\Big{]} =\frac{ds^{2}-1}{d(n^{2}-1)}\] \[\mathbb{E}\Big{[}B_{ij}B_{i^{\prime}j^{\prime}}\Big{]} =\frac{s^{2}}{n^{2}-1}.\] Proof.: We show the different claims one by one, using Lemma 5.2. \[\mathbb{E}\Big{[}B_{ij}\Big{]}=\frac{1}{s}\mathbb{E}\Big{[}\operatorname{Tr} \bigl{(}U_{ij}U_{ij}^{*}\bigr{)}\Big{]}=\frac{1}{s}\mathbb{E}\Big{[}||U_{ij}||_{ F}^{2}\Big{]}=\frac{1}{s}\sum_{k,l=1}^{s}\mathbb{E}\Big{[}|U_{ij}(k,l)|^{2}\Big{]}= \frac{1}{s}\cdot s^{2}\cdot\frac{1}{n}=\frac{1}{d}.\] \[\mathbb{E}\Big{[}B_{ij}^{2}\Big{]} =\frac{1}{s^{2}}\mathbb{E}\Big{[}\operatorname{Tr}\bigl{(}U_{ij }U_{ij}^{*}\bigr{)}^{2}\Big{]}=\frac{1}{s^{2}}\mathbb{E}\Big{[}\big{(}\sum_{k,l =1}^{s}|U_{ij}(k,l)|^{2}\big{)}\big{(}\sum_{p,q=1}^{s}|U_{ij}(p,q)|^{2}\big{)} \Big{]}\] \[=\frac{1}{s^{2}}\cdot s^{2}\cdot\Big{[}\frac{2}{n(n+1)}+2(s-1) \frac{1}{n(n+1)}+(s-1)^{2}\frac{1}{n^{2}-1}\Big{]}=\frac{d(s^{2}+1)-2}{d(n^{2} -1)}.\] \[\mathbb{E}\Big{[}B_{ij}B_{ij^{\prime}}\Big{]} =\frac{1}{s^{2}}\mathbb{E}\Big{[}\big{(}\sum_{k,l=1}^{s}|U_{ij}(k,l)|^{2}\big{)}\big{(}\sum_{p,q=1}^{s}|U_{ij^{\prime}}(p,q)|^{2}\big{)}\Big{]}\] \[=\frac{1}{s^{2}}\mathbb{E}\Big{[}\big{(}\sum_{k,l=1}^{s}|U_{ij}(k,l)|^{2}\big{)}\big{(}\sum_{q=1}^{s}|U_{ij^{\prime}}(k,q)|^{2}+\sum_{ \begin{subarray}{c}p,q=1\\ p\neq k\end{subarray}}^{s}|U_{ij^{\prime}}(p,q)|^{2}\big{)}\Big{]}\] \[=\frac{1}{s^{2}}\cdot s^{2}\cdot\Big{[}s\frac{1}{n(n+1)}+s(s-1) \frac{1}{n^{2}-1}\Big{]}.\] **Remark 5.4**.: _We could get the same results using the (graphical) Weingarten calculus [10, 11] used to compute general integrals over the unitary group with respect to the Haar measures._ Let us now analyze the correlations between different matrix elements of \(B\). 
Recall that _Pearson's correlation coefficient_ of a pair of random variables \((X,Y)\) is defined as \[\rho(X,Y):=\frac{\operatorname{Cov}(X,Y)}{\sqrt{\operatorname{Var}(X)}\cdot \sqrt{\operatorname{Var}(Y)}}.\] **Corollary 5.5**.: _For all \(d\geq 2\), \(s\geq 1\), \(n:=ds\), and \(1\leq i\neq i^{\prime},j\neq j^{\prime}\leq d\), we have:_ \[\operatorname{Var}(B_{ij}) =\frac{(d-1)^{2}}{d^{2}(n^{2}-1)}\] \[\operatorname{Cov}(B_{ij},B_{i^{\prime}j}) =\operatorname{Cov}(B_{ij},B_{ij^{\prime}}) =-\frac{d-1}{d^{2}(n^{2}-1)}\] \[\rho(B_{ij},B_{i^{\prime}j}) =\rho(B_{ij},B_{ij^{\prime}}) =-\frac{1}{d-1}\] \[\operatorname{Cov}(B_{ij},B_{i^{\prime}j^{\prime}}) =\frac{1}{d^{2}(n^{2}-1)}\] \[\rho(B_{ij},B_{i^{\prime}j^{\prime}}) =\frac{1}{(d-1)^{2}}.\] Proof.: This follows by direct computation from Prop. 5.3; the details are left to the reader. **Remark 5.6**.: _Note that the covariance (and the correlation coefficient) of elements of \(B\) situated on the same row (or column) is negative; this anti-correlation is explained by the normalization conditions of bistochastic matrices. However, the correlation of elements not belonging to the same row and column is positive._ _Let us also note that while the covariance of the matrix elements of \(B\) decreases (in absolute value) with the parameter \(s\), the correlation coefficient is constant (at fixed matrix dimension \(d\))._ The fact that the variance of \(B_{ij}\) decreases with the parameter \(s\) at fixed matrix dimension \(d\) means that the distribution \(\mu_{d,s}\) concentrates, as \(s\to\infty\), around the van der Waerden matrix. The same phenomenon can be seen at the level of spectra, see Figure 4: the non-trivial eigenvalues of the random matrices \(B\sim\mu_{d,s}\) tend to concentrate around the origin as \(s\) grows. We point the reader interested in spectral properties of bistochastic and unistochastic matrices to the papers [23, 24]. Finally, note the pair of complex eigenvalues outside the gray-bounded region in the middle top panel of Figure 4; they are a signature of the fact that \(\mathsf{U}_{3,2}\supsetneq\mathsf{U}_{3,1}\), see Example 2.3. **Acknowledgements.** We thank Karol Zyczkowski for bringing the paper [1] to our attention. I.N. was supported by the ANR projects ESQuisses, grant number ANR-20-CE47-0014-01 and STARS, grant number ANR-20-CE40-0008, and by the PHC program _Star_ (Applications of random matrix theory and abstract harmonic analysis to quantum information theory). A.S. was supported by the ANR project Quantum Trajectories, grant number ANR-20-CE40-0024-01. Z.O. would like to extend my sincere gratitude to Professor Ion Nechita and Anna Szczepanek for their invaluable guidance and insightful discussions. Additionally, I also wish to express my appreciation to all professors at IMT for their exceptional guidance and support, which makes my M1 year in Toulouse a truly fulfilling and enriching experience. Figure 4. Spectra of random generalized bistochastic matrices. We plot the (complex) eigenvalues of \(10\,000\) samples from the measure \(\mu_{d,s}\) introduced in Def. 5.1. On the top row, we have \(d=3\), and, respectively, \(s=1,2,3\). On the bottom row, \(d=4\) and \(s=1,2,4\). We also plot in gray the hypocycloid curves which are conjectured to bound the spectra in the unistochastic case \(s=1\), see [23, Section 4.3].
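The constants of Corollary 5.5 and the concentration phenomenon of Remark 5.6 (visible in Figure 4) can also be reproduced with a short simulation. The following sketch is illustrative and reuses the same Haar sampler as above; for \(d=3\) one expects \(\rho=-1/2\) within a row, \(\rho=1/4\) for disjoint entries, and non-trivial eigenvalues shrinking towards the origin as \(s\) grows.

```python
import numpy as np

def haar_unitary(n, rng):
    G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(G)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

def sample_B(d, s, rng):
    U = haar_unitary(d * s, rng)
    return (np.abs(U) ** 2).reshape(d, s, d, s).sum(axis=(1, 3)) / s

rng = np.random.default_rng(2)
d = 3
for s in (1, 2, 3):
    B = np.array([sample_B(d, s, rng) for _ in range(10_000)])
    rho_row = np.corrcoef(B[:, 0, 0], B[:, 0, 1])[0, 1]   # same row, expect -1/(d-1)
    rho_far = np.corrcoef(B[:, 0, 0], B[:, 1, 1])[0, 1]   # disjoint, expect 1/(d-1)^2
    ev = np.linalg.eigvals(B)                             # batched spectra, shape (N, d)
    nontrivial = np.sort(np.abs(ev), axis=1)[:, :-1]      # drop the trivial eigenvalue 1
    print(f"s={s}: std(B_11)={B[:, 0, 0].std():.4f}  "
          f"rho_row={rho_row:+.3f}  rho_far={rho_far:+.3f}  "
          f"mean |nontrivial eigenvalue|={nontrivial.mean():.3f}")
```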
2305.19800
Accurate and Efficient Structural Ensemble Generation of Macrocyclic Peptides using Internal Coordinate Diffusion
Macrocyclic peptides are an emerging therapeutic modality, yet computational approaches for accurately sampling their diverse 3D ensembles remain challenging due to their conformational diversity and geometric constraints. Here, we introduce RINGER, a diffusion-based transformer model using a redundant internal coordinate representation that generates three-dimensional conformational ensembles of macrocyclic peptides from their 2D representations. RINGER provides fast backbone and side-chain sampling while respecting key structural invariances of cyclic peptides. Through extensive benchmarking and analysis against gold-standard conformer ensembles of cyclic peptides generated with metadynamics, we demonstrate how RINGER generates both high-quality and diverse geometries at a fraction of the computational cost. Our work lays the foundation for improved sampling of cyclic geometries and the development of geometric learning methods for peptides.
Colin A. Grambow, Hayley Weir, Nathaniel L. Diamant, Gabriele Scalia, Tommaso Biancalani, Kangway V. Chuang
2023-05-30T16:39:18Z
http://arxiv.org/abs/2305.19800v2
RINGER: Rapid Conformer Generation for Macrocycles with Sequence-Conditioned Internal Coordinate Diffusion ###### Abstract Macrocyclic peptides are an emerging therapeutic modality, yet computational approaches for accurately sampling their diverse 3D ensembles remain challenging due to their conformational diversity and geometric constraints. Here, we introduce RINGER, a diffusion-based transformer model for sequence-conditioned generation of macrocycle structures based on internal coordinates. RINGER provides fast backbone sampling while respecting key structural invariances of cyclic peptides. Through extensive benchmarking and analysis against gold-standard conformer ensembles of cyclic peptides generated with metadynamics, we demonstrate how RINGER generates both high-quality and diverse geometries at a fraction of the computational cost. Our work lays the foundation for improved sampling of cyclic geometries and the development of geometric learning methods for peptides. ## 1 Introduction Macrocyclic peptides are an important therapeutic modality in modern drug discovery that occupy a unique chemical and pharmacological space between small and large molecules [1; 2; 3]. These cyclic peptides exhibit improved structural rigidity and metabolic stability compared to their linear counterparts [4], yet retain key conformational flexibility and diversity to bind shallow protein interfaces [5]. However, computational approaches for modeling their structural ensembles remain limited compared to small molecules and proteins in terms of computational speed, accuracy (sample quality), and conformational diversity [6]. Critically, scalable and accurate tools are necessary to enable rational design of macrocyclic drugs; access to these tools can significantly impact optimization of key properties including binding affinity [7; 8], permeability [9; 10; 11], and oral bioavailability [12]. Several key challenges hinder fast and effective macrocycle conformer generation: 1) Macrocyclic peptides exhibit diverse molecular structures and chemical modifications, including varying ring size, stereochemistry, \(N\)-methylation, and more [13]. Their structural diversity, along with the increased number of rotatable bonds, results in a vast conformational space that is considerably more expensive to sample computationally. 2) Macrocycles are subject to complex non-linear constraints due to ring closure. The atomic positions, angles, and dihedrals of the macrocycle backbone are highly interdependent, and additional complex intramolecular interactions make this process inherently difficult to model [14]. 3) Experimental X-ray and NMR structures for macrocycles are lacking (\(\sim 10^{3}\)) in comparison to small molecules (\(\sim 10^{6}\) in the Cambridge Structural Database [15]) and proteins (\(\sim 10^{5}\) in the Protein Data Bank [16]). The scarcity of available experimental data has made it difficult to integrate observational data to improve structural predictions or train machine learning-based approaches. Together, the vast conformational space combined with limited data make modeling and sampling of macrocycles not only conceptually challenging, but technically challenging due to computational cost. Approaches that can accurately generate diverse conformations at scale would dramatically improve our ability to rationally design and optimize macrocycles. 
To address these limitations, we introduce RINGER (RINGER Generates Ensembles of Rings), a deep learning model designed specifically for sequence-conditioned macrocycle conformer generation (Figure 1) that efficiently samples realistic angles and torsions (i.e., internal coordinates) for macrocyclic peptides. RINGER merges a transformer architecture that naturally captures the physical equivariances and invariances of macrocyclic peptides with a discrete-time diffusion model to learn highly-coupled distributions over internal coordinates. We demonstrate how RINGER simultaneously achieves excellent performance in sample quality over angular and torsional profiles while maintaining excellent RMSDs relative to gold-standard conformer ensembles generated with the Conformer-Rotamer Ensemble Sampling Tool (CREST) [17]. We summarize our contributions as follows: * We propose a new framework, RINGER for conformer generation of macrocycle backbones based on efficiently encoding ring geometry using redundant internal coordinates. Our model naturally handles the cyclic nature of macrocycles and chiral side chains with both L- and D-amino acids. * We propose a simple solution to recover Cartesian coordinates from redundant internal coordinates that satisfies ring constraints using a sequential least-squares optimization and demonstrate that it works well in practice. * We benchmark RINGER extensively against state-of-the-art physics- and machine learning-based algorithms to demonstrate how our approach better captures complex distributions of macrocycles and achieves excellent sample quality and diversity compared to existing methods. ## 2 Background and Related Work Our work builds on small-molecule conformer generation and protein structure modeling to create a framework for macrocycle conformers. Below, we briefly summarize related work. Figure 1: Overview of RINGER for macrocycle conformer generation. **A.** Given a 2D representation of a macrocyclic peptide, RINGER generates an accurate and diverse 3D conformational ensemble. **B.** An illustration of the diffusion process learning to recover the time \(t=0\) bond angle (red) and torsional (blue) distributions from time, \(t=T\). Physics and Heuristic-based Conformer Generation for MacrocyclesPhysics-based and heuristic-based algorithms remain the state of the art for macrocycles and have required special considerations compared to drug-like small molecules due to ring-closing constraints. The open-source cheminformatics library RDKit leverages distance geometry algorithms for small-molecule conformer generation (ETKDG) [18], with improved heuristic bounds for macrocycles (ETKDGv3) [19, 20]. Similarly, commercial conformer generation algorithms such as OpenEye OMEGA [21, 22] in macrocycle mode use a distance geometry algorithm based on 4D coordinate initialization to provide diverse conformers [23], as their torsion-driving approach is incompatible with ring closure. Similarly, low-mode [24, 25] or Monte Carlo [26] search methods combined with molecular dynamics have been found to be effective at sampling macrocycle conformations, particularly when combined with force field optimizations as demonstrated in Schrodinger's MacroModel [14] and Prime MCS [27]. These approaches have been tuned with expert knowledge and torsional libraries to maximize agreement with observed experimental structures. 
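As a concrete reference point for the distance-geometry baselines discussed above, a macrocycle ensemble can be generated with RDKit's ETKDGv3 roughly as follows. This is an illustrative sketch using cyclo-hexaglycine as a stand-in molecule; the exact baseline settings used in this paper are those of Appendix J, not necessarily the ones shown here.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# illustrative homodetic macrocycle: cyclo(Gly-Gly-Gly-Gly-Gly-Gly), an 18-membered ring
smiles = "O=C1CNC(=O)CNC(=O)CNC(=O)CNC(=O)CNC(=O)CN1"
mol = Chem.AddHs(Chem.MolFromSmiles(smiles))

params = AllChem.ETKDGv3()   # ETKDGv3 includes macrocycle-aware torsion preferences
params.randomSeed = 0
conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=50, params=params)
print(f"embedded {len(conf_ids)} conformers")
```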
The open-source CREST package [17] leverages iterative metadynamics with a genetic structure-crossing algorithm (iMTD-GC) to explore new geometries, and can be considered a gold-standard for generating diverse ensembles of drug-like molecules. In this work, we use the recently-published CREMP [28] dataset, containing high-quality, CREST-generated ensembles, representing over 31 million macrocycle geometries (see Section 4.1 and Appendix B for more details). One key limitation of these approaches is high computational cost and difficulty in scaling; in general, conformer generation is \(10^{3}\) - \(10^{5}\times\) more computationally expensive compared to a drug-like small molecule due to the increased number of rotatable bonds and their ring-closing constraints (e.g., generating a conformational ensemble of a macrocyclic hexapeptide with CREST requires an average of 14 hours [28]). These approaches become increasingly challenging when kinetic or molecular dynamics approaches are used with explicit solvation [29, 30]. Generative Approaches for Small Molecule Conformer EnsemblesRecent work with deep generative models has focused on improved sampling of the conformational landscape of small molecules. For example, Mansimov et al. [31] propose a conditional graph variational autoencoder (CGVAE) approach for molecular geometry generation. Simm and Hernandez-Lobato [32] report conditional generation of molecular geometries based on distance geometry. Xu et al. [33] leverage normalizing flows and energy-based modeling to help capture the multimodal nature and complex dependencies of small molecule space. More recently, Xu et al. [34] report GeoDiff, an equivariant diffusion-based model that operates on Cartesian point clouds. Although GeoDiff provides strong results, sampling is costly and requires 5,000 time steps. Recent reports have also drawn inspiration from physics-based conformer generation to leverage the rigid-rotor hypothesis, which treats bond distances and angles as fixed, and torsional angles of rotatable bonds are independently sampled, assuming little or no interdependence between torsions [35]. These include GeoMol [36], an SE(3)-invariant machine learning model for small molecule conformer generation that leverages graph neural networks, and EquiBind [37] which performs conditional generation on protein structure. Recently, Jing et al. [38] report Torsional Diffusion, a diffusion model that operates on the torsional space via an extrinsic-to-intrinsic score model to provide strong benchmarks on the GEOM dataset [39]. Importantly, these methods do not address the challenge of highly-coupled torsions within cyclic systems and either propose complex ring-averaging processes [36] or ignore sampling of cyclic structures all together [38]. Protein Structure Prediction and DiffusionSignificant progress has been made recently in protein structure prediction with the advent of methods such as AlphaFold2 [40] and RoseTTAFold [41]. However, structure prediction methods have predominantly focused on deterministic maps to static output structures rather than on sampling diverse structure ensembles. Recently, several papers have developed diffusion-based approaches for protein generation based on Euclidean diffusion over Cartesian coordinates [42, 43] or backbones as in FoldingDiff [44], with an emphasis on structural design. 
Our work builds on FoldingDiff, which parameterizes structures over internal backbone angles and torsions and relies on the natural extension reference frame (NeRF) [45] to perform linear reconstructions. However, as we demonstrate below, naive linear transformations fail to address the ring constraints for macrocycles. Moreover, FoldingDiff focuses on unconditional generation of protein backbones, whereas our focus here is conditional generation. Machine Learning Approaches for Macrocycle Conformer Ensemble GenerationDespite the many approaches focused on small molecules and protein structure generation, there are few efforts in macrocycle structure prediction. Most notably, Miao et al. [46] recently disclosed StrEAMM for learning on molecular dynamics of cyclic peptides using explicit solvation. StrEAMM is a linear model that predicts local backbone geometries and their respective 1,2- and 1,3-residue interactions to provide excellent ensemble estimates of homodetic hexapeptides. However, the model is not naturally inductive and is not natively extensible to other macrocycle ring sizes and residues. Fishman et al. [47] recently developed a more general framework for diffusion models on manifolds defined via a set of inequality constraints. However, they only investigate the conformational ensemble of a single cyclic peptide as a proof-of-concept using a reduced \(\alpha\)-carbon representation. ## 3 RINGER: Problem Statement and Methods ### Problem Definition: Conditional Macrocycle Conformer Generation The core objective of our work is to model the distribution of conformers for a macrocyclic peptide with a focus on backbone structure. Given a macrocycle graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) is the set of nodes (atoms) and \(\mathcal{E}\) is the set of edges (bonds), and \(n=|\mathcal{V}|\) our goal is to learn a distribution over the possible conformers. Let \(\mathcal{C}=\{c_{1},c_{2},\ldots,c_{K}\}\) be the set of conformers, where each conformer \(c_{k}\in\mathcal{C}\) represents a unique spatial arrangement of the atoms \(\mathcal{V}\). Our task is to learn the distribution \(p(\mathcal{C}\mid\mathcal{G})\), which represents the probability over the conformer ensemble \(\mathcal{C}\) given a molecular graph \(\mathcal{G}\). Learning and sampling from this complex distribution is inherently challenging for most molecules, and is further complicated in macrocycles due to the highly-coupled nature of ring atoms. A perturbation to one part of the ring generally perturbs the others. Consequently, any model must account for the interdependence between atoms due to the cyclic constraints. Given this problem, a good generative model ideally satisfies a few key properties: 1) Naturally encodes the physical and structural aspects of macrocyclic peptides. For example, cyclic peptides with only standard peptide bonds (i.e., homodetic peptides) do not have a natural starting residue and hence exhibit cyclic shift invariance, e.g., cyclo-(R.I.N.G.E.R) is identical to cyclo-(I.N.G.E.R.R), where each amino acid is denoted by its one-letter code with "cyclo" indicating cyclization of the sequence. 2) Captures multimodal distributions and complex, higher-order interactions such as the strong coupling between atomic positions in the ring. 3) Samples high-quality and diverse conformations from \(p(\mathcal{C}\mid\mathcal{G})\) that faithfully capture realistic geometries while respecting the underlying conformer distribution. 
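The cyclic-shift invariance in property (1) above can be made concrete with a few lines of code. Note that RINGER does not canonicalize sequences in this way; the invariance is instead built into the positional encoding described below. The snippet only illustrates the equivalence relation that any model over homodetic macrocycles must respect.

```python
def cyclic_shifts(seq):
    """All rotations of a cyclic peptide written as a list of residue codes."""
    return {tuple(seq[i:] + seq[:i]) for i in range(len(seq))}

def same_macrocycle(a, b):
    """Two homodetic sequences describe the same macrocycle iff one is a cyclic
    shift of the other (head-to-tail cyclization has no preferred start residue)."""
    return tuple(b) in cyclic_shifts(list(a))

assert same_macrocycle(list("RINGER"), list("INGERR"))      # cyclo-(R.I.N.G.E.R) == cyclo-(I.N.G.E.R.R)
assert not same_macrocycle(list("RINGER"), list("REGNIR"))  # reversal is a different macrocycle
```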
### Representing Macrocycle Geometry: Redundant Internal Coordinates Conformer geometries are defined by their set of Cartesian coordinates for each atomic position and can hence be modeled using SE(3)-equivariant models to learn complex distributions. However, Euclidean diffusion requires modeling the many degrees of freedom; and, in practice, can require many time steps to generate accurate geometries [34]. Moreover, realistic conformations are highly sensitive to the precise interatomic distances, angles, and torsions--although this information is implicit in the Cartesian positions, explicitly integrating these quantities into a model can provide a strong inductive bias and accelerate learning [48]. Borrowing from molecular geometry optimization [49], protein representation [45; 50; 51], and inverse kinematics [52], we adopt redundant internal coordinates that represent conformer geometries through a set of bond distances, angles, and torsions (dihedral angles), i.e., \(\mathcal{C}\equiv\{\mathcal{D},\Theta,\mathcal{T}\}\). In particular, this simplifies the learning task, as bond distances can be approximated as fixed distances with little loss in accuracy [21; 38; 44], and internal angles typically fit a narrow distribution. Importantly, these coordinates define an internal reference frame that readily encodes complex geometries including ring chirality. Moreover, this approach obviates the need for complex equivariant networks and enables the use of simpler neural architectures [44]. Hence, our generative process can be reformulated as learning the distribution \(p(\{\Theta,\mathcal{T}\}\mid\mathcal{G};\mathcal{D})\) using known bond distances for reconstruction back to Cartesians (Figure 1). ### Deep Probabilistic Diffusion Models for Sampling Internal Coordinates Denoising Probabilistic ModelsRecent works on deep denoising probabilistic models have demonstrated excellent generative performance for complex multimodal data [53; 54; 55], and have been successfully applied to both small molecules and proteins [34; 44]. In particular, we use the discrete-time diffusion model from Wu et al. [44] that formulates the forward transition probability using a wrapped normal distribution, \(q\left(\mathbf{x}_{t}\mid\mathbf{x}_{t-1}\right)=\mathcal{N}_{\text{warped}} \left(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\mathbf{x}_{t-1},\beta_{t}\mathbf{I}\right)\), instead of a standard normal distribution [38], where \(\mathbf{x}_{t}\) represents the noised internal coordinates (bond angle and torsion) at time step \(t\). We train a diffusion model, \(p_{\Xi}(\mathbf{x}_{t-1}\mid\mathbf{x}_{t})\), by training a neural network to predict the noise present at a given time step (for full details, see Appendix C). During inference, we sample \(\mathbf{x}_{T}\) from a wrapped normal distribution and iteratively generate \(\mathbf{x}_{0}\) using \(p_{\Xi}(\mathbf{x}_{t-1}\mid\mathbf{x}_{t})\). The sampling process is further detailed in Appendix D. Encoder ArchitectureMacrocycles exhibit extensive coupling of their residues due to torsional strain and intramolecular interactions such as hydrogen bonds. Here, we use a standard bidirectional transformer architecture [56, 57] using self-attention to learn the complex interactions between atoms. Unlike standard sequence models for linear data, macrocycles exhibit cyclic symmetry with no canonical start position. 
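Before turning to the encoder, the forward process described above can be written down in a few lines. The sketch below is illustrative only: the variance schedule, step count, and array shapes are placeholders, and the actual training configuration is given in Appendix C.

```python
import numpy as np

def wrap(x):
    """Wrap angles to [-pi, pi)."""
    return (x + np.pi) % (2.0 * np.pi) - np.pi

T = 50                                   # illustrative number of diffusion steps
betas = np.linspace(1e-4, 0.2, T)        # illustrative linear variance schedule
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, rng):
    """Draw x_t ~ q(x_t | x_0): the closed-form DDPM forward step followed by
    wrapping, i.e. a wrapped-normal perturbation of the internal coordinates."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return wrap(xt), eps

rng = np.random.default_rng(0)
x0 = rng.uniform(-np.pi, np.pi, size=(18, 2))   # e.g. 18 backbone atoms x (bond angle, torsion)
x_noisy, eps = q_sample(x0, T - 1, rng)          # nearly uniform on the circle at t = T-1
```

The denoising network is then trained to predict the added noise from the wrapped noisy angles, the time step, and the sequence features, and sampling runs this process in reverse.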
Thus, we design a bidirectional, relative positional encoding, \(\mathbf{p}_{ij}^{K}\), inspired by standard relative encodings [58] to reflect this cyclic invariance (see Appendix A for notation): \[\mathbf{z}_{i}=\sum_{j=1}^{n}\alpha_{ij}\left(\mathbf{v}_{j}\mathbf{W}^{V} \right),\quad\text{where}\quad\alpha_{ij}=\frac{\exp e_{ij}}{\sum_{k=1}^{n} \exp e_{ik}} \tag{1}\] \[e_{ij}=\frac{\mathbf{v}_{i}\mathbf{W}^{Q}\left(\mathbf{v}_{j}\mathbf{W}^{K}+ \mathbf{p}_{ij}^{K}\right)^{T}}{\sqrt{d_{z}}}\quad\text{with}\quad\mathbf{p}_ {ij}^{K}=\underbrace{\mathbf{W}_{(i-j)\bmod n}^{D}}_{\text{forward}}+ \underbrace{\mathbf{W}_{(i-j)\bmod(-n)}^{D}}_{\text{backward}} \tag{2}\] These cyclic relative position representations encode bidirectional edge relationships between each atom by specifying forward and reverse distances in the macrocycle. The relative position of any neighboring atom is uniquely defined by its forward and reverse graph distances in the embedding lookup \(\mathbf{W}^{D}\). For conditional generation, we perform a linear projection of the features \(\mathbf{a}_{i}\), corresponding to each macrocycle backbone atom and its side chain, and a separate linear projection of the angles and torsions \(\mathbf{x}_{i}=[\theta_{i},\tau_{i}]\) and concatenate them as a single input to the transformer, \(\mathbf{v}_{i}=\mathbf{a}_{i}^{\prime}\oplus\mathbf{x}_{i}^{\prime}\). Notably, our diffusion model only adds noise to the angular component, \(\mathbf{x}_{i}\). For unconditional generation, atoms are only labeled with their backbone identity (nitrogen, \(\alpha\)-carbon, carbonyl-carbon) using an embedding that is added to the input. Model details are shown in Appendix E. Ring Closing: Back Conversion to Cartesian Ring CoordinatesMacrocycles with fixed bond distances contain three redundant torsional angles and two redundant bond angles. Whereas linear peptides and proteins can be readily converted into an arbitrary Cartesian reference frame through methods such as NeRF [45], these redundancies prevent direct transformation to unique Cartesians for cyclic structures. Adopting a sequential reconstruction method such as NeRF accumulates small errors that result in inadequate ring closure for macrocycles.1 Other studies have developed complex heuristics with coordinate averaging for ring smoothing [36], yet these approaches can distort the predicted geometries. In practice, we demonstrate that an efficient post-processing step works well with minimal distortion: we treat this as a constrained optimization problem using the Sequential Least Squares Quadratic Programming (SLSQP) algorithm [59] to ensure valid Cartesian coordinates while satisfying distance constraints: Footnote 1: Although direct equality and inequality constraints over the diffusion process is a promising direction that could address this problem, we leave this direction for future work. \[\mathbf{\hat{\xi}}=\operatorname*{arg\,min}_{\mathbf{\xi}}\|\theta(\mathbf{ \xi})-\mathbf{\hat{\theta}}\|^{2}+\|w\left(\mathbf{\tau}(\mathbf{\xi})-\mathbf{ \hat{\tau}}\right)\|^{2}\quad\text{subject to:}\quad\mathbf{d}(\mathbf{\xi})= \mathbf{d}_{\text{true}} \tag{3}\] Here, we find the set of Cartesian coordinates, \(\mathbf{\hat{\xi}}\), that minimize the squared error against the internal coordinates \(\mathbf{\hat{\theta}}\) and \(\mathbf{\hat{\tau}}\) sampled by the diffusion process while satisfying bond distance equality constraints using known bond distances, \(\mathbf{d}_{\text{true}}\), from the training data. 
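The constrained reconstruction of Equation (3) can be prototyped directly with SciPy's SLSQP implementation. The sketch below is a self-contained toy version on a generic closed ring; the function names and test geometry are ours and are not the paper's code, and in practice the target angles and torsions come from the diffusion model while the bond lengths come from the training data.

```python
import numpy as np
from scipy.optimize import minimize

def internal_coords(X):
    """Bond lengths, bond angles, and dihedrals of a closed ring X (shape (N, 3));
    all indices are cyclic."""
    N = len(X)
    nxt, prv = np.roll(np.arange(N), -1), np.roll(np.arange(N), 1)
    b = X[nxt] - X                                   # bond vectors i -> i+1
    d = np.linalg.norm(b, axis=1)
    u = b / d[:, None]
    cos_t = np.einsum('ij,ij->i', -u[prv], u)        # angle at atom i
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    b0, b1, b2 = b[prv], b, b[nxt]                   # dihedral about bond (i, i+1)
    n1, n2 = np.cross(b0, b1), np.cross(b1, b2)
    m1 = np.cross(n1, b1 / np.linalg.norm(b1, axis=1)[:, None])
    tau = np.arctan2(np.einsum('ij,ij->i', m1, n2), np.einsum('ij,ij->i', n1, n2))
    return d, theta, tau

def wrap(x):
    return (x + np.pi) % (2.0 * np.pi) - np.pi

def close_ring(theta_hat, tau_hat, d_true, X_init):
    """Least-squares fit of Cartesian coordinates to target internal coordinates,
    with bond lengths enforced as equality constraints (the idea of Equation (3))."""
    def objective(x):
        _, theta, tau = internal_coords(x.reshape(-1, 3))
        return np.sum((theta - theta_hat) ** 2) + np.sum(wrap(tau - tau_hat) ** 2)
    def bond_constraint(x):
        return internal_coords(x.reshape(-1, 3))[0] - d_true
    return minimize(objective, X_init.ravel(), method='SLSQP',
                    constraints=[{'type': 'eq', 'fun': bond_constraint}],
                    options={'maxiter': 500, 'ftol': 1e-10})

# toy example: a 12-membered ring whose targets come from a perturbed reference geometry
rng = np.random.default_rng(0)
N = 12
ang = 2 * np.pi * np.arange(N) / N
X_ref = np.column_stack([np.cos(ang), np.sin(ang), 0.05 * rng.standard_normal(N)])
d_true, theta_hat, tau_hat = internal_coords(X_ref)
res = close_ring(theta_hat, tau_hat, d_true, X_ref + 0.1 * rng.standard_normal((N, 3)))
d_fit = internal_coords(res.x.reshape(-1, 3))[0]
print(res.fun, np.abs(d_fit - d_true).max())   # small objective, bond lengths satisfied
```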
The torsion error, \(\mathbf{\tau}(\mathbf{\xi})-\mathbf{\hat{\tau}}\), is wrapped by \(w(\cdot)\) so that it remains in the \([-\pi,\pi)\) range. Empirically, we demonstrate that this scheme recovers realistic macrocycles with high fidelity by evenly distributing the error across the entire macrocycle backbone (see Appendix F for additional details). Overall Generation ProcedureOur model represents macrocycle backbones as cyclic sequences of redundant angles and dihedrals with fixed bond lengths. We train a discrete-time diffusion model to learn a denoising process over the internal coordinates, using a transformer architecture with an invariant cyclic positional encoding. At inference time, we sample from a wrapped Gaussian distribution to produce a set of angles and torsions, conditioning on the known set of atom features corresponding to the amino-acid sequence. In the final post-processing step, macrocycle geometries with Cartesian coordinates can be reconstructed through our constrained optimization using Equation (3). ## 4 Experiments and Results ### Experimental Setup DatasetWe train and evaluate our approach on the recently published CREMP dataset [28] that contains 36k homodetic macrocyclic peptides across varying ring sizes (4-mers, 5-mers, and 6-mers corresponding to 12-, 15-, and 18-membered backbone rings), side chains, amino-acid stereochemistry, and \(N\)-methylation. Each macrocycle in CREMP contains a conformational ensemble sampled with CREST [17], a metadynamics algorithm with genetic crossing built on the semi-empirical tight-binding method GFN2-xTB [60]. We perform stratified random splitting on the data, with a training and validation set of 35,198 molecules (948,158 conformers using a maximum of 30 conformers per molecule), which we split into 90% training and 10% validation, and a final test set of 1,000 molecules corresponding to 877,898 distinct conformers (using _all_ conformers per molecule within the \(6\,\mathrm{kcal/mol}\) energy threshold defined by CREST). Additional dataset statistics are shown in Appendix B. Training & SamplingAll training is performed on the set of 35k peptides described above, using the 30 lowest-energy conformers per peptide. We train each model on a single NVIDIA A100 GPU for up to 1000 epochs until convergence (typically less than 100 epochs) using the Adam optimizer with 10 warmup epochs. Following work in small-molecule conformer generation [34; 36; 38], we sample \(2K\) conformers for a macrocycle ensemble of \(K\) ground-truth conformers (median \(K\) = 656) and assess them based on the evaluation criteria below. For full training and sampling details see Appendices C and D. EvaluationFor unconditional generation, we use Kullback-Leibler divergence to measure the difference in sample quality. For conditional generation, we evaluate the quality of our generated macrocycle backbones using root-mean-squared-deviation (RMSD) between backbone atom coordinates, similar to previous work on small-molecule conformer generation. We use several metrics including **Matching** and **Coverage [33; 36; 38], and for each we report recall and precision. We note that although RMSD is widely used to assess conformer quality, its utility for comparing backbones is more limited, as sampled backbones with highly unrealistic or energetically unfavorable torsions can exhibit low RMSD values. Therefore, we additionally report the torsion fingerprint deviation (TFD) [19; 61] to evaluate the quality of the torsional profiles. 
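For concreteness, the ensemble metrics can be computed from a matrix of pairwise distances between true and generated conformers along the following lines. This sketch uses the conventional recall/precision definitions from the small-molecule conformer-generation literature; the exact definitions used in this work are those of Appendix I.

```python
import numpy as np

def coverage_and_mat(D, threshold):
    """Coverage and Matching (MAT) from a pairwise distance matrix D[i, j] between
    true conformer i and generated conformer j (RMSD or TFD)."""
    min_over_gen = D.min(axis=1)    # best generated match for each true conformer
    min_over_true = D.min(axis=0)   # best true match for each generated conformer
    return {
        "COV-R": float((min_over_gen <= threshold).mean()),   # recall coverage
        "MAT-R": float(min_over_gen.mean()),                  # recall matching
        "COV-P": float((min_over_true <= threshold).mean()),  # precision coverage
        "MAT-P": float(min_over_true.mean()),                 # precision matching
    }

# toy usage with a random matrix; in practice D holds ring-atom RMSDs or TFDs
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 0.5, size=(100, 200))   # 100 true x 200 generated conformers
print(coverage_and_mat(D, threshold=0.1))
```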
RMSD provides a measure of distance between two conformers based on a least-squares alignment of their respective atomic positions, while TFD gives a normalized measure of matched torsion angles between backbone geometries. Appendix I defines the evaluation metrics in detail. BaselinesWe provide benchmarks of our method against open-source and commercial toolkits RDKit ETKDGv3 (for macrocycles) [19], OMEGA Macrocycle Mode [62], and the SE(3) diffusion model GeoDiff [34]. For GeoDiff, we report results on a model retrained on identical macrocycle conformers (GeoDiff-Macro), as the base model trained on small molecules provided poor performance. Methods such as torsional diffusion [38] only alter freely rotatable bonds and cannot sample macrocycle backbones by design. See Appendix J for details about the baseline methods. ### Unconditional Generation of Macrocycles To understand whether this approach can learn the underlying distribution of macrocycle conformations, we first trained RINGER on macrocycle backbones in the absence of any residue or side-chain features and only providing ring-atom identity. From a design perspective, diverse backbone sampling alone can help drive inverse peptide design, where specific backbone geometries suggest important sequences. Figure 2 clearly demonstrates how RINGER accurately replicates both angles and dihedrals with tight fidelity across all residue atoms, both qualitatively from the plots and quantitatively as measured by the KL divergence. Furthermore, we generated Ramachandran plots [50] alongside our withheld test set to visualize the conditional dependencies between residue torsions. Notably, RINGER recapitulates the critical modes of the distribution. Appendix K provides more fine-grained detail by visualizing distributions separately based on the number of residues in the macrocycle. ### Sequence-Conditioned Generation of Macrocycles We subsequently focused on the challenge of sequence-conditioned generation to understand whether RINGER could effectively capture the complex steric and intramolecular effects that dictate macrocycle backbone conformation. Whereas our unconditional model above disregarded side chains, we now condition backbone generation on molecular features corresponding to each residue, including side-chain features, stereochemistry, and \(N\)-methylation (see Appendix E). Comparison of RINGER RMSD and TFD ensemble metrics against RDKit, OMEGA, and GeoDiff baselines are shown in Table 1. Here, recall quantifies the proportion of ground truth conformers that are recovered by the model, and precision quantifies the quality of the generated ensemble (also see Appendix L.2 for confidence intervals). We found that RDKit ETKDGv3 and OMEGA Macrocycle mode, both based on distance-geometry approaches, performed similarly across both metrics and achieved moderate recall with limited precision. To compare deep learning approaches, we trained a Euclidean diffusion model using the GeoDiff architecture on the CREMP dataset, and found a strong boost in recall with similar precision. We evaluated our approach with and without the post-processing geometry constrained optimization (Equation 3), as our raw, generated samples from RINGER may not satisfy realistic macrocycle distance constraints. As with unconditional generation, sequence-conditioned generation learns the data distribution with high fidelity as shown in Appendix L.1. 
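The KL divergences quoted for the angle and torsion marginals compare a held-out distribution with a sampled one; a minimal histogram-based estimate of such a divergence is sketched below. The binning and smoothing choices are illustrative, not those used in the paper.

```python
import numpy as np

def kl_divergence_angles(test, sampled, bins=100, eps=1e-10):
    """D_KL(test || sampled) between two sets of angles, estimated from
    histograms on [-pi, pi)."""
    edges = np.linspace(-np.pi, np.pi, bins + 1)
    p = np.histogram(test, bins=edges)[0].astype(float) + eps
    q = np.histogram(sampled, bins=edges)[0].astype(float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
test = rng.vonmises(mu=-1.0, kappa=4.0, size=10_000)      # stand-in for held-out torsions
sampled = rng.vonmises(mu=-1.0, kappa=3.5, size=10_000)   # stand-in for generated torsions
print(kl_divergence_angles(test, sampled))
```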
"RINGER" in Table 1 corresponds to Cartesian geometries that were generated from the predicted angles by starting at one atom and setting bond distances, angles, and dihedrals sequentially. The starting atom was chosen such that the redundant bond distance in the ring most closely matches the true bond distance. "RINGER (opt)" refers to our post-processed geometries that satisfy the true bond distances exactly. Notably, both approaches achieve excellent recall and precision across both RMSD- and TFD-based scores compared to our baselines. Furthermore, post-processing to guarantee valid macrocycle geometries preserves excellent recall, albeit with slightly attenuated precision. Additionally, Figure 3 shows that RINGER outperforms all baselines over a wide range of thresholds used for evaluating Coverage. The plateau in RMSD precision (but not for TFD) of RINGER with post-processing is a result of the optimization converging to unrealistic geometries that nonetheless match the true torsions well. This motivates further development of methods to natively handle the cycle constraints as a future direction. Figure 2: Comparison of the bond angle and dihedral distributions from the held-out test set (orange) and in the unconditionally generated samples (blue). The three top left plots correspond to the three bond angle types in each amino acid residue, the three bottom left plots show the three dihedral angles for each residue, and the right shows Ramachandran plots (colored logarithmically by density with high density regions shown in lighter colors). KL divergence is calculated as \(D_{\text{KL}}\)(test \(\parallel\) sampled). Notably, our approach not only identifies better conformer ensembles, but provides increased sampling efficiency with only \(T=20\) time steps, compared to GeoDiff's \(T=5000\) or FoldingDiff's \(T=1000\) (see Table 7 in Appendix L.3 for additional analysis). Although our standard training protocol uses a diverse ensemble of \(k=30\) lowest-energy conformers per training molecule, we found that training with only the lowest-energy conformer (\(k=1\)) still outperforms baseline methods. Increasing this number (\(k=30,100\)) notably increases recall with a clear trade-off in precision. These results highlight the excellent data efficiency and sample quality of our diffusion-based generation, and are a good illustration of the trade-off between precision (sample quality) and recall (ensemble diversity). ### Structural Analysis of Generated Macrocycles Although RMSD and TFD give a quantitative evaluation of performance, we also analyzed individual ensembles to understand the qualitative differences in conformer generation processes (Figure 4). Notably, the two macrocycles shown possess distinct sequences that result in distinct Ramachandran plots and conformations. As shown in Figure 4 (top panels), most ground truth conformer ensembles exhibit relatively tight distributions characterized by a distinctive set of \(\phi,\psi\) angles. Although RDKit, OMEGA, and GeoDiff can identify relevant low-energy conformers (albeit with slight errors), the overall sampling process generates unrealistic distributions. 
In contrast, RINGER recapitulates not only the ground state geometry with excellent accuracy, but better captures the entire ensemble \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{**RMSD – Recall**} & \multicolumn{4}{c}{**RMSD – Precision**} \\ & & \multicolumn{2}{c}{Coverage (\%) \(\uparrow\)} & \multicolumn{2}{c}{MAT (Å) \(\downarrow\)} & \multicolumn{2}{c}{Coverage (\%) \(\uparrow\)} & \multicolumn{2}{c}{MAT (Å) \(\downarrow\)} \\ Method & \(k\) & Mean & Med. & Mean & Med. & Mean & Med. & Mean & Med. \\ \hline RDKit [19] & – & 35.8 & 8.9 & 0.187 & 0.160 & 5.6 & 0.9 & 0.540 & 0.504 \\ OMEGA [62] & – & 32.3 & 7.1 & 0.186 & 0.163 & 3.7 & 1.3 & 0.557 & 0.525 \\ GeoDiff-Macro [34] & 30 & 50.8 & 54.2 & 0.151 & 0.120 & 6.4 & 3.0 & 0.592 & 0.559 \\ **RINGER** & 30 & 77.0 & 84.5 & 0.091 & 0.072 & **61.3** & **69.1** & **0.185** & **0.120** \\ **RINGER** (opt) & 1 & 63.8 & 66.9 & 0.139 & 0.112 & 58.1 & 65.1 & 0.430 & 0.327 \\ **RINGER** (opt) & 30 & 79.7 & 86.3 & 0.084 & 0.065 & 56.4 & 62.7 & 0.441 & 0.356 \\ **RINGER** (opt) & 100 & **85.6** & **92.2** & **0.065** & **0.049** & 56.9 & 62.4 & 0.454 & 0.385 \\ \hline \hline \end{tabular} \begin{tabular}{l c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{**TFD – Recall**} & \multicolumn{4}{c}{**TFD – Precision**} \\ & & \multicolumn{2}{c}{Coverage (\%) \(\uparrow\)} & \multicolumn{2}{c}{MAT \(\downarrow\)} & \multicolumn{2}{c}{Coverage (\%) \(\uparrow\)} & \multicolumn{2}{c}{MAT \(\downarrow\)} \\ Method & \(k\) & Mean & Med. & Mean & Med. & Mean & Med. & Mean & Med. \\ \hline RDKit [19] & – & 52.9 & 55.3 & 0.059 & 0.051 & 9.4 & 4.4 & 0.215 & 0.206 \\ OMEGA [62] & – & 49.7 & 47.6 & 0.061 & 0.055 & 6.6 & 4.2 & 0.225 & 0.219 \\ GeoDiff–Macro [34] & 30 & 68.1 & 83.0 & 0.048 & 0.037 & 9.1 & 6.1 & 0.248 & 0.241 \\ **RINGER** & 30 & **90.1** & **95.0** & **0.024** & **0.019** & **74.7** & **86.2** & **0.059** & **0.033** \\ **RINGER** (opt) & 30 & 89.2 & 94.3 & **0.024** & **0.019** & 61.8 & 68.9 & 0.068 & 0.044 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance metrics for sequence-conditioned generation of macrocycles evaluated on ring atoms. Coverage is evaluated at a threshold of \(0.1\,\mathrm{\SIUnitSymbolAngree}\) for RMSD and 0.05 for TFD. \(k\) is the maximum number of lowest-energy conformers used per molecule in the _training_ data. _All test data_ conformers are used for evaluation. “opt” refers to the use of Equation (3) to reconstruct Cartesian coordinates. Figure 3: Comparison of mean coverage when varying the threshold across RMSD (left) and TFD (right). Translucent error bands correspond to 95% confidence intervals. distribution. These results demonstrate how RINGER better models sequence-dependent geometries to achieve strong performance. ## 5 Limitations and Future Directions Our studies demonstrate the potential for diffusion-based models to tackle key limitations in constrained macrocycle generation, but they are not without limitations. First, our current work has focused on the CREMP dataset, which is currently limited to homodetic, 4-, 5-, and 6-mer macrocycles with canonical side chains in implicit chloroform solvent. Extension to macrocycles with larger ring sizes, non-canonical side chains, and other complex topologies would improve the generalizability of this work. Second, we have focused primarily on modeling diversity of backbone geometry. Complete expansion to fully model the positions of all side chain atoms is the focus of ongoing work. 
Finally, although we demonstrate the effectiveness of a standard, discrete-time diffusion process, our approach is not physically constrained to satisfy macrocyclic geometries and currently requires a post-optimization step. Developing and applying physics-informed diffusion processes with inequality constraints could improve the efficiency of training and sampling of relevant macrocycle backbones. Despite these limitations, our work provides an important first step toward efficient generation of complex macrocycle geometries, and we anticipate its application toward more complex, conditional generation tasks. ## 6 Conclusions In summary, we present RINGER, a new approach for generating macrocycle conformer ensembles that significantly improves sample quality, diversity, and inference. By leveraging specific benefits of diffusion-based models, we demonstrate how a transformer-based architecture with a cyclic positional encoding results in significant gains over Cartesian-based equivariant models and widely-used distance geometry-based algorithms for both unconditional and conditional structure generation. The present work paves the way for more efficient and accurate computational exploration of conformational Figure 4: RINGER accurately generates ensembles as illustrated by Ramachandran plots for individual macrocycle ensembles. The 3D backbones illustrate the lowest-energy reference structure and the closest matching conformer (based on ring-atom RMSD) from each method. For RINGER: C from N-methyl, O from carbonyl, hydrogens, and proline carbons were inferred via MMFF94 optimization constraining the backbone geometry. space. We anticipate that this approach will more broadly enable rational macrocycle discovery through further development. ## 7 Code and Data Availability All code for training, sampling, and evaluation in this study are available at [http://www.github.com/Genentech/RINGER](http://www.github.com/Genentech/RINGER). We include the exact training and test data splits and trained model. The CREMP dataset [28] is available for download from [https://zenodo.org/record/7931445](https://zenodo.org/record/7931445). ### Acknowledgments and Disclosure of Funding We thank Ben Sellers and Christian Cunningham for insightful discussions on macrocycles and peptide therapeutics. We also thank members of the Departments of Peptide Therapeutics and Discovery Chemistry for helpful feedback and discussions. This research is sponsored by Genentech, Inc. All authors are employees of Genentech, Inc. and shareholders of Roche.
2303.14072
Alignment of the Alpha Magnetic Spectrometer (AMS) in space
The Alpha Magnetic Spectrometer (AMS) is a precision particle physics detector operating at an altitude of 410 km aboard the International Space Station. The AMS silicon tracker, together with the permanent magnet, measures the rigidity (momentum/charge) of cosmic rays in the range from 0.5 GV to several TV. In order to have accurate rigidity measurements, the positions of more than 2000 tracker modules have to be determined at the micron level by an alignment procedure. The tracker was first aligned using the 400 GeV/c proton test beam at CERN and then re-aligned using cosmic-ray events after being launched into space. A unique method to align the permanent magnetic spectrometer for a space experiment is presented. The developed underlying mathematical algorithm is discussed in detail.
Qi Yan, Vitaly Choutko
2023-03-24T15:33:37Z
http://arxiv.org/abs/2303.14072v1
# Alignment of the Alpha Magnetic Spectrometer (AMS) in space ###### Abstract The Alpha Magnetic Spectrometer (AMS) is a precision particle physics detector operating at an altitude of \(\sim\)410 km aboard the International Space Station. The AMS silicon tracker, together with the permanent magnet, measures the rigidity (momentum/charge) of cosmic rays in the range from \(\sim\)0.5 GV to several TV. In order to have accurate rigidity measurements, the positions of more than 2000 tracker modules have to be determined at the micron level by an alignment procedure. The tracker was first aligned using the 400 GeV/c proton test beam at CERN and then re-aligned using cosmic-ray events after being launched into space. A unique method to align the permanent magnetic spectrometer for a space experiment is presented. The developed underlying mathematical algorithm is discussed in detail. Alignment Tracking Detector Silicon Tracker AMS Cosmic Rays _Submitted to The European Physical Journal C_ ###### Contents * 1 Introduction * 2 AMS detector and the silicon tracker * 3 Coordinate systems and composite alignment parameters * 3.1 Coordinate transformation and alignment parameters * 3.2 Coordinate measurement residual and its derivatives with respect to the alignment parameters * 4 Constraints of the composite alignment parameters * 4.1 Constraints of sensors in a ladder * 4.2 Constraints of ladders in a layer * 4.3 Constraints of layers in the tracker * 4.4 Constraints of stretching and shear deformations * 4.4.1 Stretching * 4.4.2 Shearing * 5 Global track alignment * 6 Alignment based on the 400 GeV/c proton test beam * 6.1 Setup of the test beam * 6.2 Alignment procedure * 6.2.1 Presigmas in the alignment * 6.2.2 Fixed parameters in the alignment * 6.3 Alignment results * 6.4 Mechanical stability study with the \(180^{\circ}\) runs * 7 Dynamic alignment of the external tracker layers in space * 7.1 Thermal environment and data collection on orbit * 7.2 Alignment procedure * 7.2.1 Dynamic alignment in a short-time window * 7.2.2 Alignment smoothing for long time period * 7.3 Alignment results * 8 * 8 Static alignment of the tracker in space * 8.1 Global track alignment with curvature constraints * 8.2 Alignment data sample * 8.3 Alignment procedure * 8.3.1 Alignment validation with Monte Carlo * 8.3.2 Alignment optimization for the flight data * 8.3.3 Refinement with the curvature alignment * 8.3.4 Determination of the total absolute rigidity scale * 8.4 Alignment results * 8.4.1 Displacements of the tracker modules during launch * 8.4.2 Stability of the tracker modules in space * 8.4.3 Alignment precision * 9 Conclusion ## 1 Introduction The Alpha Magnetic Spectrometer (AMS), operating aboard the International Space Station (ISS) since May 2011, is a unique large acceptance magnetic spectrometer in space. It aims to measure energy spectra of cosmic-ray charged particles, nuclei, antiparticles, antinuclei, and gamma-rays in the GeV-TeV region to understand Dark Matter, antimatter, and the origin of cosmic rays, as well as to explore new physics phenomena. The AMS silicon tracker detector, together with the permanent magnet, determines the rigidity (momentum/charge) of charged cosmic rays by multiple measurements of the coordinates along the particle trajectory. High performance of the tracker is crucial for the AMS mission and requires a sophisticated alignment to accurately determine the positions of the detector modules. 
In August 2010, before AMS was launched, the complete detector was tested with a 400 GeV/c proton beam at the CERN Super Proton Synchrotron (SPS). This data allows the precise alignment of the tracker with micron accuracy using the procedure described in this paper, which aligns all the detector modules from different mechanical hierarchy levels in one step. The strong accelerations and vibrations during launch, followed by the rapid outgassing of the support structure in vacuum, together with continuous temperature variations in space all change the positions of the tracker modules. The tracker is continuously re-aligned with cosmic-ray events to correct the resulting displacements. The unprecedented challenge in the alignment of the magnetic spectrometer in space is that the detector has to be aligned by using cosmic-ray events with unknown rigidities in the presence of the magnetic field. In this paper, we report a unique mathematical approach which allows to overcome these difficulties and align the tracker to micron precision. ## 2 AMS detector and the silicon tracker As shown in Fig. 1, the AMS detector consists of a permanent magnet and an array of particle detectors to measure the velocity \(\beta=v/c\), absolute charge \(Q\), energy \(E\), and rigidity \(R\) of the passing particles. Within the magnet bore and above and below the magnet are a total of 9 precision silicon tracker layers, L1 to L9. The tracker accurately measures \(R\) and \(Q\) of the particles. Above and below the magnet bore are the Upper and Lower Time of Flight (TOF) counters [1]. The TOF provides a charged particle trigger to AMS and determines \(\beta\) and \(Q\) of the incoming particles. The Transition Radiation Detector (TRD) [2], located above the Upper Time of Flight counters, identifies electrons and positrons. The Ring Imaging Cherenkov detector (RICH) [3], below the Lower Time of Flight counters, measures \(\beta\) and \(Q\) of passing particles. The Electromagnetic Calorimeter (ECAL) [4], at the bottom of AMS, measures \(E\) of electromagnetic particles and separates protons from electrons and positrons. The Anti-Coincidence Counters (ACC) [5], surrounding the inner tracker inside the magnet Figure 1: (right) Schematic view of a cosmic-ray fluorine nuclei event of 26 GV rigidity measured by AMS, with the signals in the TRD, TOF, silicon tracker, RICH, and ECAL. Also shown are the permanent magnet and ACC. (left) Layout of the tracker showing the upper external layer (L1), the inner tracker (L2-L8), and the lower external layer (L9) as well as their support planes. bore, reject cosmic rays entering AMS from the side. The magnet [6] is made of 64 sectors of high-grade Nd-Fe-B assembled in a cylindrical shell. The central field of the magnet is 1.4 kGauss. In 2010, the field was measured in 120 000 locations to an accuracy of better than 2 Gauss. Comparison with the measurements performed with the same magnet in 1997 shows that the field did not change within 1%. On orbit, the magnet temperature varies from \(-3\) to \(+20^{\circ}\)C. The field strength is corrected with a measured temperature dependence of \(-0.09\%/^{\circ}\)C [7]. The AMS tracker comprises 2284 double-sided silicon micro-strip sensors each with a surface area of 41.360 \(\times\) 72.045 (active area of 39.832 \(\times\) 70.565) mm\({}^{2}\) and thickness of 0.300 mm, assembled in 192 mechanical and electrical units called ladders [8]. Each ladder contains 9 to 15 sensors, see Fig. 2 (a). The total active area is 6.42 m\({}^{2}\). 
Both sides of a sensor are implanted with metallic strips running in orthogonal directions, providing a two-dimensional measurement of the particle position. For the side with \(p+\) doped strips (\(p\)-side), the implantation (readout) strip pitch is 27.5 (110) \(\upmu\)m. The opposite side (\(n\)-side) with \(n+\) strips has an implantation (readout) pitch of 104 (208) \(\upmu\)m. The \(p\)-side (\(n\)-side) strips provide the measurement of the particle bending (non-bending) coordinate \(y\) (\(x\)). Combining the information from all signal strips in a sensor, the coordinate resolution in \(y\) is \(\sim\)10 \(\upmu\)m for \(Q=1\) and \(\sim\)5 \(\upmu\)m for \(Q=6\) particles [9]. Sensors within a ladder are daisy-chained together through wire bonds on the \(p\)-side and are connected by a metalized Upilex film on the \(n\)-side which is then glued to a ladder reinforcement frame with layers of foam and carbon fiber (see Fig. 2 (a)). Figure 2: The AMS silicon tracker ladder: (a) the main components of the ladder and (b) two assembled ladders. From 16 to 26 ladders are mounted onto one side of a support plane to form a layer. As seen in Fig 1, the tracker has 9 layers supported by 6 rigid planes. Each plane is made of an aluminum honeycomb interior and carbon fiber skins. The first layer (L1) is on plane 1 at the top of the detector, the second (L2) is on plane 2 just above the magnet, six (L3 to L8) are on 2 sides of planes 3, 4, and 5 within the bore of the magnet, and the last (L9) is on plane 6 just above the ECAL. The maximum lever arm from L1 to L9 is about 3 m. L2 to L8 constitute the inner tracker. The planes of the inner tracker are firmly held by a cylindrical carbon fiber structure which has near zero coefficient of thermal expansion and excellent mechanical strength [8]. The material thickness of a plane, including 2 layers of ladders, represents \(\sim\)1% of a radiation length (\(X_{0}\)). External plane 1 carrying L1 is bolted to another support sandwich plane (plane 1 NS) fastened to the top cover of the TRD. External plane 6 carrying L9 is attached to the ECAL fixation blocks [6]. The deformation of the support structures of the TRD (M-Structure) [10] and ECAL (Unique Support Structure) [11] due to gravity change or temperature variation (more than \(\pm\)10\({}^{\circ}\)C in space) induce sizable displacements of L1 and L9 with respect to the position of the inner tracker. The material thickness between L1 and L2, mostly the TRD and Upper TOF, is \(\sim\)0.3 \(X_{0}\), and that between L8 and L9, mostly the Lower TOF and RICH, is \(\sim\)0.2 \(X_{0}\)[12]. ## 3 Coordinate systems and composite alignment parameters Figure 3: The components and coordinate systems of (a) a sensor, (b) a ladder, (c) a layer, and (d) the inner tracker. The inner tracker coordinate system is also the global coordinate system. The AMS tracker modules (sensors, ladders, and layers) are assembled in a hierarchical support structure -- sensors in ladders, ladders on layers, and layers on planes into the tracker. Each module is positioned with respect to the next support structure by 6 degrees of freedom: 3 translations and 3 rotation angles. Figure 3 (a) (b) (c) illustrates the local coordinate systems of a sensor, a ladder, and a layer where the geometric center of each module is defined as its origin point (\(\mathbf{o}_{s}\), \(\mathbf{o}_{L}\), or \(\mathbf{o}_{P}\)). Taking the sensor coordinate system as an example, as shown in Fig. 
3 (a), the \(u_{s}\)-axis and the \(v_{s}\)-axis are defined along the coordinates measured by the strips of the \(n\)-side and the \(p\)-side respectively and the \(w_{s}\)-axis is normal to the sensor plane. Fig. 3 (d) shows the global coordinate system of the tracker where the geometric center of the inner tracker layers (L2-L8) is defined as its origin point (\(\mathbf{o}_{g}\) or \(\mathbf{o}\)), the \(x\) (\(u_{g}\))-axis is along the coordinates measured by \(n\)-side strips parallel to the main component of the magnetic field, the \(z\) (\(w_{g}\))-axis is pointing vertically perpendicular to the tracker layers, and the \(y\) (\(v_{g}\))-axis completes to a right-handed orthogonal coordinate system. In composite alignment, all detector modules from different hierarchy levels are aligned simultaneously. This approach was previously used in the CMS experiment [13]. In this section and section 4, we will introduce mathematical formulae for composite alignment. Specifically, section 4 will address the implementation of constraints in composite alignment using our original numerical grid method. ### Coordinate transformation and alignment parameters The coordinates of the detector hit measured in the local sensor frame \(\mathbf{q}=(u_{s},v_{s},w_{s})^{\mathsf{T}}\) can be transformed subsequently to the coordinates in the next reference frame, namely, in the ladder frame (\(\mathbf{r}_{L}\)), in the layer frame (\(\mathbf{r}_{P}\)), and in the global tracker frame (\(\mathbf{r}_{g}\)), as: \[\mathbf{r}_{L} =\mathbf{R}_{s}^{\mathsf{T}}\Delta\mathbf{R}_{s}(\mathbf{q}+\Delta \mathbf{q}_{s})+\mathbf{r}_{0s} \tag{1}\] \[\mathbf{r}_{P} =\mathbf{R}_{L}^{\mathsf{T}}\Delta\mathbf{R}_{L}(\mathbf{r}_{L}+ \Delta\mathbf{q}_{L})+\mathbf{r}_{0L}\] (2) \[\mathbf{r}_{g} =\mathbf{R}_{P}^{\mathsf{T}}\Delta\mathbf{R}_{P}(\mathbf{r}_{P}+ \Delta\mathbf{q}_{P})+\mathbf{r}_{0P} \tag{3}\] where \(\mathbf{q}+\Delta\mathbf{q}_{s}\), \(\mathbf{r}_{L}+\Delta\mathbf{q}_{L}\), and \(\mathbf{r}_{P}+\Delta\mathbf{q}_{P}\) are the hit coordinates in the frames of the sensor, ladder, and layer respectively including small corrections on their individual position shifts of \(\Delta\mathbf{q}_{s}\), \(\Delta\mathbf{q}_{L}\), and \(\Delta\mathbf{q}_{P}\); \(\mathbf{R}_{s}^{\mathsf{T}}\), \(\mathbf{R}_{L}^{\mathsf{T}}\), and \(\mathbf{R}_{P}^{\mathsf{T}}\) are the nominal rotation matrices from the sensor into the ladder, from the ladder into the layer, and from the layer into the tracker respectively and \(\Delta\mathbf{R}_{s}\), \(\Delta\mathbf{R}_{L}\), and \(\Delta\mathbf{R}_{P}\) are their small individual corrections; and \(\mathbf{r}_{0s}\), \(\mathbf{r}_{0L}\), and \(\mathbf{r}_{0P}\) are the nominal positions of the sensor, ladder, and layer origin points in the next frame of the ladder, layer, and tracker respectively. The corrections of each module displacement by an offset \(\Delta\mathbf{q}_{i}=(\Delta u,\Delta v,\Delta w)^{\mathsf{T}}\) and a rotation \(\Delta\mathbf{R}_{i}=\Delta\mathbf{R}_{i}^{\gamma}\Delta\mathbf{R}_{i}^{ \beta}\Delta\mathbf{R}_{i}^{\alpha}\) have to be determined from the alignment procedure, where \(\Delta\mathbf{R}_{i}^{\alpha}\), \(\Delta\mathbf{R}_{i}^{\beta}\), and \(\Delta\mathbf{R}_{i}^{\gamma}\) are the decomposed rotation matrices defined by angles of rotation \(\alpha\), \(\beta\), and \(\gamma\) around the \(u\)-axis, the new \(v\)-axis, and the new \(w\)-axis (Fig. 
3): \[\Delta\mathbf{R}_{i}^{\alpha}=\begin{pmatrix}1&0&0\\ 0&\cos\alpha&\sin\alpha\\ 0&-\sin\alpha&\cos\alpha\end{pmatrix}\ \Delta\mathbf{R}_{i}^{\beta}= \begin{pmatrix}\cos\beta&0&-\sin\beta\\ 0&1&0\\ \sin\beta&0&\cos\beta\end{pmatrix}\] \[\Delta\mathbf{R}_{i}^{\gamma}=\begin{pmatrix}\cos\gamma&\sin\gamma&0\\ -\sin\gamma&\cos\gamma&0\\ 0&0&1\end{pmatrix} \tag{4}\] In the small-angle approximation, the correction matrix for rotation becomes: \[\Delta\mathbf{R}_{i}=\Delta\mathbf{R}_{i}^{\gamma}\Delta\mathbf{R}_{i}^{\beta} \Delta\mathbf{R}_{i}^{\alpha}=\begin{pmatrix}1&\gamma&-\beta\\ -\gamma&1&\alpha\\ \beta&-\alpha&1\end{pmatrix} \tag{5}\] The transformation of a hit coordinate from the local sensor frame, \(\boldsymbol{q}\), to the global tracker frame, \(\boldsymbol{r}_{g}\), is given by: \[\boldsymbol{r}_{g}{\simeq}\mathbf{R}^{\mathsf{T}}(\boldsymbol{q}+\Delta \boldsymbol{q})+\boldsymbol{r}_{0} \tag{6}\] where \(\Delta\boldsymbol{q}\) is the total equivalent displacement correction in the local sensor frame including alignment parameters for all composite detector structures of the sensor, ladder, and layer, \(\mathbf{R}^{\mathsf{T}}\) is the nominal rotation matrix from the sensor into the global tracker frame, and \(\boldsymbol{r}_{0}\) is the nominal position of the sensor origin point in the global tracker frame. The definitions of \(\Delta\boldsymbol{q}\), \(\mathbf{R}^{\mathsf{T}}\), and \(\boldsymbol{r}_{0}\) and the detailed calculation can be found in Appendix A. ### Coordinate measurement residual and its derivatives with respect to the alignment parameters The coordinate measurement (hit) residual \(\boldsymbol{\varepsilon}\) is defined as the spatial difference between the predicted position of the track \(\boldsymbol{q}_{p}\) and the measured position of the detector hit \(\boldsymbol{q}\) in the sensor plane (local sensor frame), as: \[\boldsymbol{\varepsilon}=\boldsymbol{q}_{p}-\boldsymbol{q} \tag{7}\] The predicted position of the track in the sensor plane, \(\boldsymbol{q}_{p}\), is only sensitive to the sensor displacement, \(\Delta\boldsymbol{q}\), along the \(w_{s}\)-axis (Fig. 3 (a)). Such displacement will introduce a change of the track intersection position \(\Delta\boldsymbol{q}_{p}\) as: \[\Delta\boldsymbol{q}_{p}=\mathbf{P}_{p}\Delta\boldsymbol{q} \tag{8}\] where \[\mathbf{P}_{p}=\begin{pmatrix}0&0&\frac{dw_{s}^{p}}{dw_{s}^{p}}\\ 0&0&\frac{dw_{s}^{p}}{dw_{s}^{p}}\\ 0&0&1\end{pmatrix} \tag{9}\] The quantities \(du_{s}^{p}/dw_{s}^{p}\) and \(dv_{s}^{p}/dw_{s}^{p}\) are the track projected directions in the sensor \(u_{s}w_{s}\)-plane and \(v_{s}w_{s}\)-plane respectively. Hence, the total correction to the residual for the detector module displacements is: \[\Delta\boldsymbol{\varepsilon}=\Delta\boldsymbol{q}_{p}-\Delta\boldsymbol{q} =\mathbf{P}\Delta\boldsymbol{q} \tag{10}\] where \[\mathbf{P}=\mathbf{P}_{p}-\mathbf{E}=\begin{pmatrix}-1&0&\frac{du^{p}}{dw_{s}^{p}} \\ 0&-1&\frac{dw_{s}^{p}}{dw_{s}^{p}}\\ 0&0&0\end{pmatrix} \tag{11}\] and \(\mathbf{E}\) is the unit matrix. From Eq.(A.3) and Eq.(10), all the partial derivatives of the residual with respect to the alignment parameters can be calculated. 
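Before listing them, the linearization can be checked numerically. The following sketch (plain NumPy, not part of the AMS reconstruction software) compares the exact rotation correction of Eq. (4) with the small-angle form of Eq. (5), and verifies the sensor-level derivative matrix given in Eq. (13) below by finite differences of the residual correction of Eq. (10).

```python
import numpy as np

def rot_exact(a, b, g):
    """Exact rotation correction of Eq. (4): R_gamma @ R_beta @ R_alpha."""
    Ra = np.array([[1, 0, 0], [0, np.cos(a), np.sin(a)], [0, -np.sin(a), np.cos(a)]])
    Rb = np.array([[np.cos(b), 0, -np.sin(b)], [0, 1, 0], [np.sin(b), 0, np.cos(b)]])
    Rg = np.array([[np.cos(g), np.sin(g), 0], [-np.sin(g), np.cos(g), 0], [0, 0, 1]])
    return Rg @ Rb @ Ra

def rot_small(a, b, g):
    """Small-angle approximation of Eq. (5)."""
    return np.array([[1, g, -b], [-g, 1, a], [b, -a, 1]])

# track slopes in the sensor frame define P of Eq. (11)
du_dw, dv_dw = 0.3, -0.2
P = np.array([[-1, 0, du_dw], [0, -1, dv_dw], [0, 0, 0]])

q = np.array([12.0, -5.0, 0.0])   # measured hit in the sensor plane (w_s = 0)

def residual_correction(p):
    """Delta_eps = P * Delta_q for parameters p = (du, dv, dw, alpha, beta, gamma),
    cf. Eqs. (1), (10); the exact rotation is used for the displacement."""
    dq = rot_exact(*p[3:]) @ (q + p[:3]) - q
    return P @ dq

# analytic Jacobian of Eq. (13): P @ d(Delta_q)/dp, with w_s = 0
dq_dp = np.array([[1, 0, 0, 0, -q[2], q[1]],
                  [0, 1, 0, q[2], 0, -q[0]],
                  [0, 0, 1, -q[1], q[0], 0]])
analytic = P @ dq_dp

# finite-difference check around p = 0
h = 1e-6
numeric = np.column_stack([
    (residual_correction(h * np.eye(6)[k]) - residual_correction(np.zeros(6))) / h
    for k in range(6)])
print(np.abs(numeric - analytic).max())   # agrees up to finite-difference error

# small-angle vs exact rotation for typical alignment angles (~1e-4 rad)
a, b, g = 1e-4, -2e-4, 5e-5
print(np.abs(rot_exact(a, b, g) - rot_small(a, b, g)).max())   # second-order small
```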
Some examples are listed as follows: \[\frac{\partial\boldsymbol{\varepsilon}}{\partial u_{s}} =\mathbf{P}\boldsymbol{e}_{1}\] \[\frac{\partial\boldsymbol{\varepsilon}}{\partial u_{L}} =\mathbf{P}\mathbf{R}_{s}\boldsymbol{e}_{1}\] \[\frac{\partial\boldsymbol{\varepsilon}}{\partial u_{P}} =\mathbf{P}\mathbf{R}_{s}\mathbf{R}_{L}\boldsymbol{e}_{1}\] \[\frac{\partial\boldsymbol{\varepsilon}}{\partial\alpha_{s}} =\mathbf{P}\frac{\partial\Delta\mathbf{R}_{s}}{\partial\alpha_{s }}\boldsymbol{q} \tag{12}\] \[\frac{\partial\boldsymbol{\varepsilon}}{\partial\alpha_{L}} =\mathbf{P}\mathbf{R}_{s}\frac{\partial\Delta\mathbf{R}_{L}}{ \partial\alpha_{L}}(\mathbf{R}_{s}^{\mathsf{T}}\boldsymbol{q}+\boldsymbol{r} _{0s})=\mathbf{P}\mathbf{R}_{s}\frac{\partial\Delta\mathbf{R}_{L}}{\partial \alpha_{L}}\hat{\boldsymbol{r}}_{L}\] \[\frac{\partial\boldsymbol{\varepsilon}}{\partial\alpha_{P}} =\mathbf{P}\mathbf{R}_{s}\mathbf{R}_{L}\frac{\partial\Delta \mathbf{R}_{P}}{\partial\alpha_{P}}\Big{[}\mathbf{R}_{L}^{\mathsf{T}}( \mathbf{R}_{s}^{\mathsf{T}}\boldsymbol{q}+\boldsymbol{r}_{0s})+\boldsymbol{r} _{0L}\Big{]}\] \[=\mathbf{P}\mathbf{R}_{s}\mathbf{R}_{L}\frac{\partial\Delta \mathbf{R}_{P}}{\partial\alpha_{P}}\hat{\boldsymbol{r}}_{P}\] where \(\boldsymbol{e}_{1}=(1,0,0)^{\mathsf{T}}\) is the unit vector of the \(u\)-axis, and \(\hat{\boldsymbol{r}}_{L}=(\hat{u}_{L},\hat{v}_{L},\hat{w}_{L})^{\mathsf{T}}= \mathbf{R}_{s}^{\mathsf{T}}\boldsymbol{q}+\boldsymbol{r}_{0s}\) and \(\hat{\boldsymbol{r}}_{P}=(\hat{u}_{P},\hat{v}_{P},\hat{w}_{P})^{\mathsf{T}}= \mathbf{R}_{L}^{\mathsf{T}}(\mathbf{R}_{s}^{\mathsf{T}}\boldsymbol{q}+ \boldsymbol{r}_{0s})+\boldsymbol{r}_{0L}\) are the hit coordinates in the frames of the ladder and layer respectively without displacement. Substituting the rotation (Eq.(5)) derivatives, the partial derivatives of the residual with respect to the alignment parameters of the sensor \((\partial\boldsymbol{\varepsilon}/\partial\boldsymbol{p}_{s})\), ladder \((\partial\boldsymbol{\varepsilon}/\partial\boldsymbol{p}_{L})\), and layer \((\partial\boldsymbol{\varepsilon}/\boldsymbol{p}_{P})\) are obtained as: \[\frac{\partial\boldsymbol{\varepsilon}}{\partial\boldsymbol{p}_{s}} =\left(\frac{\partial\boldsymbol{\varepsilon}}{\partial u_{s}}, \frac{\partial\boldsymbol{\varepsilon}}{\partial v_{s}},\frac{\partial \boldsymbol{\varepsilon}}{\partial w_{s}},\frac{\partial\boldsymbol{ \varepsilon}}{\partial\alpha_{s}},\frac{\partial\boldsymbol{\varepsilon}}{ \partial\beta_{s}},\frac{\partial\boldsymbol{\varepsilon}}{\partial\gamma_{s}} \right)=\mathbf{P}\frac{\partial\boldsymbol{q}}{\partial\boldsymbol{p}_{s}}\] \[=\mathbf{P}\begin{pmatrix}1&0&0&0&-w_{s}=0&v_{s}\\ 0&1&0&w_{s}=0&0&-u_{s}\\ 0&0&1&-v_{s}&u_{s}&0\end{pmatrix} \tag{13}\] \[\frac{\partial\boldsymbol{\varepsilon}}{\partial\boldsymbol{p}_{L}} =\left(\frac{\partial\boldsymbol{\varepsilon}}{\partial u_{L}}, \frac{\partial\boldsymbol{\varepsilon}}{\partial v_{L}},\frac{\partial \boldsymbol{\varepsilon}}{\partial w_{L}},\frac{\partial\boldsymbol{ \varepsilon}}{\partial\alpha_{L}},\frac{\partial\boldsymbol{\varepsilon}}{ \partial\beta_{L}},\frac{\partial\boldsymbol{\varepsilon}}{\partial\gamma_{L}} \right)=\mathbf{P}\frac{\partial\boldsymbol{q}}{\partial\boldsymbol{p}_{L}}\] \[=\mathbf{P}\mathbf{R}_{s}\begin{pmatrix}1&0&0&0&-\hat{w}_{L}&\hat {v}_{L}\\ 0&1&0&\hat{w}_{L}&0&-\hat{u}_{L}\\ 0&0&1&-\hat{v}_{L}&\hat{u}_{L}&0\end{pmatrix}\] (14) \[\frac{\partial\boldsymbol{\varepsilon}}{\partial\boldsymbol{p}_{P}} =\left(\frac{\partial\boldsymbol{\varepsilon}}{\partial 
u_{P}}, \frac{\partial\boldsymbol{\varepsilon}}{\partial v_{P}},\frac{\partial \boldsymbol{\varepsilon}}{\partial w_{P}},\frac{\partial\boldsymbol{\varepsilon}}{ \partial\alpha_{P}},\frac{\partial\boldsymbol{\varepsilon}}{\partial\beta_{P}}, \frac{\partial\boldsymbol{\varepsilon}}{\partial\gamma_{P}}\right)=\mathbf{P} \frac{\partial\boldsymbol{q}}{\partial\boldsymbol{p}_{P}}\] \[=\mathbf{P}\mathbf{R}_{s}\mathbf{R}_{L}\begin{pmatrix}1&0&0&0&- \hat{w}_{P}&\hat{v}_{P}\\ 0&1&0&\hat{w}_{P}&0&-\hat{u}_{P}\\ 0&0&1&-\hat{v}_{P}&\hat{u}_{P}&0\end{pmatrix} \tag{15}\] The alignment parameters of the sensor, ladder, and layer are \(\Delta\mathbf{p}_{s}=(\Delta u_{s},\Delta v_{s},\Delta w_{s},\alpha_{s},\)\(\beta_{s},\gamma_{s})^{\mathsf{T}}\), \(\Delta\mathbf{p}_{L}=(\Delta u_{L},\Delta v_{L},\Delta w_{L},\alpha_{L},\beta_{L}, \gamma_{L})^{\mathsf{T}}\), and \(\Delta\mathbf{p}_{P}=(\Delta u_{P},\Delta v_{P},\Delta w_{P},\alpha_{P},\beta_{P}, \gamma_{P})^{\mathsf{T}}\) correspondingly. ## 4 Constraints of the composite alignment parameters For a composite detector which consists of several subcomponents, those modules on the same support structure are likely to have highly correlated displacements. Applying the alignment directly on a single level of the hierarchy such as the sensors ignores the mechanical correlations and distorts the detector structure. In the composite alignment, the alignment parameters in each level are defined relative to the next support structure as shown in Eqs.(1)(2)(3) and all the alignment parameters for all the detector modules (sensors, ladders, and layers) are aligned simultaneously. In this way, all correlations are considered and the alignment accuracy is optimized. If all composite modules are aligned at the same time without constraints, there will be no unique solution. For example, all the sensors in a ladder can move in one direction and the ladder can move in the opposite direction, which results in no movement of any sensors. To avoid this, 6 degrees of freedom must be constrained for every group of subcomponents on the same support structure. The expressions of all the constraints are derived by our developed grid method described in the following sections 4.1, 4.2, and 4.3. Figure 4: Schematics of (a) a ladder and (b) a layer divided into fine uniform grids. In addition, the stretching and shear deformations of the detector as subsets of the linear coordinate transformation will need specific constraints, as they are not sensed by the track alignment procedure. In section 4.4, we present our study to deal with this issue. ### Constraints of sensors in a ladder Each sensor in a ladder can move individually. To investigate the displacements of the sensors with respect to the ladder, a ladder is divided into fine uniform grids spanning over all its sensors, as illustrated in Fig. 4 (a). A lattice point \(\mathbf{m}^{i}\) represents a movement in the \(i\)-th grid position induced by the displacement of the sensor. 
The movement of the ladder \(\Delta\mathbf{q}\) as a result of the displacements of all its sensors is estimated from all the lattices via \(\chi^{2}\)-minimization: \[\chi^{2}=\sum_{i}|\mathbf{m}^{i}-\Delta\mathbf{q}|^{2} \tag{16}\] The derivatives of the minimized \(\chi^{2}\) with respect to the ladder movement parameters \(\Delta\mathbf{p}_{L}\) are zero: \[\frac{\partial\chi^{2}}{\partial\mathbf{p}_{L}}=\sum_{i}2\left(\frac{\partial\mathbf{q }}{\partial\mathbf{p}_{L}}\right)^{\mathsf{T}}_{i}(\mathbf{m}^{i}-\Delta\mathbf{q})=\mathbf{0} \tag{17}\] The displacements of sensors in a ladder are required to result in zero overall ladder displacement as \(\Delta\mathbf{q}(\Delta u_{L}^{s},\Delta v_{L}^{s},\Delta w_{L}^{s},\alpha_{L}^{s },\beta_{L}^{s},\gamma_{L}^{s})=\mathbf{0}\). Substituting \(\Delta\mathbf{q}=\mathbf{0}\) and \(\mathbf{m}^{i}=(\partial\mathbf{q}/\partial\mathbf{p}_{s})_{i}\Delta\mathbf{p}_{s}^{i}\) (the first order approximation) into Eq.(17), 6 constraints on the alignment parameters of sensors in a ladder are obtained by summing up all the lattice points, as: \[\sum_{i}\left(\frac{\partial\mathbf{q}}{\partial\mathbf{p}_{L}}\right)^{\mathsf{T}}_{i }\left(\frac{\partial\mathbf{q}}{\partial\mathbf{p}_{s}}\right)_{i}\Delta\mathbf{p}_{s}^{ i}=\mathbf{0} \tag{18}\] where \[\begin{split}\left(\frac{\partial\mathbf{q}}{\partial\mathbf{p}_{L}} \right)^{\mathsf{T}}_{i}&=\left(\frac{\partial\mathbf{q}}{\partial u_ {L}},\frac{\partial\mathbf{q}}{\partial v_{L}},\frac{\partial\mathbf{q}}{\partial w_{ L}},\frac{\partial\mathbf{q}}{\partial\alpha_{L}},\frac{\partial\mathbf{q}}{\partial\beta_{L}}, \frac{\partial\mathbf{q}}{\partial\gamma_{L}}\right)^{\mathsf{T}}_{i}\\ &=\begin{pmatrix}1&0&0&0&-\hat{w}_{L}^{i}&\hat{v}_{L}^{i}\\ 0&1&0&\hat{w}_{L}^{i}&0&-\hat{u}_{L}^{i}\\ 0&0&1&-\hat{v}_{L}^{i}&\hat{u}_{L}^{i}&0\end{pmatrix}^{\mathsf{T}}\mathbf{R}_{ s}^{i\mathsf{T}}\\ \end{split} \tag{19}\] is transposed from Eq.(14), \[\left(\frac{\partial\mathbf{q}}{\partial\mathbf{p}_{s}}\right)_{i}=\begin{pmatrix}1&0& 0&0&-w_{s}^{i}=0&v_{s}^{i}\\ 0&1&0&w_{s}^{i}=0&0&-u_{s}^{i}\\ 0&0&1&-v_{s}^{i}&u_{s}^{i}&0\end{pmatrix} \tag{20}\] is from Eq.(13), and \[\Delta\mathbf{p}_{s}^{i}=(\Delta u_{s}^{i},\Delta v_{s}^{i},\Delta w_{s}^{i}, \alpha_{s}^{i},\beta_{s}^{i},\gamma_{s}^{i})^{\mathsf{T}} \tag{21}\] ### Constraints of ladders in a layer Similarly, to study the displacements of the ladders with respect to the layer, a layer is divided into fine uniform grids as illustrated in Fig. 4 (b). A lattice point \(\mathbf{m}^{i}=(\partial\mathbf{q}/\partial\mathbf{p}_{L})_{i}\Delta\mathbf{p}_{L}^{i}\) represents a movement in the \(i\)-th grid position induced by the displacement of the ladder. 
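Before deriving the ladder-in-layer constraints, the sensor-in-ladder constraint of Eq.(18) can be made concrete with the following sketch (Python/NumPy; a toy ladder with identity nominal rotations \(\mathbf{R}_{s}=\mathbf{E}\) and invented dimensions, not the real AMS geometry). It assembles the \(6\times 6\) blocks \((\partial\boldsymbol{q}/\partial\boldsymbol{p}_{L})_{i}^{\mathsf{T}}(\partial\boldsymbol{q}/\partial\boldsymbol{p}_{s})_{i}\) on a uniform grid and shows that a common translation of all sensors violates the constraint, so such a collective mode is attributed to the ladder parameters instead.

```python
import numpy as np

def dq_dp(u, v, w):
    """Derivative of the local hit position w.r.t. the 6 alignment
    parameters (du, dv, dw, alpha, beta, gamma), cf. Eqs.(13)(19)(20)."""
    return np.array([[1, 0, 0,  0, -w,  v],
                     [0, 1, 0,  w,  0, -u],
                     [0, 0, 1, -v,  u,  0]], dtype=float)

# Toy ladder: 3 sensors spaced 70 mm along u, with a lattice of grid points per sensor
sensor_u0 = [-70.0, 0.0, 70.0]            # toy sensor origins in the ladder frame [mm]
grid_u = np.linspace(-30, 30, 7)          # lattice points inside one sensor [mm]
grid_v = np.linspace(-30, 30, 7)

# 6 constraint equations coupling the 6*(number of sensors) sensor parameters,
# Eq.(18): sum_i (dq/dp_L)_i^T (dq/dp_s)_i Delta p_s^i = 0
n_sensors = len(sensor_u0)
A = np.zeros((6, 6 * n_sensors))          # constraint coefficient matrix
for s, u0 in enumerate(sensor_u0):
    for u in grid_u:
        for v in grid_v:
            dq_dpL = dq_dp(u0 + u, v, 0.0)   # lattice point in the ladder frame
            dq_dps = dq_dp(u, v, 0.0)        # same lattice point in the sensor frame
            A[:, 6*s:6*s+6] += dq_dpL.T @ dq_dps

# A common shift of all sensors could be compensated by an opposite ladder shift;
# the constraint gives a non-zero left-hand side for it, i.e. it forbids assigning
# this collective mode to the sensor parameters.
dp_common = np.tile([0.010, 0, 0, 0, 0, 0], n_sensors)   # all sensors +10 um in u
print(A @ dp_common)
```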
Using the same method, 6 constraints on the alignment parameters of ladders in a layer are derived as: \[\sum_{i}\left(\frac{\partial\mathbf{q}}{\partial\mathbf{p}_{P}}\right)_{i}^{\mathsf{T} }\left(\frac{\partial\mathbf{q}}{\partial\mathbf{p}_{L}}\right)_{i}\Delta\mathbf{p}_{L}^{i }=\mathbf{0} \tag{22}\] where \[\left(\frac{\partial\mathbf{q}}{\partial\mathbf{p}_{P}}\right)_{i}^{ \mathsf{T}} =\left(\frac{\partial\mathbf{q}}{\partial u_{P}},\frac{\partial\mathbf{q} }{\partial v_{P}},\frac{\partial\mathbf{q}}{\partial w_{P}},\frac{\partial\mathbf{q}} {\partial\alpha_{P}},\frac{\partial\mathbf{q}}{\partial\beta_{P}},\frac{\partial \mathbf{q}}{\partial\gamma_{P}}\right)_{i}^{\mathsf{T}} \tag{23}\] \[=\begin{pmatrix}1&0&0&0&-\hat{w}_{P}^{i}&\hat{v}_{P}^{i}\\ 0&1&0&\hat{w}_{P}^{i}&0&-\hat{u}_{P}^{i}\\ 0&0&1&-\hat{v}_{P}^{i}&\hat{u}_{P}^{i}&0\end{pmatrix}^{\mathsf{T}}\mathbf{R}_{ L}^{i\mathsf{T}}\mathbf{R}_{s}^{i\mathsf{T}}\] is transposed from Eq.(15), \[\left(\frac{\partial\mathbf{q}}{\partial\mathbf{p}_{L}}\right)_{i}=\mathbf{R}_{s}^{i} \begin{pmatrix}1&0&0&0&-\hat{w}_{L}^{i}&\hat{v}_{L}^{i}\\ 0&1&0&\hat{w}_{L}^{i}&0&-\hat{u}_{L}^{i}\\ 0&0&1&-\hat{v}_{L}^{i}&\hat{u}_{L}^{i}&0\end{pmatrix} \tag{24}\] is from Eq.(14), and \[\Delta\mathbf{p}_{L}^{i}=(\Delta u_{L}^{i},\Delta v_{L}^{i},\Delta w_{L}^{i}, \alpha_{L}^{i},\beta_{L}^{i},\gamma_{L}^{i})^{\mathsf{T}} \tag{25}\] ### Constraints of layers in the tracker The composite structure of layers in the tracker also has to be constrained to factor out the translations and rotations of the whole detector and to establish the basic position and orientation of AMS. Considering mechanical and thermal stability, only the layers from the inner tracker (L2-L8), whose planes are firmly held by the carbon fiber cylinder, are used in the constraints. All the inner tracker layers are divided into fine grids of equal size with each (\(i\)-th) lattice point representing the layer displacement at that position, see Fig. 4 (b). 
By requiring the overall inner tracker to have neither translations nor rotations as \(\Delta\mathbf{p}_{g}=(\Delta x,\)\(\Delta y,\Delta z,\alpha,\beta,\gamma)^{\mathsf{T}}=\mathbf{0}\), the constraints on the alignment parameters of the inner tracker layers are obtained as: \[\sum_{i}\left(\frac{\partial\mathbf{q}}{\partial\mathbf{p}_{g}}\right)_{i}^{\mathsf{ T}}\left(\frac{\partial\mathbf{q}}{\partial\mathbf{p}_{P}}\right)_{i}\Delta\mathbf{p}_{P}^{i }=\mathbf{0} \tag{26}\] where \[\left(\frac{\partial\mathbf{q}}{\partial\mathbf{p}_{g}}\right)_{i}^{\mathsf{T}}= \begin{pmatrix}1&0&0&0&-\hat{z}^{i}&\hat{y}^{i}\\ 0&1&0&\hat{z}^{i}&0&-\hat{x}^{i}\\ 0&0&1&-\hat{y}^{i}&\hat{x}^{i}&0\end{pmatrix}^{\mathsf{T}}\mathbf{R}_{P}^{i \mathsf{T}}\mathbf{R}_{L}^{i\mathsf{T}}\mathbf{R}_{s}^{i\mathsf{T}} \tag{27}\] \(\hat{\mathbf{r}}^{i}_{g}=(\hat{x}^{i},\hat{y}^{i},\hat{z}^{i})^{\mathsf{T}}\) is the \(i\)-th lattice point position in the global tracker frame without displacement, \[\left(\frac{\partial\mathbf{q}}{\partial\mathbf{p}_{P}}\right)_{i}=\mathbf{R}^{i}_{s} \mathbf{R}^{i}_{L}\begin{pmatrix}1&0&0&0&-\hat{w}^{i}_{P}&\hat{v}^{i}_{P}\\ 0&1&0&\hat{w}^{i}_{P}&0&-\hat{u}^{i}_{P}\\ 0&0&1&-\hat{v}^{i}_{P}&\hat{u}^{i}_{P}&0\end{pmatrix} \tag{28}\] is from Eq.(15), and \[\Delta\mathbf{p}^{i}_{P}=(\Delta u^{i}_{P},\Delta v^{i}_{P},\Delta w^{i}_{P}, \alpha^{i}_{P},\beta^{i}_{P},\gamma^{i}_{P})^{\mathsf{T}} \tag{29}\] The grid density for calculation of Eq.(18), Eq.(22), or Eq.(26) is sufficiently large so that its contribution to the uncertainty of each constraint is negligible. ### Constraints of stretching and shear deformations The first alignment of the AMS tracker is based on the 400 GeV/c proton test beam, where the characteristics of tracks with given momenta in the magnetic field is equivalent to straight tracks. Any linear coordinate transformation will conserve the linearity of a straight track and hence not be sensed by the track alignment procedure. Conversely, without specific constraints, an unstable system of the alignment due to \(\chi^{2}\) invariance could introduce this kind of transformation, manifested as an extra detector displacement or deformation. A general linear transformation from a vector \(\mathbf{r}=(x,y,z)^{\mathsf{T}}\) to a new vector \(\mathbf{r}^{\prime}=(x^{\prime},y^{\prime},z^{\prime})^{\mathsf{T}}\) can be expressed by the matrix equation: \[\mathbf{r}^{\prime}=\mathbf{D}\mathbf{r}+\mathbf{d} \tag{30}\] Figure 5: Schematics of the inner tracker deformations: (a) stretching, (b) shearing, and (c) shearing section view. where \(\mathbf{D}\) is a 3\(\times\)3 matrix, called a transformation matrix, and \(\boldsymbol{d}=(d_{1},d_{2},d_{3})^{\mathsf{T}}\) is a vector representing a translation. Clearly, in a linear transformation, there are a total of 12 free parameters (3 in \(\boldsymbol{d}\) and 9 in \(\mathbf{D}\)), which can be categorized to describe the following decomposed transformations: 1. 3 translations represented by 3 elements in \(\boldsymbol{d}\) 2. 3 rotations whose matrix forms are shown in Eq.(4) 3. 3 stretchings with each leading to an expansion or shrinking of the object along the corresponding axis. As an example, Fig. 5 (a) shows the shrinking along the \(z\)-axis 4. 3 shearings with each deforming the object shape on the corresponding projection plane as the one in Fig. 5 (b) shows the shearing on the \(yz\)-plane The outcome of (i) translations and (ii) rotations is a rigid-body displacement without changing the object shape or size. 
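The insensitivity of straight-track alignment to such transformations can be verified directly, as in the sketch below (Python/NumPy; arbitrary toy numbers): points sampled along a straight line remain exactly collinear after a general linear transformation \(\mathbf{r}^{\prime}=\mathbf{D}\boldsymbol{r}+\boldsymbol{d}\) that combines stretching and shearing, so straight-track residuals offer no handle on \(\mathbf{D}\).

```python
import numpy as np

# Points along a straight track: r(t) = r0 + t * direction
t = np.linspace(0.0, 1.0, 9)
r0, direction = np.array([1.0, -2.0, 0.0]), np.array([0.3, -0.1, 1.0])
pts = r0 + np.outer(t, direction)                      # shape (9, 3)

# A general linear transformation r' = D r + d (Eq.(30)); this toy D combines
# a small stretching along z with a small shearing in the yz-plane.
D = np.array([[1.0, 0.0,   0.0],
              [0.0, 1.0,   5e-4],
              [0.0, 5e-4,  0.998]])
d = np.array([0.1, -0.05, 0.2])
pts_t = pts @ D.T + d

def collinearity(p):
    """Second-to-first singular value ratio of the centered points (0 = straight line)."""
    q = p - p.mean(axis=0)
    s = np.linalg.svd(q, compute_uv=False)
    return s[1] / s[0]

print(collinearity(pts), collinearity(pts_t))   # both ~0 at machine precision
```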
The 3 translations and 3 rotations of the inner tracker have already been constrained to be zero as previously discussed in section 4.3. Next, we will focus on (iii) stretchings and (iv) shearings.

#### 4.4.1 Stretching

The matrix of stretching \(\mathbf{D}_{t}\) is diagonal: \[\mathbf{D}_{t}=\begin{pmatrix}\lambda_{1}&0&0\\ 0&\lambda_{2}&0\\ 0&0&\lambda_{3}\end{pmatrix} \tag{31}\] where \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) are the extension-contraction coefficients along the \(x\)-, \(y\)-, and \(z\)-axes, respectively. Stretching deformations are connected to the detector structure. As shown in Fig. 3, the silicon sensors through the ladder structure are tiled in the \(xy\)-plane to form all layers. The stretching deformation in the \(xy\)-plane is constrained to some extent by the exactly known size of the sensors. During exposure to the proton test beam or to cosmic rays in space, the incoming particles always enter the detector in various directions and positions. The distance between the neighboring sensors or ladders is well determined by many tracks which cross them and the sensors from other layers in front and behind. Therefore, during alignment, the extension-contraction coefficients \(\lambda_{1}\) and \(\lambda_{2}\) are naturally constrained by the intrinsic size of the sensors, either the sensors themselves or the ones in front/behind, and no external constraints are needed. The extension-contraction coefficient along the \(z\)-axis, \(\lambda_{3}\), is not restricted by any sensor structure (Fig. 5 (a)) and has to be defined by an external constraint. According to Eq.(31), the stretching length along the \(z\)-axis, \(\Delta z\), is described as: \[\Delta z=(\lambda_{3}-1)z=kz \tag{32}\] where \(k\equiv(\lambda_{3}-1)=0\) corresponds to no stretching deformation. We can use the same grid method as in the previous section to derive the corresponding constraint on the alignment parameters. As seen in Fig. 4 (b), the \(i\)-th lattice point of the inner tracker \(m^{i}=\Delta z^{i}\) represents the \(z\) position shift induced by the displacement of the layer at that position. The stretching parameter \(k\) is estimated from all the lattice points via \(\chi^{2}\)-minimization, as: \[\chi^{2}=\sum_{i}(m^{i}-kz^{i})^{2} \tag{33}\] where the derivative of \(\chi^{2}\) with respect to \(k\) is zero: \[\frac{\partial\chi^{2}}{\partial k}=\sum_{i}2z^{i}(m^{i}-kz^{i})=0 \tag{34}\] The constraint of \(k=0\) leads to: \[\sum_{i}z^{i}m^{i}=\sum_{i}z^{i}\Delta z^{i}=0 \tag{35}\] \(\Delta z^{i}\) in \(\Delta\mathbf{p}_{g}^{i}=(\Delta x^{i},\Delta y^{i},\Delta z^{i},\alpha^{i},\beta^{i},\gamma^{i})^{\mathsf{T}}\) can be replaced by the layer alignment parameters of \(\Delta\mathbf{p}_{P}^{i}=(\Delta u_{P}^{i},\Delta v_{P}^{i},\Delta w_{P}^{i},\alpha_{P}^{i},\beta_{P}^{i},\gamma_{P}^{i})^{\mathsf{T}}\) as: \[\Delta\mathbf{p}_{g}^{i}=\left(\frac{\partial\mathbf{q}}{\partial\mathbf{p}_{g}}\right)_{i}^{\mathsf{T}}\left(\frac{\partial\mathbf{q}}{\partial\mathbf{p}_{P}}\right)_{i}\Delta\mathbf{p}_{P}^{i} \tag{36}\] where \((\partial\mathbf{q}/\partial\mathbf{p}_{g})_{i}^{\mathsf{T}}\) is from Eq.(27) and \((\partial\mathbf{q}/\partial\mathbf{p}_{P})_{i}\) is from Eq.(28).
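As a numerical illustration of Eqs.(32)-(35), the sketch below (Python/NumPy; toy layer positions and displacements, not the real geometry) estimates the stretching parameter \(k\) from lattice points via the least-squares condition of Eq.(34), and shows that the constraint \(\sum_{i}z^{i}\Delta z^{i}=0\) of Eq.(35) rejects a pure stretching mode while being satisfied by displacements with the stretching component removed.

```python
import numpy as np

# Toy inner-tracker layer z positions [mm] and the number of lattice points per layer
z_layers = np.array([-130., -100., -60., -20., 20., 60., 100.])
n_lattice = 200

# Case 1: a pure stretching of the detector along z, Eq.(32): dz = k * z
k_true = 2e-4
dz_layers = k_true * z_layers

# Collect lattice points (all points of one layer share the same z and dz)
z = np.repeat(z_layers, n_lattice)
m = np.repeat(dz_layers, n_lattice)

# Least-squares estimate of k from Eq.(34): sum_i z_i (m_i - k z_i) = 0
k_hat = np.sum(z * m) / np.sum(z * z)
print(k_hat)                       # recovers 2e-4: the displacement is a pure stretching

# The constraint of Eq.(35) rejects this otherwise unsensed mode:
print(np.sum(z * m))               # non-zero -> forbidden by sum_i z^i dz^i = 0

# Case 2: random layer shifts with the stretching component projected out
dz_rand = np.random.default_rng(0).normal(0, 0.01, z_layers.size)
dz_rand -= z_layers * np.sum(z_layers * dz_rand) / np.sum(z_layers**2)
print(np.sum(np.repeat(dz_rand, n_lattice) * z))   # ~0: the constraint is satisfied
```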
For the AMS inner tracker structure, the constraint of Eq.(35) can be simplified as: \[\sum_{l=2}^{8}\mathbf{R}_{P}^{\mathsf{T}l}(3,3)\Delta w_{P}^{l}z^{l}A^{l}=0 \tag{37}\] where \(\mathbf{R}_{P}^{\mathsf{T}l}(3,3)\) is the (3,3) entry of the \(l\)-th layer rotation matrix, \(\Delta w_{P}^{l}\) is the \(l\)-th layer alignment parameter on the translation along the \(w_{P}\)-axis, and \(z^{l}\) and \(A^{l}\) are the \(l\)-th layer \(z\) position and surface area respectively. #### 4.4.2 Shearing Three individual matrices of pure shearing \(\mathbf{D}_{h}^{\kappa 1}\), \(\mathbf{D}_{h}^{\kappa 2}\), and \(\mathbf{D}_{h}^{\kappa 3}\) are given by: \[\mathbf{D}_{h}^{\kappa 1}=\begin{pmatrix}1&0&0\\ 0&1&\kappa_{1}/2\\ 0&\kappa_{1}/2&1\end{pmatrix}\quad\mathbf{D}_{h}^{\kappa 2}=\begin{pmatrix}1&0& \kappa_{2}/2\\ 0&1&0\\ \kappa_{2}/2&0&1\end{pmatrix}\] \[\mathbf{D}_{h}^{\kappa 3}=\begin{pmatrix}1&\kappa_{3}/2&0\\ \kappa_{3}/2&1&0\\ 0&0&1\end{pmatrix} \tag{38}\] where \(\kappa_{1}\), \(\kappa_{2}\), and \(\kappa_{3}\) are the shear strains on the \(yz\)-, \(xz\)-, and \(xy\)-planes, respectively. As seen, the pure shear matrices are symmetric in contrast with rotation matrices which are anti-symmetric as shown in Eq.(4). Using small angle and shear strain approximation, the product of matrices of shearing \(\mathbf{D}_{h}^{\kappa 1}\) and rotation \(\Delta\mathbf{R}^{\alpha}\) is: \[\mathbf{D}_{h}^{\kappa 1}\Delta\mathbf{R}^{\alpha}=\begin{pmatrix}1&0&0\\ 0&1&\kappa_{1}/2+\alpha\\ 0&\kappa_{1}/2-\alpha&1\end{pmatrix} \tag{39}\] When \(\alpha=\kappa_{1}/2\), \(\mathbf{D}_{h}^{\kappa 1}\Delta\mathbf{R}^{\alpha}\) becomes a simple shearing [14] along the \(y\)-axis on the \(yz\)-plane as illustrated in Fig. 5 (c). In the presence of both shearing and rotation in the \(yz\)-plane, the change of the object position along the \(y\)-axis, \(\Delta y\), obtained from Eq.(39) is: \[\Delta y=(\kappa_{1}/2+\alpha)z=k_{1}z \tag{40}\] where the requirment of the object to have neither rotation \(\alpha=0\) nor shear deformation \(\kappa_{1}=0\), defines \(k_{1}=0\). Repeating \(\chi^{2}\) minimization to Eq.(40) together with the constraint \(k_{1}=0\) leads to: \[\sum_{i}z^{i}\Delta y^{i}=0 \tag{41}\] where \(\Delta y^{i}\) can be replaced by the layer alignment parameters as shown in Eq.(36). For the AMS inner tracker structure, the corresponding constraint is simplified to be: \[\sum_{l=2}^{8}\mathbf{R}_{P}^{\mathbb{T}l}(2,2)\Delta v_{P}^{l}z^{l}A^{l}=0 \tag{42}\] where \(\mathbf{R}_{P}^{\mathbb{T}l}(2,2)\) is the (2,2) entry of the \(l\)-th layer rotation matrix, \(\Delta v_{P}^{l}\) is the \(l\)-th layer alignment parameter on the translation along the \(v_{P}\)-axis, and \(z^{l}\) and \(A^{l}\) are the \(l\)-th layer \(z\) position and surface area respectively. From Eq.(39), we can also study the change of the object position along the \(z\)-axis instead of the \(y\)-axis to derive another constraint on the \(yz\)-plane as: \[\Delta z=(\kappa_{1}/2-\alpha)y=k_{1}^{\prime}y \tag{43}\] Nevertheless, given a rotation constraint on \(\alpha\), the constraints on \(k_{1}^{\prime}\) of Eq.(43) and \(k_{1}\) of Eq.(40) are not independent as \(k_{1}^{\prime}=k_{1}-2\alpha\), which means the \(k_{1}^{\prime}\) constraint is just a linear combination of the \(k_{1}\) constraint and the \(\alpha\) constraint. 
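This dependence can be checked with a few lines (Python/NumPy; toy values of the shear strain and rotation angle): building the combined transformation of Eq.(39) and reading off its off-diagonal entries reproduces \(k_{1}=\kappa_{1}/2+\alpha\) and \(k_{1}^{\prime}=\kappa_{1}/2-\alpha\), and hence \(k_{1}^{\prime}=k_{1}-2\alpha\).

```python
import numpy as np

kappa1, alpha = 8e-4, 3e-4                         # toy shear strain and rotation [rad]
D_h = np.array([[1, 0, 0],
                [0, 1, kappa1/2],
                [0, kappa1/2, 1]])                 # pure shearing on the yz-plane, Eq.(38)
R_a = np.array([[1, 0, 0],
                [0, 1, alpha],
                [0, -alpha, 1]])                   # small-angle rotation around x, Eq.(5)
M = D_h @ R_a                                      # combined transformation, Eq.(39)
k1, k1p = M[1, 2], M[2, 1]                         # = kappa1/2 + alpha, kappa1/2 - alpha
print(k1, k1p, np.isclose(k1p, k1 - 2*alpha))      # True: only two constraints are independent
```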
To restrict both rotation and shearing on the \(yz\)-plane, a pair of constraints on any of (\(\alpha\), \(k_{1}\)), (\(\alpha\), \(k_{1}^{\prime}\)), or (\(k_{1}\), \(k_{1}^{\prime}\)) are sufficient and they are equivalent to each other. For the \(xz\)-plane, which is similar to the \(yz\)-plane (Fig. 3), the requirment of the object to have neither rotation \(\beta=0\) nor shearing \(\kappa_{2}=0\), leads to: \[\sum_{i}z^{i}\Delta x^{i}=0 \tag{44}\] where \(\Delta x^{i}\) can be replaced by the layer alignment parameters as shown in Eq.(36). For the AMS inner tracker structure, the corresponding constraint can be simplified as: \[\sum_{l=2}^{8}\mathbf{R}_{P}^{\mathcal{I}l}(1,1)\Delta u_{P}^{l}z^{l}A^{l}=0 \tag{45}\] where \(\mathbf{R}_{P}^{\mathcal{I}l}(1,1)\) is the (1,1) entry of the \(l\)-th layer rotation matrix and \(\Delta u_{P}^{l}\) is the \(l\)-th layer alignment parameter on the translation along the \(u_{P}\)-axis. The detector structure of the \(xy\)-plane, where the sensors are tiled, is completely different from the \(yz\)- and \(xz\)-planes. The essence of shear deformation is a symmetric strain tensor that results in a change in angle. So, the shearing on the \(xy\)-plane to a sensor will shear the sensor surface and break the orthogonal system of the strips on the opposite sides, which is mechanically not allowed. In this sense, the pure shearing on a \(xy\)-plane or a layer, which leads to a homogeneous deformation of all detector microscopic components, is practically non-existent. Another kind of pseudo-shearing of a layer with only shifting the positions of its ladders along the \(x\)-axis (\(u_{P}\)-axis in Fig. 4 (b)) without deforming the ladders' shape, is also constrained by the intrinsic structure of the sensors in the track alignment procedure, where the relative position between neighboring ladders in a layer is well defined by many tracks crossing them and the sensors from other layers in front and behind. Accordingly, similar to \(\lambda_{1}\) and \(\lambda_{2}\) in the stretching deformation, the shearing strain \(\kappa_{3}\) also does not need external constraint. In this section, we have studied the 12 degrees of freedom in the linear transformation with each of them corresponding to a kind of detector displacement or deformation. They were all constrained: 1. 3 translations and 3 rotations by Eq.(26), 2. 2 stretchings and 1 shearing by the intrinsic size and shape of the sensors during track alignment, 3. 1 stretching by Eq.(35) or Eq.(37), 4. 2 shearings, one by Eq.(41) or Eq.(42) and the other by Eq.(44) or Eq.(45). ## 5 Global track alignment The global alignment method was first introduced in Ref. [15]. It is widely used in HEP and other fields [16][17][18]. In addition to this method, there are also other alignment methods, such as the one presented in Ref. [19]. In magnetic field, each track trajectory is characterized by a number of parameters (5 for a helix without multiple-scattering) which has to be determined from the track fitting procedure. Besides the position measurements, multiple scattering due to Coulomb interaction of the particle with the detector materials also impacts the accurate determination of the track. 
Taking into account the scattering angles being extra measurement quantities, for a given track \(i\), the track parameters \(\Delta\mathbf{q}_{i}\) are determined via \(\chi^{2}\) minimization [20]: \[\chi^{2}_{i}=\sum_{j=1}^{n_{meas}}\mathbf{\varepsilon}_{j}(\mathbf{q}_{i})^{\mathsf{T}} \mathbf{V}^{-1}_{j}\mathbf{\varepsilon}_{j}(\mathbf{q}_{i})+\sum_{j=2}^{n_{scat}-1}\mathbf{ \beta}_{j}(\mathbf{q}_{i})^{\mathsf{T}}\mathbf{W}^{-1}_{j}\mathbf{\beta}_{j}(\mathbf{q}_{i}) \tag{46}\] where \(\mathbf{\varepsilon}_{j}\) is the \(j\)-th hit residual with the position measurement covariance matrix \(\mathbf{V}_{j}\), and \(\mathbf{\beta}_{j}\) is the \(j\)-th scattering angle with the covariance matrix \(\mathbf{W}_{j}\)[21][22]. In the AMS global alignment, the global detector alignment parameters, \(\Delta\mathbf{p}\), and the local track parameters, \(\Delta\mathbf{q}\), of all tracks are determined simultaneously through a vast \(\chi^{2}\) minimization, taking account of both residual measurements and multiple-scattering effects: \[\chi^{2}(\mathbf{q},\mathbf{p})=\sum_{i=1}^{N_{track}}\ \left[\sum_{j=1}^{n_{meas}}\mathbf{ \varepsilon}_{ij}(\mathbf{q}_{i},\mathbf{p})^{\mathsf{T}}\mathbf{V}^{-1}_{ij}\mathbf{ \varepsilon}_{ij}(\mathbf{q}_{i},\mathbf{p})+\sum_{j=2}^{n_{scat}-1}\mathbf{\beta}_{ij}( \mathbf{q}_{i})^{\mathsf{T}}\mathbf{W}^{-1}_{ij}\mathbf{\beta}_{ij}(\mathbf{q}_{i})\right] \tag{47}\] Setting the partial derivatives of the \(\chi^{2}\) of Eq.(47) with respect to each global parameter and each local track parameter equal to zero leads to the matrix equation: \[\begin{pmatrix}\sum_{i}\mathbf{C}^{i}&\mathbf{G}^{1}&\dots&\mathbf{G}^{j}& \dots&\mathbf{G}^{N}\\ (\mathbf{G}^{1})^{\mathsf{T}}&\Gamma^{1}&\dots&\mathbf{0}&\dots&\mathbf{0}\\ \vdots&\vdots&\ddots&\vdots&\ddots&\vdots\\ (\mathbf{G}^{j})^{\mathsf{T}}&\mathbf{0}&\dots&\Gamma^{j}&\dots&\mathbf{0}\\ \vdots&\vdots&\ddots&\vdots&\ddots&\vdots\\ (\mathbf{G}^{N})^{\mathsf{T}}&\mathbf{0}&\dots&\mathbf{0}&\dots&\Gamma^{N} \end{pmatrix}\begin{pmatrix}\Delta\mathbf{p}\\ \Delta\mathbf{q}_{1}\\ \vdots\\ \Delta\mathbf{q}_{j}\\ \vdots\\ \Delta\mathbf{q}_{N}\end{pmatrix}=\begin{pmatrix}\sum_{i}\mathbf{d}^{i}\\ \mathbf{b}^{1}\\ \vdots\\ \mathbf{b}^{j}\\ \vdots\\ \mathbf{b}^{N}\end{pmatrix} \tag{48}\] see Appendix B for the definitions of matrices \(\mathbf{C}\), \(\mathbf{G}\), \(\mathsf{\Gamma}\) and vectors \(\mathbf{d}\), \(\mathbf{b}\) as well as the detailed calculation. The solution requires the inversion of the matrix of dimension \((n_{g}+N\cdot n_{l})^{2}\), where \(n_{g}\) is the number of global alignment parameters (up to \(\sim\)15 000 for the AMS tracker), \(N\) is the number of tracks used for the alignment (e.g. \(\sim\)10\({}^{9}\) tracks for the alignment with cosmic rays collected in flight), and \(n_{l}\) is the number of local parameters per track (e.g. up to 27 for the General Broken Lines algorithm [20] with 13 equivalent thin scatterers representing the AMS materials). The dimension of the inversion matrix for solving the global alignment parameters can be reduced to \(n_{g}^{2}\) by partitioning [23]. The constraints discussed in section 4 are added into the matrix via Lagrange multipliers. The matrix inversion is handled by the Pede program [24]. A presigma, which can be interpreted as an initial detector mounting precision, can be assigned to the diagonal matrix element of each alignment parameter to optimize the matrix solution in the program. 
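The reduction by partitioning can be sketched on a toy problem as follows (Python/NumPy; random small design matrices stand in for the derivative blocks of Eq.(48), and the tiny diagonal ridge is only there to keep the toy system regular; this is not the Pede implementation). Eliminating the local track parameters track by track via the Schur complement leaves an \(n_{g}\times n_{g}\) system whose solution coincides with that of the full bordered matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
n_g, n_l, n_meas, n_tracks = 8, 5, 12, 40   # toy sizes (the real n_g is ~15 000)

C  = 1e-3 * np.eye(n_g)                     # tiny ridge keeps the toy system regular
d  = np.zeros(n_g)
G, Gam, b = [], [], []
for _ in range(n_tracks):
    A_g = rng.normal(size=(n_meas, n_g))    # derivatives w.r.t. global parameters
    A_l = rng.normal(size=(n_meas, n_l))    # derivatives w.r.t. local track parameters
    y   = rng.normal(size=n_meas)           # residuals (unit covariance V here)
    C  += A_g.T @ A_g;  d += A_g.T @ y      # sum_i C^i, sum_i d^i of Eq.(48)
    G.append(A_g.T @ A_l)                   # G^j
    Gam.append(A_l.T @ A_l)                 # Gamma^j
    b.append(A_l.T @ y)                     # b^j

# Solve the full bordered system of Eq.(48) directly (feasible only for a toy case)
dim = n_g + n_tracks * n_l
M = np.zeros((dim, dim)); rhs = np.zeros(dim)
M[:n_g, :n_g] = C; rhs[:n_g] = d
for j in range(n_tracks):
    s = n_g + j * n_l
    M[:n_g, s:s+n_l] = G[j]; M[s:s+n_l, :n_g] = G[j].T
    M[s:s+n_l, s:s+n_l] = Gam[j]; rhs[s:s+n_l] = b[j]
dp_full = np.linalg.solve(M, rhs)[:n_g]

# Partitioning: eliminate the local parameters track by track (Schur complement),
# so only an n_g x n_g matrix has to be inverted for the global parameters.
Cr, dr = C.copy(), d.copy()
for j in range(n_tracks):
    GiG = G[j] @ np.linalg.inv(Gam[j])
    Cr -= GiG @ G[j].T
    dr -= GiG @ b[j]
dp_red = np.linalg.solve(Cr, dr)

print(np.allclose(dp_full, dp_red))         # True: identical global solution
```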
In principle, the matrix inversion for solving the global alignment parameters only needs to be performed once and no iterations are required. However due to potential inaccuracies in the solution of the large linear system and due to a required outlier (large residual events) treatment, a few internal iterations for the matrix inversion may be necessary. For the "Inversion" solution method in the Pede, 3 internal iterations are more than enough. The presigmas are always defined with respect to the previous iteration, hence alignment corrections significantly larger than the presigmas can still occur after iterations. In this sense, the presigmas are considered not to bias the result if enough iterations are performed but will impact the choice of the preferred solution among all possible candidates with similar \(\chi^{2}\), which will be discussed in more detail in the next section. Recently, the version of the Pede written in Fortran has been implemented to be compatible with multi-threading [25]. But it is still deficient in dealing with massive local parameters of billions of tracks (\(N\cdot n_{l}\sim 10^{9}\times 20\)) and a sizable number of global parameters (\(n_{g}\sim 15\ 000\)). This version of Pede is extended by the AMS collaboration to become fully parallelized using the OpenMP platform [26], which allows much faster I/O and computational processing. In particular, the most restricted I/O part is improved by replacement with the parallelized ROOT [27] I/O. Using CERN 64-CPU machines and the EOS storage system [28], it takes \(\sim\)30 hours to process the matrix inversion for 1 billion tracks with 3 internal iterations. ## 6 Alignment based on the 400 GeV/c proton test beam Each module of the AMS tracker has its own initial mechanical mounting precision varying from a few microns to thousands of microns: the assembly accuracy for a sensor in the ladder is \(\sim\)6 \(\upmu\)m, the mounting accuracy for a ladder on the layer is \(\sim\)70 \(\upmu\)m, the installation accuracy for an inner tracker layer is \(\sim\)40 \(\upmu\)m along \(x\) and \(y\) and \(\sim\)200 \(\upmu\)m along \(z\) while for an external layer it is \(\sim\)1000 \(\upmu\)m for \(x\), \(y\), and \(z\). A summary of the initial mounting precision can be found in Table 1 (a). The test-beam track alignment aims to reduce the module misalignment from all these sources down to a micron level for the rigidity measurement. Figure 6: Schematics of the nominal attitude of the AMS in the beam test: the \(z\)-axis of the AMS against the beam direction, the \(x\)-axis to the nadir, and the \(y\)-axis (pointing out of the page) parallel to the Earth. The densely packed lines represent the 886 directions of the primary 400 GeV/c proton test beam passing through AMS. ### Setup of the test beam During the beam test, AMS was installed on a rotation stand which allows the detector to be exposed to particles from different positions and directions. To minimize the potential deformation of the tracker planes as well as the contraction of the support structures due to gravity, the nominal attitude of the AMS illustrated in Fig. 6 was pointing to be the \(z\)-axis against the beam direction, the \(x\)-axis to the nadir (down), and the \(y\)-axis parallel to the Earth (horizontal), hence the positions of the tracker modules along the \(y\)-axis, i.e. the particle bending direction, was the least deformed. 
The track alignment is performed based on the primary 400 GeV/c proton beam, where the positions and orientations of the detector were adjusted 886 times to collect events in the full acceptance of the tracker as illustrated in Fig. 6. The beam spot size, defined as the spot radius to include 68% of events at each position, was rather narrow at \(\sim\)3.5 mm. With \(\sim\)10\({}^{4}\) events per position, the total collected number of events for the alignment was \(\sim\)10\({}^{7}\). Besides the normal data collection, AMS also collected a special dataset of the 400 GeV/c proton beam, in which the whole detector was rotated around the \(y\)-axis by 180\({}^{\circ}\) to examine the mechanical stability of the tracker, as illustrated in Fig. 7. There were 60 assigned beam positions for this configuration and the total number of the collected events was \(\sim\)10\({}^{6}\). This data is only used for the alignment verification purpose instead of being directly used in the test-beam alignment. ### Alignment procedure In the test-beam alignment, all the composite tracker modules are aligned simultaneously using the global composite alignment approach as discussed in sections 3, 4, and 5. The General Broken Lines (GBL) algorithm with fixed curvature (\(1/R=1/400\) GV\({}^{-1}\)) track fitting is imposed to derive the residuals \(\mathbf{\varepsilon}_{ij}(\mathbf{q}_{i}^{0},\mathbf{p}^{0})\), the partial derivatives with respect to the local track parameters of the residuals \(\partial\mathbf{\varepsilon}_{ij}/\partial\mathbf{q}_{i}\) and the scattering angles \(\partial\mathbf{\beta}_{ij}/\partial\mathbf{q}_{i}\), for Figure 7: Schematics of the detector deformations due to gravity for (a) nominal AMS and for (b) 180\({}^{\circ}\) rotated AMS in the beam test. Eq.(48), see also Eqs.(B.3)(B.5)(B.8)(B.9). The 400 GeV/c proton Monte Carlo sample produced by Geant4 [29] is used for the alignment optimization. #### 6.2.1 Presigmas in the alignment The external layers, L1 and L9, have much worse mounting accuracy than the inner tracker (L2-L8). At the small scale, the assembly accuracy of the sensors-in-ladders or ladders-on-layers for L1 and L9 are similar to that of the inner tracker. It means that the positions of the external layers in the sensor or ladder level can be treated equally as the inner tracker in the alignment and help to reduce the overall bias. But this can only be achieved by the composite alignment, where all the modules are defined relative to the next support structures and all the modules from the inner tracker and external layers are aligned together taking into account all the correlations. In the composite alignment, the presigmas of the layer alignment parameters for L1 and L9 are set to be more than 20 times larger than the inner tracker (see Table 1 (b)), while the presigmas of the sensor/ladder alignment parameters are assigned to be the same for every layer, so that the preferred alignment solution tends to correct the displacements of the whole external layers with reference to the position of the inner tracker. On the other hand, under similar conditions or \(\chi^{2}\), the solutions with displacements of the larger modules are preferred to the solutions with displacements of the smaller components. Presigmas of the alignment parameters can be properly adjusted to favor displacements of the larger modules. 
As seen in Table 1, the presigmas of the layer alignment parameters labeled \begin{table} \end{table} Table 1: (a) The initial mechanical mounting precision of the tracker modules and (b) the presigmas of the alignment parameters used in the test-beam alignment. The presigmas labeled ”-” indicate the parameters that cannot be precisely determined by the alignment due to the limited beam directions per sensor and therefore are fixed to 0. The presigmas labeled ”†” are significantly increased to approach the preferred solution. "!" are significantly increased compared with the layer mounting precision to strengthen the preference of the corrections on the layers rather than on the ladders. #### 6.2.2 Fixed parameters in the alignment With a total of 886 beam spots distributed over \(\sim\)250 sensors per layer, the average number of beam spots per sensor is \(\sim\)3. Due to the limited beam positions and directions, \(\sim\)75% of the sensors with the crucial sensor parameters of \(\Delta u_{s}\), \(\Delta v_{s}\), and \(\gamma_{s}\) can be aligned: for a sensor with a small number of passing events, \(<\)2000, \(\Delta u_{s}\), \(\Delta v_{s}\), and \(\gamma_{s}\) are fixed to 0; for a sensor with the passing beam spots close together, such as \(\sigma(u_{s})<\)10 mm and \(\sigma(v_{s})<\)12 mm, \(\gamma_{s}\) cannot be precisely determined and is fixed to 0, where \(\sigma\) represents the standard deviation. One ladder in L3 is completely inactive and its alignment parameters are fixed as \(\Delta u_{L}=\Delta v_{L}=\Delta w_{L}=\alpha_{L}=\beta_{L}=\gamma_{L}=0\). Another ladder in L4 is inactive on the \(n\)-side and its \(\Delta u_{L}\) is fixed to 0. For a ladder with the passing beams at small inclination angles and small position spanning along the \(v_{L}\)-axis, both \(\sigma(du_{L}^{p}/dw_{L}^{p}\cdot v_{L})<2.2\) mm and \(\sigma(dv_{L}^{p}/dw_{L}^{p}\cdot v_{L})<2.2\) mm, \(\alpha_{L}\) cannot be precisely obtained from the alignment and is fixed to 0, where \(du_{L}^{p}/dw_{L}^{p}\) and \(dv_{L}^{p}/dw_{L}^{p}\) are the beam projected directions in the ladder \(u_{L}w_{L}\)-plane and \(v_{L}w_{L}\)-plane respectively (see Fig. 3 (b)). Similarly, for a ladder with the passing beams of small inclination angles and small position spanning along the \(u_{L}\)-axis, both \(\sigma(du_{L}^{p}/dw_{L}^{p}\cdot u_{L})<7\) mm and \(\sigma(dv_{L}^{p}/dw_{L}^{p}\cdot u_{L})<7\) mm, \(\beta_{L}\) is fixed to 0. Table 2 summarizes the number of ladders and sensors with fixed alignment parameters. As seen, 39 ladders -- out of a total 192 ladders -- have the alignment parameter \(\alpha_{L}\) fixed. From Eq.(14), we can derive that the \(\alpha_{L}\) equivalent alignment corrections on the ladder hit position are \(du_{L}^{p}/dw_{L}^{p}\cdot v_{L}\cdot\alpha_{L}\) and \(dv_{L}^{p}/dw_{L}^{p}\cdot v_{L}\cdot\alpha_{L}\) for the \(u_{L}\)- and \(v_{L}\)-projections respectively. Assuming the particle incident angle \(du_{L}^{p}/dw_{L}^{p}\) (or \(dv_{L}^{p}/dw_{L}^{p})=0.3\), for the hit with the largest \(v_{L}=35\) mm at the ladder edge, a typical mounting precision of \(\sigma(\alpha_{L})=0.3\) mrad (see Table 1 (a)) or a fixed \(\alpha_{L}=0\) will introduce a misalignment of 3.15 \(\upmu\)m, which is a small inaccuracy. This is also the case for sensor alignment parameters of \(\Delta w_{s}\), \(\alpha_{s}\), and \(\beta_{s}\) fixing them in the alignment will not result in a sizable misalignment. 
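The size of this inaccuracy follows from simple arithmetic, e.g. (Python, with the values assumed above):

```python
# Misalignment induced on the ladder hit position by a fixed alpha_L = 0, using the
# values assumed in the text: slope 0.3, v_L = 35 mm, sigma(alpha_L) = 0.3 mrad
slope, v_L, alpha_L = 0.3, 35.0, 0.3e-3          # [-], [mm], [rad]
print(slope * v_L * alpha_L * 1e3)               # 3.15 micrometers
```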
Owing to a good sensor assembly precision of \(\sigma(\gamma_{s})=0.1\) mrad, a fixed \(\gamma_{s}=0\) for part of sensors will also \begin{table} \begin{tabular}{c c c c c c} \multicolumn{6}{c}{(a) Number of fixed ladder alignment parameters} \\ \hline \(\Delta u_{L}\) & \(\Delta v_{L}\) & \(\Delta w_{L}\) & \(\alpha_{L}\) & \(\beta_{L}\) & \(\gamma_{L}\) \\ \hline 2 & 1 & 1 & 39 & 2 & 1 \\ \hline \multicolumn{6}{c}{(b) Number of fixed sensor alignment parameters} \\ \hline \(\Delta u_{s}\) & \(\Delta v_{s}\) & \(\Delta w_{s}\) & \(\alpha_{s}\) & \(\beta_{s}\) & \(\gamma_{s}\) \\ \hline 734 & 572 & 2284 & 2284 & 2284 & 1276 \\ \hline \end{tabular} \end{table} Table 2: The number of ladders (a) and sensors (b) with fixed parameters in the test-beam alignment. Note that the AMS tracker has 192 ladders and 2284 sensors. give a small misalignment of up to \(|v_{s}\gamma_{s}|=3.5\)\(\upmu\)m (\(v_{L}=35\) mm) and \(|u_{s}\gamma_{s}|=1.9\)\(\upmu\)m (\(u_{L}=19\) mm) for the \(u_{L}\)- and \(v_{L}\)-projections respectively. ### Alignment results The alignment parameters obtained from the test-beam alignment are shown in Fig. 8. As seen, the external layers, L1 and L9, have much larger layer-biases both in translations and rotations compared with the layers of the inner tracker. Other than that, no significant large outliers on the alignment parameters occur. Figure 9 shows the residual distributions of the 9 layers in the sensor \(v_{s}\) direction before and after the test-beam alignment. A large improvement of the residual distributions is obvious. Figure 10 shows the residual biases of all sensors before and after the alignment. As seen, there is no bias in each sensor after the alignment. Even taking into account the limited beam positions and directions, the overall misalignment in the \(v_{s}\) direction for the rigidity measurement is 1-2 \(\upmu\)m. ### Mechanical stability study with the 180\({}^{\circ}\) runs The test-beam alignment is done based on the nominal data where the AMS \(z\)-axis is against the beam direction and the \(x\)-axis is to the nadir as illustrated in Fig. 7 (a). The obtained alignment corrections are then applied to the data collected with the whole detector rotated around the \(y\)-axis by 180\({}^{\circ}\) where now the \(z\)-axis is along the beam direction and the \(x\)-axis is pointing to the zenith as illustrated in Fig. 7 (b). After rotation, Figure 8: The distributions of the alignment parameters of layers (top row), ladders (middle row), and sensors (bottom row) obtained from the test-beam alignment. The fixed alignment parameters are not included. Figure 10: The residual biases of the individual sensors in (a) the \(u_{s}\) direction and (b) the \(v_{s}\) direction before (open squares) and after (full circles) the test-beam alignment. A circle or square represents the residual bias of each sensor. The circles or squares of a common group are the sensors from the same half of a tracker layer. The sensor ID is defined as \((sensor+20\times ladder+400\times layer)\times half\), where \(sensor\) is the sensor number [1...15], \(ladder\) is the ladder number [1...13], \(layer\) is the layer number [0...8], and \(half\) is \(-1\) for the ladders located on the negative half (\(u_{0L}<0\)) and \(+1\) on the positive half (\(u_{0L}>0\)) of a layer. Figure 9: The residual distributions of the individual layers in the sensor \(v_{s}\) direction before (dashed histograms) and after (solid histograms) the test-beam alignment. 
there is a significant bias of each sensor in the sensor \(u_{s}\) direction (along or opposite to the \(x\)-axis), while the residual bias of each sensor in the sensor \(v_{s}\) direction (along or opposite to the \(y\)-axis) is tiny as shown in Fig. 11 (b). This clearly indicates the displacement induced by gravity whose direction is parallel to the \(x\)-axis. Compared with the inner tracker support structure, a carbon fiber cylinder, the support structures of the external layers, the TRD M-Structure and the Unique Support Structure, are made from aluminum, which is much less stiff. The resulting detector deformations due to gravity before and after the detector rotation are illustrated in Fig. 7 (a) and (b) respectively. As seen, when the direction of gravity was switched from along to opposite to the AMS \(x\)-axis, the most prominent changes of the external layer positions in the tracker frame are expected to be the layer translation along the \(x\)-axis (e.g. from \(\Delta u_{P}^{\rm L1}>0\) to \(\Delta u_{P}^{\rm L1}<0\) for L1) and the layer rotation around the \(y\)-axis (e.g. from \(\beta_{P}^{\rm L9}<0\) to \(\beta_{P}^{\rm L9}>0\) for L9). To justify this reasoning, an additional alignment to correct the displacements of the external layers is performed to the \(180^{\circ}\) runs, where all the alignment parameters on the sensors Figure 11: The residual biases in (a) the \(u_{s}\) and (b) the \(v_{s}\) directions of the individual sensors of the \(180^{\circ}\) test-beam runs using the alignment corrections from the nominal runs but before (open squares) and after (full circles) the additional alignment on the external layers. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Displacement} & \multicolumn{3}{c}{Translation (\(\mu\)m)} & \multicolumn{3}{c}{Rotation (mrad)} \\ \cline{2-7} & \(\Delta u_{P}\) & \(\Delta v_{P}\) & \(\Delta w_{P}\) & \(\alpha_{P}\) & \(\beta_{P}\) & \(\gamma_{P}\) \\ \hline L1 & -200 & -1 & -42 & -0.016 & 0.297 & 0.002 \\ L9 & -580 & -2 & 96 & 0.038 & 1.253 & -0.007 \\ \hline \hline \end{tabular} \end{table} Table 3: The displacements of L1 and L9 introduced by \(180^{\circ}\) detector rotation obtained from the test-beam alignment. and ladders as well as the layers of the inner tracker are fixed to be the same as the nominal runs except the layer alignment parameters of L1 and L9 which are left free. The obtained relative changes (180\({}^{\circ}\) with respect to the nominal) of the layer alignment parameters of L1 and L9 are shown in Table 3. As seen, when reversing the gravity load in the \(x\)-direction, the largest translation displacements are along the \(x\)-axis, \(-200\)\(\upmu\)m and \(-580\)\(\upmu\)m for L1 and L9 respectively, and the largest rotation displacements are around the \(y\)-axis, 0.297 mrad and 1.253 mrad for L1 and L9 respectively. The translation displacement along the \(y\)-axis, which is the most critical direction, namely the particle bending direction, is the smallest, \(-1\)\(\upmu\)m and \(-2\)\(\upmu\)m for L1 and L9 respectively. Most strikingly, with the alignment only on the external layers, all the major structures of the sensor residual biases disappear and the remaining deviations are within 2 \(\upmu\)m as shown in Fig. 7. This demonstrates that the major outcome of the tracker deformation due to gravity in the beam test is the rigid-body displacement of the external layers. 
With the 180\({}^{\circ}\) runs, the alignment has been verified, the inner tracker support structure has been proved to be rigid, and significant movements induced by gravity of the external layers as rigid bodies have been observed. ## 7 Dynamic alignment of the external tracker layers in space After AMS was launched into space, we found that the positions of ladders and sensors were permanently changed up to tens of microns compared to their positions on the ground. In addition, the continuous temperature variations on orbit, through the thermal deformation of the support structures, cause the periodic movements of the whole external layers at hundreds of microns per half-obit (\(\sim\)46 min). The first kind of displacement is corrected by the static alignment with billions of cosmic-ray events, which will be discussed in section 8. The second kind of displacement is corrected by the dynamic alignment with instantaneously collected cosmic-ray events and will be reported in this section. Prior to the static alignment, the dynamic alignment should be applied to remove large periodic movements of the external layers and decrease the inaccuracy of the external tracker layers to the same level as that of the inner tracker. ### Thermal environment and data collection on orbit The ISS orbits the Earth every 93 minutes with an orbital inclination of 52\({}^{\circ}\). The thermal environment of AMS on the ISS has both short-term and long-term variations. The regular short-term variation is the periodic temperature cycle that follows orbital day and night transition. The long-term variation is mainly due to the change of the angle between the ISS orbital plane and the direction to the Sun, or solar beta angle, which has a precession period of 60 days and can reach up to \(\pm 75^{\circ}\). Other thermal variables such as the positions of the ISS radiators and solar arrays, ISS attitude changes for visiting vehicles and reboosts, and shading of AMS by adjacent payloads can also have a big influence on the temperature changes at different time scales, from minutes to months. The sensor positions with respect to the carbon fiber reinforced ladders should not change over time, as carbon fiber has near zero coefficient of thermal expansion. Likewise, the positions of ladders on the carbon fiber skinned planes is stable. The positions of the inner tracker layers should also not change as their planes are firmly embedded in the carbon fiber cylinder. However, the variation of the temperature and gradients across the aluminum mechanical structures (mainly the TRD M-Structure and the Unique Support Structure) lead to continuous periodic movements of the external layers, which are corrected by the dynamic alignment using the concurrently collected cosmic-ray events, mainly protons and helium. In flight, the AMS event trigger rates vary from 200 Hz near the equator to \(\sim\)2000 Hz near the Earth's magnetic poles. The average event acquisition rate is \(\sim\)700 Hz. The events from each quarter of the ISS orbit (from near the pole to the equator or vice versa), about 23 minutes, are arranged in sequence as one run. Detector hardware calibrations are done between runs and last up to two minutes. ### Alignment procedure In the dynamic alignment, only the rigid-body movements of the external layers are considered. 
In this case, there are a total of 12 alignment parameters, 6 for L1 of \((\Delta u_{P}^{\rm L1},\Delta v_{P}^{\rm L1},\Delta w_{P}^{\rm L1},\alpha_{P}^{\rm L1},\beta_{P}^{\rm L1},\gamma_{P}^{\rm L1})^{\rm T}\) and 6 for L9 of \((\Delta u_{P}^{\rm L9},\Delta v_{P}^{\rm L9},\Delta w_{P}^{\rm L9},\alpha_{P}^{\rm L9},\beta_{P}^{\rm L9},\gamma_{P}^{\rm L9})^{\rm T}\). For a short time interval with a finite number of cosmic-ray events, which are mostly at low rigidities [7][30], the main constraint on the alignment precision of an external layer comes from the multiple scattering due to the materials between L1 and L2, \(\sim\)0.3 \(X_{0}\), or between L8 and L9, \(\sim\)0.2 \(X_{0}\) (see Fig. 1). As an example, for a particle with 10 GV rigidity, the average scattering angle between L1 and L2 is \(\sim\)0.7 mrad, which corresponds to \(\sim\)700 \(\upmu\)m smearing on the L1 position using the 1 m extrapolation from the inner tracker. Since the multiple scattering and the resulting position smearing decrease in proportion to \(1/R\) [21], the efficient usage of cosmic-ray events, particularly those at high rigidities, is critical for the precision of the dynamic alignment.

#### 7.2.1 Dynamic alignment in a short-time window

The developed global alignment approach as discussed in sections 3 and 5 is applied for the dynamic alignment. The GBL algorithm with free curvature (inverse rigidity, \(1/R\)) track fitting is used to derive the residuals \(\mathbf{\varepsilon}_{ij}(\mathbf{q}_{i}^{0},\mathbf{p}^{0})\), the partial derivatives with respect to the local track parameters of the residuals \(\partial\mathbf{\varepsilon}_{ij}/\partial\mathbf{q}_{i}\) and the scattering angles \(\partial\mathbf{\beta}_{ij}/\partial\mathbf{q}_{i}\), for Eq.(48), see also Eqs.(B.3)(B.5)(B.8)(B.9). The event sample used for the dynamic alignment is required to have a reconstructed track and hits on the external layers. The crucial ingredient for the dynamic alignment accuracy, the covariance matrix of the scattering angle, \(\mathbf{W}_{ij}\propto 1/R_{i}^{2}\) in Eq.(47), can be calculated iteratively event by event using the measured rigidity with the following alignment procedure:

1. Initialize \(\mathbf{W}_{ij}(R_{i})\) event by event using the rigidity measured from the inner tracker.
2. Determine the alignment parameters of L1 and L9 by minimization of Eq.(47).
3. Recalculate \(\mathbf{W}_{ij}(R_{i})\) event by event by replacing the rigidity with the new one measured from both the inner tracker and the external layers, including the latest alignment corrections from step 2.
4. Repeat steps 2 and 3 until all the alignment parameters converge.

An isotropic cosmic-ray Monte Carlo (MC) sample produced by Geant4 [29][7][30] is used for validation of the alignment. For every 100 000 events, the positions of the external layers in the MC are randomly displaced, which is then followed by a dynamic alignment. Figure 12 shows the misalignments of L1 and L9 before and after the alignment derived directly from the MC. As seen, with the alignment, the misalignments are reduced from more than a thousand microns down to a few microns. For the flight data, this alignment is performed in time-slices of \(\Delta_{t}\approx 90\) sec. The set of alignment parameters obtained in each time-slice has significant statistical errors, which are further reduced by combining the alignment results from the nearby time-slices via a custom developed smoothing procedure described in the next section.
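The structure of the iterative weight update described above can be sketched with a self-contained toy model (Python/NumPy; a single external-layer offset, an invented scattering smearing \(\propto 1/R\), and an invented coupling between the uncorrected offset and the measured rigidity; none of this is the AMS reconstruction). The weights are rebuilt from the latest rigidity estimate at every pass, and the fitted offset converges within a few iterations.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model of the short-time dynamic alignment of one external-layer offset.
# Each event: a true rigidity R [GV] and a residual = true offset + multiple-
# scattering noise whose sigma scales as 1/R (all numbers invented).
n_ev         = 5000
offset_true  = 0.350                 # true layer offset [mm]
sigma_ms_1gv = 7.0                   # toy scattering smearing at 1 GV [mm]
R_true       = np.exp(rng.uniform(np.log(1), np.log(100), n_ev))   # 1-100 GV
resid        = offset_true + rng.normal(0, sigma_ms_1gv / R_true)

# The rigidity entering the weights is distorted by the uncorrected offset
# (toy coupling constant); it is recomputed after every alignment update.
coupling = 0.5                       # [GV^-1 per mm of uncorrected offset], invented
def measured_rigidity(offset_correction):
    bias = coupling * (offset_true - offset_correction)
    return 1.0 / np.clip(1.0 / R_true + bias, 1e-3, None)

offset_hat = 0.0
for iteration in range(10):
    R_meas = measured_rigidity(offset_hat)           # steps 1/3: rebuild W_ij(R)
    w = (R_meas / sigma_ms_1gv) ** 2                 # weights ~ 1/sigma_scat^2
    new_hat = np.sum(w * resid) / np.sum(w)          # step 2: weighted fit of the offset
    if abs(new_hat - offset_hat) < 1e-3:             # step 4: stop when update < 1 um
        break
    offset_hat = new_hat

print(iteration, offset_hat)         # recovers ~0.35 mm after a few passes
```

The toy coupling between the offset and the measured rigidity only mimics the fact that the external-layer hits enter the rigidity measurement; in the actual procedure this dependence comes from the full track fit.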
Figure 12: Misalignments of L1 (a, b) and L9 (c, d) before (dashed histograms) and after (solid histograms) the dynamic alignment derived from the MC simulation. For every 100 000 simulated events passing through either L1 or L9, the positions of the external layers in the MC are randomly displaced, which is then followed by a dynamic alignment. One entry of misalignment in each histogram corresponds to one set of displacements of the external layers. With the alignment, the misalignments projected to \(x\) (a, c) and \(y\) (b, d) coordinates are reduced from more than a thousand microns down to 2.8 \(\upmu\)m for L1 and 3.8 \(\upmu\)m for L9. #### 7.2.2 Alignment smoothing for long time period After the short-time dynamic alignment, for a given time period of \(N\Delta_{t}\), \(N\) sets of alignment parameters are smoothed as functions of time to describe the differential movements of the external layers. To fully exploit the alignment information, the time period for each smoothing should be as long as possible, but that introduces too many fitting parameters to solve. Instead, in our approach, the entire 10 years is divided into small overlapping time segments of a few hours, where the alignment data in each time segment are smoothed by a spline function [31], as illustrated in Fig. 13: 1. Each spline function has up to 40 knots (indicated as vertical lines in Fig. 13) which are distributed over time with an equal number of data points per knot. 2. The neighboring segments overlap (share) 6 knots of the alignment data (Fig. 13 dashed vertical lines' region). 3. If there is a data gap (more than 1 hour), the new segment will restart once the next alignment data appears. To achieve the minimal alignment error, the assignment of the knots is critical: the more alignment-data points per knot, the smaller the statistical error but the larger the systematic error; while the fewer data points per knot, the smaller the systematic error but the larger the statistical error. In the short-time dynamic alignment, the error of each alignment parameter for each time slice (\(\Delta_{t}\)), \(\sigma_{t}\), is estimated from error propagation, which has a small bias depending on (a) the track fitting model along with the assessment of errors on the multiple scattering Figure 13: Illustration of the spline smoothing for describing the variation of the dynamic alignment parameter \(\Delta u_{P}^{L1}\) over time. The entire time block is divided into several smaller overlapping time segments, where the alignment data (points) in each segment are smoothed by a spline function (curve) as indicated. The distribution of knots of the spline is indicated by the vertical lines including those knots shared with the neighboring splines (dashed vertical lines). and coordinate resolution and (b) the intrinsic correlation among the alignment parameters. A correction factor \(k\), which scales the alignment parameter error to the true one as \(k\sigma_{t}\), can be derived from the alignment data over a long time period (\(N_{0}\Delta_{t}\)) by bootstrapping: \[k=\sqrt{\frac{\chi_{0}^{2}}{n_{0}}}=\sqrt{\frac{\chi_{0}^{2}}{N_{0}-m_{0}}} \tag{49}\] where \(\chi_{0}^{2}\), \(m_{0}\), and \(n_{0}=N_{0}-m_{0}\) are the fitting chi-square, number of knots, and degrees of freedom, for the spline fitting to \(N_{0}\) data points with a sufficient number of knots to reach a negligible systematic error. 
In view of the observed rate of the external-layer movement, every 2 data points or 180 sec per knot (\(m_{0}=N_{0}/2\)) is enough to derive \(k\). For a spline fit to \(N\) data points with a given number of data points per knot, the total alignment error after smoothing is the sum in quadrature of the statistical and systematic errors: \[\sigma_{tot}=\sqrt{\sigma_{stat}^{2}+\sigma_{sys}^{2}}=\sqrt{k^{2}\sigma_{fit}^{2}+\Big{(}\frac{\chi^{2}}{n}-k^{2}\Big{)}\sigma_{t}^{2}} \tag{50}\] where \(\sigma_{fit}\), \(\chi^{2}\), and \(n\) are the fitting error, chi-square, and degrees of freedom respectively, \(\sigma_{stat}=k\sigma_{fit}\) is the statistical error, which decreases with an increasing number of data points per knot, and \(\sigma_{sys}=\sigma_{t}\sqrt{\chi^{2}/n-k^{2}}\) is the systematic error, which increases with an increasing number of data points per knot. The smoothing of the external layer movement is optimized by assigning the knots with the minimal total error of Eq.(50) for every alignment parameter.

### Alignment results

The total errors of the individual alignment parameters as functions of the number of data points per knot, calculated over 10 years from Eq.(50), are shown in Fig. 14. Figure 14: The total errors of the dynamic alignment parameters as functions of number of data points per knot, calculated over 10 years from Eq.(50). The error bars in each plot represent the standard deviations of the alignment errors arising from the time dependence. Accordingly, the time intervals between adjacent knots for the spline smoothings with the minimal alignment errors are summarized in Table 4 (a). As seen, compared with rotations, the translations require more dense knots to trace their variations, indicating that the movements of the external layers in terms of translations are more rapid than in terms of rotations. Typical variations of the individual alignment parameters over a day together with the smoothings are shown in Fig. 15. The orbital period of \(\sim\)93 minutes can be clearly seen. As shown in the figure, the movement of L1 (L9) in terms of translation is \(\sim\)200 \(\upmu\)m, \(\sim\)100 \(\upmu\)m, and \(\sim\)200 \(\upmu\)m (\(\sim\)100 \(\upmu\)m, \(\sim\)20 \(\upmu\)m, and \(\sim\)200 \(\upmu\)m) per half orbit in the \(x\)-, \(y\)-, and \(z\)-directions (strictly the \(u_{P}\)-, \(v_{P}\)-, and \(w_{P}\)-directions) respectively and of rotation is \(\sim\)0.2 mrad, \(\sim\)0.2 mrad, and \(\sim\)0.03 mrad (\(\sim\)0.2 mrad, \(\sim\)0.1 mrad, and \(\sim\)0.05 mrad) per half orbit around the \(x\)-, \(y\)-, and \(z\)-axes (strictly the \(u_{P}\)-, \(v_{P}\)-, and \(w_{P}\)-axes) respectively. In addition to the orbital movements, the external layers also display long-term movements with a cycle of about 2 months -- the period of the solar beta angle. Figures 16 and 17 show the variations of the individual alignment parameters of L1 and L9 respectively, over 10 years from May 20, 2011 to May 20, 2021, where each data point represents the alignment parameter averaged over a day. As seen, the long-term movements of L1 (L9) translations are up to \(\sim\)1000 \(\upmu\)m, \(\sim\)200 \(\upmu\)m, and \(\sim\)300 \(\upmu\)m (\(\sim\)300 \(\upmu\)m, \(\sim\)100 \(\upmu\)m, and \(\sim\)700 \(\upmu\)m) per month in the \(x\)-, \(y\)-, and \(z\)-directions respectively and the rotations can reach \(\sim\)0.2 mrad, \(\sim\)0.6 mrad, and \(\sim\)0.02 mrad (\(\sim\)0.4 mrad, \(\sim\)0.5 mrad, and \(\sim\)0.03 mrad) per month around the \(x\)-, \(y\)-, and \(z\)-axes respectively.
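The statistical/systematic balance expressed by Eq.(50) can be illustrated with a toy time series (Python/NumPy; an invented orbital-like variation plus Gaussian noise, with simple bin means standing in for the spline smoothing). The error-scaling factor \(k\) is taken from a fine-knot fit as in Eq.(49), and the total error of Eq.(50) is then evaluated for several knot spacings, exhibiting an optimum at intermediate spacing.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy time series of one alignment parameter: a slow orbital-like variation plus
# Gaussian noise sigma_t per 90-second time-slice (all numbers invented).
N, sigma_t = 2000, 60.0                        # number of slices, per-slice error [um]
t = np.arange(N)
data = 150.0 * np.sin(2 * np.pi * t / 62.0) + rng.normal(0, sigma_t, N)

def smooth(pts_per_knot):
    """Crude piecewise smoothing (bin means standing in for the spline fit)."""
    edges = np.append(np.arange(0, N, pts_per_knot), N)
    fit = np.concatenate([np.full(b - a, data[a:b].mean())
                          for a, b in zip(edges[:-1], edges[1:])])
    chi2 = np.sum((data - fit) ** 2) / sigma_t ** 2
    n_dof = N - (len(edges) - 1)               # data points minus number of knots
    return chi2, n_dof

# Error-scaling factor from a fine-knot fit, Eq.(49): k = sqrt(chi2_0 / n_0)
chi2_0, n_0 = smooth(2)
k = np.sqrt(chi2_0 / n_0)

# Total error of Eq.(50) as a function of the number of data points per knot
for ppk in (2, 4, 8, 16, 32, 64):
    chi2, n_dof = smooth(ppk)
    sigma_stat = k * sigma_t / np.sqrt(ppk)    # statistical part (approximate fit error)
    sigma_sys2 = max(chi2 / n_dof - k ** 2, 0.0) * sigma_t ** 2
    print(ppk, round(np.sqrt(sigma_stat ** 2 + sigma_sys2), 1))
# the minimal total error lies at an intermediate knot spacing
```

Bin means are used here only to keep the sketch short; the actual procedure uses spline fits with the knot assignment optimized per alignment parameter.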
Figure 15: The variations of the dynamic alignment parameters of L1 (left column) and L9 (right column) over 24 hours on Dec. 17, 2015. The final achieved alignment precision for all the alignment parameters derived from Fig. 14 is summarized in Table 4 (b). As seen, for example, with the dynamic alignment, the translational movement in the \(y\)-direction (\(\Delta v_{P}\)) is aligned to a precision of 6.8 \(\upmu\)m for L1 and 7.6 \(\upmu\)m for L9. To evaluate the total residual misalignments of the external layers in the particle bending direction which is connected to the rigidity resolution, the rigidity measured using the upper span of the tracker, namely from L1 to L8 (\(R_{18}\)), are compared to the rigidity measured using the lower span, namely from L2 to L9 (\(R_{29}\)), for a helium sample with the full-span rigidity (measured from L1 to L9) \(R_{19}>570\) GV. Figure 18 shows the Gaussian sigma of the \(1/R_{18}-1/R_{29}\) distribution derived from the flight data (full circle) and its fit to the prediction from the MC simulation (line). As seen, with the dynamic alignment, the total residual misalignments (alignment errors) on the rigidity measurement are estimated to be 7.1 \(\upmu\)m for L1 and 7.9 \(\upmu\)m for L9. Figure 16: The variations of the dynamic alignment parameters of L1 over 10 years from May 20, 2011 to May 20, 2021. Note the change in behavior starting from the end of 2015, which is due to the installation of a thermal blanket on the port (\(-x\)) side of AMS on Oct 28, 2015 (indicated by the vertical dashed line). ## 8 Static alignment of the tracker in space Before launch, AMS has been aligned based on the primary 400 GeV/c proton test beam as discussed in section 6. However, the strong accelerations and vibrations during launch, followed by the rapid outgassing of the support structure in vacuum permanently changed the positions of all the tracker modules. Therefore, the entire tracker has to be aligned again with cosmic-ray events to correct the resulting displacements. The most challenging part of this alignment is the unknown curvatures (\(1/R\)) of the incoming particles in the presence of the magnetic field. A track alignment approach similar to the test-beam alignment but with free curvature track fitting (see Eq.(47)) is not enough for such an alignment as the curvatures of the tracks can be biased by any value without changing the alignment \(\chi^{2}\). The development of a new mathematical description is required for such an alignment. Figure 17: The variations of the dynamic alignment parameters of L9 over 10 years from May 20, 2011 to May 20, 2021. Note the change in behavior starting from the end of 2015, which is due to the installation of a thermal blanket on the port (\(-x\)) side of AMS on Oct 28, 2015 (indicated by the vertical dashed line). ### Global track alignment with curvature constraints For the alignment with a magnetic field and with particles whose rigidities are unknown, a new term, \(\rho_{i}^{2}(\mathbf{p})/Z_{i}\), is introduced in the global alignment \(\chi^{2}\) to constrain the curvature \begin{table} \end{table} Table 4: (a) The time intervals between adjacent knots used for the spline smoothings of the individual alignment parameters that provide (b) the best dynamic alignment precision. 
Figure 18: The standard deviation of the difference in the inverse rigidities measured using the upper span (L1–L8) and using the lower span (L2–L9) of the tracker, \(\sigma(1/R_{18}-1/R_{29})\), for cosmic-ray helium data with the alignment corrections (full circle) and for the Monte Carlo prediction based on the alignment errors of L1 and L9 (line) in the rigidity range \(R_{19}>570\) GV. As seen, the data point best matches the Monte Carlo prediction at the alignment errors of \(7.1~{}\upmu\)m and \(7.9~{}\upmu\)m for L1 and L9 respectively. change: \[\chi^{2}(\mathbf{q},\mathbf{p})=\sum_{i=1}^{N_{track}}\left[\sum_{j=1}^{n_{meas}} \mathbf{\varepsilon}_{ij}(\mathbf{q}_{i},\mathbf{p})^{\mathsf{T}}\mathbf{V}_{ij}^{-1}\mathbf{ \varepsilon}_{ij}(\mathbf{q}_{i},\mathbf{p})+\sum_{j=2}^{n_{scat}-1}\mathbf{\beta}_{ij}(\bm {q}_{i})^{\mathsf{T}}\mathbf{W}_{ij}^{-1}\mathbf{\beta}_{ij}(\mathbf{q}_{i})+\frac{ \rho_{i}^{2}(\mathbf{p})}{Z_{i}}\right] \tag{51}\] where \(\rho_{i}(\mathbf{p})=\rho_{i}(\mathbf{p}^{0})+\sum_{g^{\prime}}\frac{\partial\rho_{i}} {\partial p_{g^{\prime}}}\Delta p_{g^{\prime}}\) is the curvature bias (\(\Delta R^{-1}\)) for the \(i\)-th track, that depends on the global alignment parameters \(\Delta\mathbf{p}\), and is equal to \(\rho_{i}(\mathbf{p}^{0})\) before the alignment, namely \(\Delta\mathbf{p}=\mathbf{0}\); and \(Z_{i}\) is its variance. \(Z{\rightarrow}0\) will impose no change of the curvature measurement before and after the alignment. Conversely, \(Z{\rightarrow}\infty\) means no curvature constraints in the alignment, making Eq.(51) the same as Eq.(47). In the absence of a curvature reference, the measured curvature of a track is supposed to have no bias before the alignment, as \(\rho_{i}(\mathbf{p}^{0})=0\), with an uncertainty represented by the variance (squared error) \(Z_{i}\). Setting the partial derivative of the \(\chi^{2}\) of Eq.(51) with respect to each global parameter \(\Delta p_{g}\) equal to zero, we can derive a matrix equation similar to Eq.(B.2), as: \[\sum_{i=1}^{N_{track}}\mathbf{d^{\prime}}^{i}=\Big{(}\sum_{i=1}^{N_{track}}\mathbf{ C}^{\prime i}\Big{)}\Delta\mathbf{p}+\sum_{i=1}^{N_{track}}\mathbf{G}^{i}\Delta \mathbf{q}_{i} \tag{52}\] where \(\mathbf{d^{\prime}}^{i}\) is a vector whose \(g\)-th element is given by: \[d^{\prime i}_{\ g}=-\sum_{j=1}^{n_{meas}}\Big{(}\frac{\partial\mathbf{\varepsilon }_{ij}}{\partial p_{g}}\Big{)}^{\mathsf{T}}\mathbf{V}_{ij}^{-1}\mathbf{\varepsilon }_{ij}(\mathbf{q}_{i}^{0},\mathbf{p}^{0})-\frac{\partial\rho_{i}}{\partial p_{g}}Z_{i} ^{-1}\rho_{i}(\mathbf{p}^{0}) \tag{53}\] \(\mathbf{C}^{\prime i}\) is a matrix whose \((g,g^{\prime})\) entry is given by: \[{C^{\prime}}^{i}_{\ gg^{\prime}}=\sum_{j=1}^{n_{meas}}\Big{(}\frac{\partial \mathbf{\varepsilon}_{ij}}{\partial p_{g}}\Big{)}^{\mathsf{T}}\mathbf{V}_{ij}^{-1} \frac{\partial\mathbf{\varepsilon}_{ij}}{\partial p_{g^{\prime}}}+\frac{\partial \rho_{i}}{\partial p_{g}}Z_{i}^{-1}\frac{\partial\rho_{i}}{\partial p_{g^{ \prime}}} \tag{54}\] and \(\mathbf{G}^{i}\) is the matrix whose entry has been defined in Eq.(B.5). Setting the partial derivative of the \(\chi^{2}\) of Eq.(51) with respect to each local track parameter of each track equal to zero, we obtain the same matrix equation as Eq.(B.7). 
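For illustration, the per-track terms of Eqs.(53) and (54) can be assembled as follows; the array shapes are hypothetical (two-dimensional hit residuals are assumed), and the curvature ingredients \(\partial\rho_{i}/\partial\mathbf{p}\) and \(Z_{i}\) are taken as given inputs, since they are derived in Eqs.(60)-(63) below.

```python
import numpy as np

def track_terms_with_curvature(deps_dp, V, eps0, drho_dp, Z, rho0=0.0):
    """Per-track d'^i (Eq. 53) and C'^i (Eq. 54) for the curvature-constrained
    global alignment chi-square of Eq.(51).

    deps_dp : (n_meas, 2, n_global)  d(residual)/d(global parameters) per hit
    V       : (n_meas, 2, 2)         hit covariance matrices
    eps0    : (n_meas, 2)            residuals before alignment
    drho_dp : (n_global,)            d(curvature)/d(global parameters), Eq.(61)
    Z       : float                  curvature variance, Eq.(62)/(63)
    rho0    : float                  curvature bias before alignment (0 in the 1st pass)
    """
    n_global = deps_dp.shape[2]
    d = np.zeros(n_global)
    C = np.zeros((n_global, n_global))
    for A, Vj, e in zip(deps_dp, V, eps0):      # loop over hits j
        Vinv = np.linalg.inv(Vj)
        d -= A.T @ Vinv @ e                     # measurement part of Eq.(53)
        C += A.T @ Vinv @ A                     # measurement part of Eq.(54)
    d -= drho_dp * (rho0 / Z)                   # curvature-constraint part of Eq.(53)
    C += np.outer(drho_dp, drho_dp) / Z         # curvature-constraint part of Eq.(54)
    return d, C
```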
Combining Eq.(B.7) and Eq.(52), all the global alignment parameters, \(\Delta\mathbf{p}\), and all the local track parameters, \(\Delta\mathbf{q}\), can be solved simultaneously as in Eq.(48) with the replacement of \(\mathbf{d}^{i}\rightarrow\mathbf{d^{\prime}}^{i}\) and \(\mathbf{C}^{i}\rightarrow\mathbf{C}^{\prime i}\). The partial derivatives of the curvature change with respect to the global alignment parameters, \(\partial\rho_{i}/\partial\mathbf{p}\), present in both \(\mathbf{d^{\prime}}^{i}\) of Eq.(53) and \(\mathbf{C^{\prime}}^{i}\) of Eq.(54), are needed for the alignment. For the \(i\)-th track, the alignment corrections \(\Delta\mathbf{p}\) will change the (\(j\)-th) hit residual by: \[\widetilde{\mathbf{\varepsilon}}_{ij}^{0}=\sum_{g^{\prime}}\frac{\partial\mathbf{ \varepsilon}_{ij}}{\partial p_{g^{\prime}}}\Delta p_{g^{\prime}} \tag{55}\] A track fitting is performed on \(\widetilde{\mathbf{\varepsilon}}_{i}^{0}\) from all the hits to derive the local track parameters, \(\Delta\widetilde{\mathbf{q}}_{i}\), which represent the alignment corrections on the \(i\)-th track trajectory. Minimization of the fitting \(\widetilde{\chi}^{2}\) leads to the partial derivative with respect to each local track parameter, \(\widetilde{q}_{il}\), equal to zero: \[0=\frac{\partial\widetilde{\chi}^{2}}{\partial\widetilde{q}_{il}}\simeq 2 \sum_{j=1}^{n_{meas}}\Bigl{(}\frac{\partial\mathbf{\varepsilon}_{ij}}{\partial \widetilde{q}_{il}}\Bigr{)}^{\mathsf{T}}\Bigl{(}\widetilde{\mathbf{\varepsilon}}_{ ij}^{0}+\sum_{l^{\prime}}\frac{\partial\mathbf{\varepsilon}_{ij}}{\partial \widetilde{q}_{il^{\prime}}}\Delta\widetilde{q}_{il^{\prime}}\Bigr{)}=2\sum_{j= 1}^{n_{meas}}\Bigl{(}\frac{\partial\mathbf{\varepsilon}_{ij}}{\partial \widetilde{q}_{il}}\Bigr{)}^{\mathsf{T}}\Bigl{(}\sum_{g^{\prime}}\frac{ \partial\mathbf{\varepsilon}_{ij}}{\partial p_{g^{\prime}}}\Delta p_{g^{\prime}}+ \sum_{l^{\prime}}\frac{\partial\mathbf{\varepsilon}_{ij}}{\partial\widetilde{q}_{ il^{\prime}}}\Delta\widetilde{q}_{il^{\prime}}\Bigr{)} \tag{56}\] Eq.(56) can be simplified in matrix form as: \[\mathbf{0}=(\widetilde{\mathbf{G}}^{i})^{\mathsf{T}}\Delta\mathbf{p}+\widetilde{ \Gamma}^{i}\Delta\widetilde{\mathbf{q}}_{i} \tag{57}\] where \(\widetilde{\mathbf{G}}^{i}\) is a matrix whose \((g,l^{\prime})\) entry is given by: \[\widetilde{G}^{i}_{gl^{\prime}}=\sum_{j=1}^{n_{meas}}\Bigl{(}\frac{\partial \mathbf{\varepsilon}_{ij}}{\partial p_{g}}\Bigr{)}^{\mathsf{T}}\frac{\partial\bm {\varepsilon}_{ij}}{\partial\widetilde{q}_{il^{\prime}}} \tag{58}\] and \(\widetilde{\Gamma}^{i}\) is a matrix whose \((l,l^{\prime})\) entry is given by: \[\widetilde{\Gamma}^{i}_{ll^{\prime}}=\sum_{j=1}^{n_{meas}}\Bigl{(}\frac{ \partial\mathbf{\varepsilon}_{ij}}{\partial\widetilde{q}_{il}}\Bigr{)}^{\mathsf{ T}}\frac{\partial\mathbf{\varepsilon}_{ij}}{\partial\widetilde{q}_{il^{\prime}}} \tag{59}\] The partial derivatives of the residual with respect to the local track parameters, \(\partial\mathbf{\varepsilon}_{ij}/\partial\widetilde{\mathbf{q}}_{i}\), for Eqs.(58) (59), are derived from the track fitting algorithm (e.g. the GBL algorithm) without multiple scattering. 
Hence, the local track parameters, \(\Delta\widetilde{\mathbf{q}}_{i}\), which represent the \(i\)-th track trajectory change by the alignment, are obtained from Eq.(57) as: \[\Delta\widetilde{\mathbf{q}}_{i}=\bigl{[}-(\widetilde{\Gamma}^{i})^{-1}(\widetilde {\mathbf{G}}^{i})^{\mathsf{T}}\bigr{]}\Delta\mathbf{p}=\widetilde{\mathbf{H}}^{i} \Delta\mathbf{p} \tag{60}\] where \(\Delta\widetilde{\mathbf{q}}_{i}=\bigl{(}\Delta\rho_{i}=\Delta\widetilde{R}_{i}^{- 1},\Delta\widetilde{q}_{i2},\Delta\widetilde{q}_{i3},\Delta\widetilde{q}_{i4}, \Delta\widetilde{q}_{i5}\bigr{)}^{\mathsf{T}}\) has only 5 parameters, much fewer than \(\Delta\mathbf{q}_{i}\) with multiple scattering appearing in Eqs.(B.7) (52), and the matrix \(\widetilde{\mathbf{H}}^{i}\) is given by \(\widetilde{\mathbf{H}}^{i}=-(\widetilde{\Gamma}^{i})^{-1}(\widetilde{\mathbf{ G}}^{i})^{\mathsf{T}}\). As \(\Delta\rho_{i}=\Delta\widetilde{q}_{i1}\) in Eq.(60), the partial derivative of the curvature change with respect to the \(g\)-th global alignment parameter, \(\partial\rho_{i}/\partial p_{g}\), is the \((1,g)\) entry of \(\widetilde{\mathbf{H}}^{i}\): \[\frac{\partial\rho_{i}}{\partial p_{g}}=\widetilde{\mathbf{H}}^{i}(1,g) \tag{61}\] The variance of \(\rho_{i}\), namely \(Z_{i}\), present in both \(\mathbf{d^{\prime}}^{i}\) of Eq.(53) and \(\mathbf{C^{\prime}}^{i}\) of Eq.(54), is also needed for the alignment. As inferred from Eq.(60), \(Z_{i}\) can be interpreted as the error propagation from a given covariance matrix of \(\Delta\mathbf{p}\) denoted by \(\widetilde{\mathbf{V}}^{\Delta\mathbf{p}}\), as: \[Z_{i}=\bigl{[}\widetilde{\mathbf{H}}^{i}\widetilde{\mathbf{V}}^{\Delta\mathbf{p}} (\widetilde{\mathbf{H}}^{i})^{\mathsf{T}}\bigr{]}(1,1) \tag{62}\] Each layer alignment translation parameter can be assigned an error, \(\widetilde{\sigma}\), for the calculation of \(\widetilde{\mathbf{V}}^{\Delta\mathbf{p}}\) as \(\widetilde{\mathbf{V}}^{\Delta\mathbf{p}}=\widetilde{\mathbf{V}}(\widetilde{ \sigma})\) and propagated to \(Z_{i}(\widetilde{\sigma})\) as: \[Z_{i}(\widetilde{\sigma})=\bigl{[}\widetilde{\mathbf{H}}^{i}\widetilde{\mathbf{ V}}(\widetilde{\sigma})(\widetilde{\mathbf{H}}^{i})^{\mathsf{T}}\bigr{]}(1,1) \tag{63}\] Note that \(Z_{i}\) is set via \(\widetilde{\sigma}\) rather than itself merely for the sake of understanding: for instance, \(Z_{i}(\widetilde{\sigma})\) with \(\widetilde{\sigma}=10\)\(\upmu\)m is equal to the curvature variance (squared error) arising from a position uncertainty of \(10\)\(\upmu\)m on each tracker layer. The assignment of \(\widetilde{\sigma}\) passing to \(Z_{i}(\widetilde{\sigma})\) should be optimized to attain the best alignment precision as discussed below in section 8.3. ### Alignment data sample Most of the collected cosmic-ray events are at low rigidities, below 10 GV [7][30]. To achieve micron level alignment accuracy for each sensor, the alignment will require billions of cosmic-ray events to overcome the multiple scattering arising from the detector materials, especially the large amounts between the external layers and inner tracker (\(\sim\)0.3 \(X_{0}\) between L1 and L2 and \(\sim\)0.2 \(X_{0}\) between L8 and L9). Multiple scattering decreases linearly with increasing rigidity [21]. By selecting the latitude and longitude where the minimal geomagnetic cutoff [32][33] in the AMS field of view is greater than 7.6 GV, the number of events at rigidities below 10 GV is reduced to 5%; while \(\sim\)40% of the high rigidity (\(>30\) GV) events are kept for the alignment. 
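Returning to the curvature constraint of section 8.1, Eqs.(58)-(63) amount to a few lines of linear algebra; the following sketch uses hypothetical array shapes and is not the AMS implementation.

```python
import numpy as np

def curvature_derivatives(deps_dp, deps_dq):
    """d(rho)/d(p) of Eq.(61) and the matrix H~ of Eq.(60), obtained from a
    refit of the alignment-induced residual shifts without multiple scattering.

    deps_dp : (n_meas, 2, n_global)  d(residual)/d(global parameters)
    deps_dq : (n_meas, 2, 5)         d(residual)/d(5 local track parameters),
                                     with the curvature rho as the first one
    """
    G = np.einsum('jkg,jkl->gl', deps_dp, deps_dq)      # Eq.(58)
    Gamma = np.einsum('jkl,jkm->lm', deps_dq, deps_dq)  # Eq.(59)
    H = -np.linalg.solve(Gamma, G.T)                    # Eq.(60): H~ = -Gamma~^-1 G~^T
    return H[0, :], H                                   # first row = d(rho)/d(p), Eq.(61)

def curvature_variance(H, sigma_tilde, translation_mask):
    """Z_i(sigma~) of Eq.(63): propagate a position uncertainty sigma~,
    assigned to every layer translation parameter, to the curvature."""
    Vp = np.diag(np.where(translation_mask, sigma_tilde**2, 0.0))
    return (H @ Vp @ H.T)[0, 0]
```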
In the static alignment data sample, there are 1.6 billion cosmic-ray events, which corresponds to the full AMS dataset from May 2011 to January 2015 (over 3.5 years period). The track information from all those events is filled into one matrix to solve all the alignment parameters in one step (see sections 5 and 8.1). Owing to the massive amount of data used, the statistical error in the alignment is negligible. ### Alignment procedure After the previous dynamic alignment, the external tracker layers have been aligned with respect to the inner tracker. Next, the modules from the external layers and inner tracker can be aligned together to reduce the overall misalignment. In particular, the positions of the external layers in the ladder or sensor level can help to improve the alignment precision of the inner tracker. The developed global alignment approach as discussed in sections 3, 4, and 8.1 is applied for the static alignment. The GBL algorithm with multiple scattering and with free curvature (\(1/R\)) track fitting is used to derive the residuals \(\mathbf{\varepsilon}_{ij}(\mathbf{q}_{i}^{0},\mathbf{p}^{0})\), the partial derivatives with respect to the local track parameters of the residuals \(\partial\mathbf{\varepsilon}_{ij}/\partial\mathbf{q}_{i}\) and the scattering angles \(\partial\mathbf{\beta}_{ij}/\partial\mathbf{q}_{i}\), for Eqs.(B.5)(B.8)(B.9)(53). The GBL algorithm without multiple scattering and with free curvature track fitting is used to derive the partial derivatives of the residuals with respect to the local track parameters \(\partial\mathbf{\varepsilon}_{ij}/\partial\widetilde{\mathbf{q}}_{i}\) for Eqs.(58)(59). #### 8.3.1 Alignment validation with Monte Carlo An isotropic cosmic-ray Monte Carlo sample produced by Geant4 [29][7][30] is used to validate the static alignment. All the tracker modules in the MC are randomly displaced by Gaussian sampling using the displacement parameters similar to the flight data. The static alignment (see Eq.(51)) accuracy is optimized by varying the curvature constraint, namely \(\widetilde{\sigma}\), which defines the curvature variance \(Z_{i}(\widetilde{\sigma})\) for the alignment as shown in Eq.(63). Figure 19 shows the distributions of the proton full-span rigidity resolution (\(\delta R_{19}^{-1}\)) at 1.5 TV for no module displacement (dashed histogram), displaced modules before alignment (solid histogram), and displaced modules after alignment with \(\widetilde{\sigma}=200\)\(\upmu\)m (full circle histogram). As seen, the developed alignment procedure is capable of restoring most of the smeared rigidity resolution. Note that the small shift in the mean of the measured rigidity will be precisely corrected by using the rigidity-scale determination procedure in section 8.3.4. Figure 20 shows the proton rigidity resolutions of (a) the inner tracker (\(R_{28}\)), (b) L1 and the inner tracker (\(R_{18}\)), and (c) the full-span tracker (\(R_{19}\)) as functions of the curvature constraint \(\widetilde{\sigma}\) (full circles and dot-dashed curves). As seen, in the alignment, the optimal values of \(\widetilde{\sigma}\) that derive the best rigidity resolutions, are \(\sim\)150 \(\upmu\)m for \(R_{28}\), \(\sim\)200 \(\upmu\)m for \(R_{18}\), and \(\sim\)280 \(\upmu\)m for \(R_{19}\). 
It is clear that the curvature constraint \(\widetilde{\sigma}\) should be neither too tight as that will force no track curvature change before and after alignment, nor too loose as that will result in arbitrary change of the curvature in the alignment. Compared with a typical tracker intrinsic position resolution of \(\sim\)10 \(\upmu\)m, \(Z_{i}(\widetilde{\sigma})\) with \(\widetilde{\sigma}\sim 200\)\(\upmu\)m is a rather loose variance, which is equal to a curvature variance transformed from a position uncertainty of \(\sim\)200 \(\upmu\)m on each tracker layer. #### 8.3.2 Alignment optimization for the flight data As shown in the MC study (section 8.3.1), the alignment precision is sensitive to the curvature variance, \(Z_{i}(\widetilde{\sigma})\), used in Eq.(51), which also needs to be derived from the flight data. The primary goal for the alignment is to improve the track curvature (\(1/R\)) measurement precision, i.e. to reduce the curvature bias. Residuals cannot be used for the study of the curvature misalignment as the curvature bias cannot be seen from the residuals. However, the curvature bias or rigidity bias is very sensitive to the cosmic-ray flux measurement -- or, more precisely, the rigidity dependence of the cosmic-ray flux measured at high rigidities [7][30][34]. This feature can be exploited to probe the curvature misalignment. Figure 19: The distributions of the proton full-span rigidity resolution (\(\delta R_{19}^{-1}\)) at 1.5 TV for the MC samples with no tracker module displacement (dashed histogram), displaced modules before alignment (solid histogram), and displaced modules after alignment using \(\widetilde{\sigma}=200\)\(\upmu\)m (full circle histogram). Figure 20: The proton rigidity resolutions \(\sigma(1/R)\) of (a) the inner tracker (\(R=R_{28}\)), (b) L1 and the inner tracker (\(R=R_{18}\)), and (c) the full-span tracker (\(R=R_{19}\)) at 1.5 TV as functions of the curvature constraint \(\widetilde{\sigma}\), obtained from the alignment on the MC with the tracker modules displaced (full circles and dot-dashed curves). The rigidity resolutions for no module displacement (dashed lines) and displaced modules before alignment (solid lines) are also shown. As cosmic rays are isotropic, for an ideal tracker without misalignment, the cosmic-ray fluxes measured with a similar pattern in layers but different detector-module combinations, such as different ladder combinations (see one ladder combination illustrated in Fig. 21 (a)), are expected to be the same. Therefore, as a result of the differential curvature bias, the deviation of the fluxes or the rigidity dependencies of the event rates (the number of the collected events per second) obtained from different ladder combinations, is used as an estimator of the tracker misalignment. To display the relative rigidity dependence, the cosmic-ray event rates measured from the \(i\)-th ladder combination are divided by the event rates measured with the total tracker, denoted by \(n_{i}/n\). 
Then the obtained \(n_{i}/n\) is normalized by its acceptance fraction \(A_{i}/A\), as: \[\frac{\widehat{n}_{i}}{\widehat{n}}=\frac{n_{i}/n}{A_{i}/A}=\frac{n_{i}/n}{\aleph_{i}/\aleph} \tag{64}\] where \(\aleph_{i}\) is the total number of events for the \(i\)-th ladder combination, which sums up all the passing events above 30 GV -- the rigidity region that has no influence from the geomagnetic field [35]; and \(\aleph_{i}/\aleph\) is the ratio of the total events between the \(i\)-th ladder combination and the full tracker, which is used to calculate the acceptance fraction as \(A_{i}/A=\aleph_{i}/\aleph\). For the \(i\)-th ladder combination, the normalized event ratio, \(\widehat{n}_{i}/\widehat{n}\), is fitted over the high rigidity range 90-1000 GV to derive the event-ratio slope \(k_{i}\), with: \[\frac{\widehat{n}_{i}}{\widehat{n}}=k_{i}\text{log}(R)+b_{i} \tag{65}\] where the slope \(k_{i}\) and the intercept \(b_{i}\) are the two fitting parameters. As an illustration, Fig. 21 (b) shows the slope fits to the normalized event ratios of 4 different ladder combinations.

Figure 21: (a) Schematic of a ladder combination of the inner tracker and (b) the slope fits (lines) to the normalized event ratios (\(\widehat{n}_{i}/\widehat{n}\)) of 4 different ladder combinations with each specified by a set of symbols (up triangles, squares, circles, or down triangles). The different rigidity dependences of the ratios, or the deviation among the slopes, induced by the different curvature biases, are clearly seen.

A clear deviation among the event-ratio slopes of different ladder combinations is seen. For each track pattern of L2-L8, L1-L8, or L1-L9, the standard deviation of the event-ratio slopes from the 1000 most populated ladder combinations (i.e. with the largest number of passing events) is used as a gauge to evaluate the misalignment. Figure 22 shows the standard deviations of the event-ratio slopes, \(\sigma(k)\), as functions of the curvature constraint \(\widetilde{\sigma}\) used in the alignment, for the ladder combinations of (a) the inner tracker (\(R_{28}\)), (b) L1 and the inner tracker (\(R_{18}\)), and (c) the full-span tracker (\(R_{19}\)). As seen, the optimal values of \(\widetilde{\sigma}\) for the flight data that have the minimal curvature misalignment, are \(80-150\)\(\upmu\)m for \(R_{28}\), \(150-200\)\(\upmu\)m for \(R_{18}\), and \(\sim\)280 \(\upmu\)m for \(R_{19}\), which are consistent with the previous estimation from the MC (section 8.3.1). Taking all the track patterns (L2-L8, L1-L8, and L1-L9) into account, the curvature constraint of \(\widetilde{\sigma}=200\)\(\upmu\)m is chosen for the static alignment. As shown in the figure, after the static alignment, the quality of the rigidity measurement or the rigidity resolution has been significantly improved. There is also no misalignment of the residuals after this step. However, a small remaining misalignment of the curvature still exists and is further reduced by the 2nd static alignment performed afterwards using the curvature alignment approach introduced below.

#### 8.3.3 Refinement with the curvature alignment

In the 2nd static alignment, the alignment corrections obtained from the 1st static alignment are applied.
Different from the 1st static alignment, which was using zero mean for the curvature constraint as \(\rho_{i}(\mathbf{p})=\rho_{i}(\mathbf{p}^{0})+\sum_{g^{\prime}}\frac{\partial\rho_{i}} {\partial p_{g^{\prime}}}\Delta p_{g^{\prime}}\) with \(\rho_{i}(\mathbf{p}^{0})=0\) in Eq.(51), the 2nd static alignment, namely the curvature alignment, uses the curvature bias \(\rho_{i}(\mathbf{p}^{0})\) estimated from the data to further improve the result. The method to obtain \(\rho_{i}(\mathbf{p}^{0})\) is based on the isotropic property of cosmic-ray fluxes, i.e. the same rigidity dependence of the cosmic-ray event rates measured with the different detector-module combinations. In the \(j\)-th rigidity bin \([R_{j},R_{j+1}]\), the event rate, \(n_{j}=N_{j}/T\) (the number of the events per second), measured from the total tracker which has a small curvature misalignment of \(\rho\), can be described by: \[n_{j}(\rho)=\int_{R_{j}}^{R_{j+1}}\frac{dR}{R^{2}}\int_{0}^{\infty}\Phi(R_{0}) A(R_{0})M\Big{(}R_{0},\frac{1}{R}-\frac{1}{R_{0}}+\rho\Big{)}dR_{0} \tag{66}\] where \(1/R+\rho\) and \(1/R\) are the measured inverse rigidities with and without the curvature bias respectively, \(R_{0}\) is the true rigidity before detector resolution smearing, \(\Phi(R_{0})\) is the cosmic-ray flux, \(A(R_{0})\) is the acceptance of the tracker, and \(M(R_{0},1/R-1/R_{0}+\rho)\) is the probability density function of the tracker rigidity resolution for a given true rigidity \(R_{0}\) expressed as a function of \(1/R-1/R_{0}+\rho\). The total tracker is assumed to have no curvature bias as \(\rho=0\). With \(A\) and \(M\) parameterized from the MC simulation, the parameterization of \(\Phi\) is obtained from the fit to the event rates measured with the total tracker. In the \(j\)-th rigidity bin, the ratio of the event rate of the \(i\)-th detector-module combination, \(n_{ij}\), to the total event rate, \(n_{j}\), is: \[\frac{n_{ij}}{n_{j}}=\frac{f_{i}n_{j}(\rho=\rho_{i})}{n_{j}(\rho=0)}=\frac{f_ {i}\int_{R_{j}}^{R_{j+1}}\frac{dR}{R^{2}}\int_{0}^{\infty}\Phi(R_{0})A(R_{0}) M\big{(}R_{0},\frac{1}{R}-\frac{1}{R_{0}}+\rho_{i}\big{)}dR_{0}}{\int_{R_{j}}^{R_{j+1 }}\frac{dR}{R^{2}}\int_{0}^{\infty}\Phi(R_{0})A(R_{0})M\big{(}R_{0},\frac{1}{ R}-\frac{1}{R_{0}}\big{)}dR_{0}} \tag{67}\] Figure 22: The standard deviations of the cosmic-ray event-ratio slopes (flux rigidity dependences), \(\sigma(k)\), as functions of the curvature constraint \(\widetilde{\sigma}\) for the ladder combinations of (a) the inner tracker (\(R_{28}\)), (b) L1 and the inner tracker (\(R_{18}\)), and (c) the full-span tracker (\(R_{19}\)), obtained from the static alignment on the flight data (full circles and dot-dashed curves). The deviations of the slopes before the static alignment (solid lines) and the statistical limits due to the slope uncertainties arising from the limited number of cosmic-ray events at high rigidities (dashed lines) are also shown. where \(\rho_{i}\) is the curvature bias of the \(i\)-th detector-module combination, \(f_{i}=A_{i}/A\) is the constant acceptance ratio of the \(i\)-th detector-module combination to the total tracker, and \(n_{ij}=A_{i}/A\cdot n_{j}(\rho=\rho_{i})=f_{i}n_{j}(\rho=\rho_{i})\). From the fit of Eq.(67) to the event-rate ratio at high rigidity bins (90-1000 GV as in Fig. 21 (b)), the curvature bias \(\rho_{i}\) is obtained. 
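As a minimal numerical sketch of Eqs.(66) and (67), one can assume a power-law flux, a constant acceptance and a Gaussian \(1/R\) resolution, generate a toy event-rate ratio for one ladder combination, and fit its curvature bias; all the numbers below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import curve_fit

SIGMA_INV_R = 0.3e-3                      # assumed 1/R resolution in 1/GV (TV-region ballpark)
R_EDGES = np.geomspace(90.0, 1000.0, 11)  # high-rigidity bins used for the fit, in GV

def rate(r_edges, rho, flux_index=-2.7, n_r0=2000):
    """Expected events per rigidity bin for a curvature bias rho (Eq. 66), with an
    assumed power-law flux, constant acceptance and Gaussian 1/R resolution."""
    R0 = np.geomspace(20.0, 5000.0, n_r0)                 # true rigidity grid
    phi_times_A = R0 ** flux_index                        # flux x acceptance (arbitrary units)
    out = []
    for lo, hi in zip(r_edges[:-1], r_edges[1:]):
        R = np.geomspace(lo, hi, 50)                      # measured rigidity grid in the bin
        M = np.exp(-0.5 * ((1.0 / R[:, None] - 1.0 / R0[None, :] + rho) / SIGMA_INV_R) ** 2)
        inner = trapezoid(phi_times_A * M, R0, axis=1)    # integral over R0
        out.append(trapezoid(inner / R**2, R))            # integral over dR/R^2
    return np.array(out)

def ratio_model(r_edges, rho, f):
    """Eq.(67): event-rate ratio of one module combination to the full tracker."""
    return f * rate(r_edges, rho) / rate(r_edges, 0.0)

# toy "measurement" of one ladder combination with a 0.15/TV curvature bias, then the fit
rng = np.random.default_rng(2)
measured = ratio_model(R_EDGES, 0.15e-3, 0.02) * rng.normal(1.0, 0.01, R_EDGES.size - 1)
(rho_fit, f_fit), _ = curve_fit(ratio_model, R_EDGES, measured, p0=(0.0, 0.02))
print(f"fitted curvature bias: {rho_fit*1e3:.3f} /TV")
```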
After the 1st static alignment, 2500 ladder combinations of the inner tracker (86% of the total sample), 4000 ladder combinations of L1 and the inner tracker (88% of the total sample), and 2000 ladder combinations of the full-span tracker (77% of the total sample) are estimated for their remaining curvature biases, which are used as the curvature reference \(\rho_{i}(\mathbf{p}^{0})\) in Eq.(51) for the 2nd static alignment. To illustrate the full (1st and 2nd) static alignment improvement, Fig. 23 shows the curvature biases of the 1000 most populated ladder combinations (\(\sim\)60% of the total sample) for each track pattern (L2-L8, L1-L8, or L1-L9) before the static alignment (open squares) and after the full static alignment (full circles), which are derived from Eq.(67). Figure 24 summarizes the curvature misalignments, defined as the standard deviations of curvature biases of the 1000 ladder combinations, \(\sigma(\rho)\), for the individual track patterns, together with the statistical limits (dashed line) due to the uncertainties arising from the limited number of cosmic-ray events at high rigidities in the curvature bias determination. As seen, with the static alignment approach, the misalignment of the tracker has been greatly reduced for all the track patterns. Figure 23: The curvature biases of the 1000 most populated ladder combinations for the individual track patterns of (a) L2-L8 (\(R_{28}\)), (b) L1-L8 (\(R_{18}\)), and (c) L1-L9 (\(R_{19}\)) before the static alignment (open squares) and after the full static alignment (full circles). #### 8.3.4 Determination of the total absolute rigidity scale After the previous 2 rounds of static alignment, the tracker becomes homogeneous, i.e. the relative curvature bias from module combination to module combination has vanished. However, the whole tracker can have an overall curvature bias, or a shift in the total absolute rigidity scale, which behaves as a coherent shift in the positions of the tracker layers. To determine the total absolute rigidity scale in space, a method using cosmic-ray electrons (\(e^{-}\)) and positrons (\(e^{+}\)) events to calibrate the detector has been developed. Similar method to estimate the curvature bias was used in the CMS experiment [36]. The basic idea is to use the property that the deflection curves of the track trajectories in the magnetic field are mirrored between a charged particle and its anti-particle with the same energy. When a coherent shift in the tracker layers occurs, the measured absolute inverse rigidity, \(|1/R|\), will be shifted by a positive (negative) and by a negative (positive) value for \(e^{-}\) and \(e^{+}\) respectively. The rigidity scale shift therefore can be evaluated by comparing the \(|1/R|\) distributions between \(e^{-}\) and \(e^{+}\) events with the same energy measured in the AMS electromagnetic calorimeter detector. To make full use of the collected cosmic-ray \(e^{+}\) and \(e^{-}\) events with different energies, an unbinned likelihood method was developed. The detailed description of the method is presented in Ref. [34]. Using this approach, the total rigidity scale is established with an accuracy of \(\pm 1/34\)\(\mathrm{TV^{-1}}\) based on 10 years of AMS data, limited mostly by the available positron statistics. The estimated small correction for the total curvature bias is converted into position offsets of the individual tracker layers [34], adding to the layer alignment parameters. 
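The idea behind the \(e^{\pm}\) rigidity-scale determination can be sketched with a toy model in which electrons and positrons share the same energy spectrum, so that a coherent curvature bias is simply the average of the two sample means of the signed \(1/R\); this is a simplified stand-in for the unbinned likelihood of Ref. [34], and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def measured_inverse_rigidity(n, charge_sign, delta, sigma=0.3e-3):
    """Toy signed 1/R (in 1/GV) for n e+ or e- events sharing the same energy
    spectrum, with a coherent curvature bias delta and Gaussian resolution."""
    E = rng.uniform(50.0, 300.0, n)                 # calorimeter energy, GeV
    return charge_sign / E + rng.normal(0.0, sigma, n) + delta

delta_true = 0.3e-3                                  # 0.3/TV coherent scale shift
inv_r_electrons = measured_inverse_rigidity(200_000, -1, delta_true)
inv_r_positrons = measured_inverse_rigidity(20_000, +1, delta_true)   # far fewer positrons

# with identical spectra the physical parts average to zero, so the mean of the
# two sample means estimates the coherent bias
delta_hat = 0.5 * (inv_r_electrons.mean() + inv_r_positrons.mean())
print(f"estimated scale shift: {delta_hat*1e3:.2f} /TV  (true: {delta_true*1e3:.2f} /TV)")
```

In such a toy the uncertainty of the estimate is dominated by the smaller positron sample, which reflects why the flight-data determination uses an unbinned likelihood over all energies and is limited mostly by the available positron statistics.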
Figure 24: The curvature misalignments, defined as the standard deviations of curvature biases of the 1000 most populated ladder combinations, \(\sigma(\rho)\), for the track patterns of the inner tracker (\(R_{28}\)), L1 and the inner tracker (\(R_{18}\)), and the full-span tracker (\(R_{19}\)) before (open squares) and after (full circles) the static alignment. The statistical limits (dashed line) due to the uncertainties arising from the limited number of cosmic-ray events at high rigidities in the curvature bias determination are also shown. ### Alignment results The results of the static alignment are classified into several aspects shown in the following sections. #### 8.4.1 Displacements of the tracker modules during launch After the static alignment, we obtain the changes between the positions of the tracker modules in space and those on the ground, which are expressed as the alignment parameters shown in Fig. 25. As seen in Fig. 25 (a-f), the translations of the inner tracker layers are \(\sim\)1 \(\upmu\)m, \(\sim\)1 \(\upmu\)m, and \(\sim\)32 \(\upmu\)m along the \(u_{P}\)-, \(v_{P}\)-, and \(w_{P}\)-axes (\(x\)-, \(y\)-, and \(z\)-axes) respectively and the rotations are \(\sim\)0.015 mrad, \(\sim\)0.04 mrad, and \(\sim\)0.004 mrad around the \(u_{P}\)-, \(v_{P}\)-, and \(w_{P}\)-axes respectively. The translation of \(\sim\)32 \(\upmu\)m along the \(w_{P}\)-axis (\(z\)-axis) can be explained by the outgassing of the support structure, i.e. the foam in the ladder reinforcement frame (see Fig. 2 (a)), which happened very rapidly under vacuum. This is confirmed by the fact that the odd and even layers of the inner tracker are shifted in opposite \(z\)-direction (see Fig. 25 (c)), since their ladders are mounted oppositely. Apart from that, the support structure of the inner tracker planes (the carbon fiber cylinder), exhibits excellent mechanical stability, holding the layers of the inner tracker in place at the micron level through the launch. As seen in Fig. 25 (g-l), the translations of the ladders are \(\sim\)13 \(\upmu\)m, \(\sim\)11 \(\upmu\)m, and \(\sim\)13 \(\upmu\)m along the \(u_{L}\)-, \(v_{L}\)-, and \(w_{L}\)-axes respectively and the rotations are \(\sim\)0.4 mrad, \(\sim\)0.1 mrad, Figure 25: The distributions of the alignment parameters of layers (top row), ladders (middle row), and sensors (bottom row) obtained from the static alignment of all the tracker modules in space. The layer alignment parameters of L1 and L9 are not included in the plots (a-f) as they are dynamically aligned using the position of the inner tracker for the reference. and \(\sim\)0.03 mrad around the \(u_{L}\)-, \(v_{L}\)-, and \(w_{L}\)-axes respectively. The sizable changes of the ladder positions are the major sources of the tracker misalignment in space. As seen in Fig. 25 (m-o), the translations of the sensors are \(\sim\)16 \(\upmu\)m and \(\sim\)5 \(\upmu\)m along the \(u_{s}\)- and \(v_{s}\)-axes respectively and the rotation is \(\sim\)0.1 mrad around the \(w_{s}\)-axis, which are also not small changes. In particular, the largest translation of \(\sim\)16 \(\upmu\)m along the \(u_{s}\)-axis reveals a systematic change of the ladder structure after the launch, that is an increased distance between the adjacent sensors in a ladder. The reason might also be related to the deformation of the foam in the ladder reinforcement frame. 
Figure 26 shows the residual biases of the individual sensors of the inner tracker in the sensor \(u_{s}\)- and \(v_{s}\)-directions before and after the static alignment for a selected cosmic-ray proton sample with rigidity \(R>30\) GV based on 10 years of AMS data. Obvious displacements of the tracker modules induced by the launch (before the static alignment) are seen. After the alignment, there is no bias in the residual of each sensor. #### 8.4.2 Stability of the tracker modules in space We have also examined the position stability of the inner tracker sensors in space through their residuals over time. During the 10 year period, in the microgravity environment, the changes of the sensor positions are found to be very small. In order to increase the sensitivity of detecting the tracker movement in space, a similar approach as in section 8.3.3 is applied to estimate the time dependent rigidity-scale shift of Figure 26: The residual biases of the individual sensors of the inner tracker in (a) the \(u_{s}\) direction and (b) the \(v_{s}\) direction before (open squares) and after (full circles) the static alignment for a selected cosmic-ray proton sample with rigidity \(R>30\) GV based on 10 years of AMS data. A circle or square represents a residual bias of each sensor. The circles or squares of a common group are the sensors from the same half of a tracker layer. the total tracker, by using that the cosmic-ray flux at high rigidities is constant in time. The curvature biases, or the rigidity-scale shifts, are measured in 40 time periods of 3 months each by fitting the measured event-rate ratios of those periods to the total over 10 years (\(n_{i=1-40}/n\)) with a function similar to Eq.(67). Figure 27 shows the rigidity-scale shifts as a function of time over 10 years obtained from the event rates of cosmic-ray protons (open symbols) and helium (full symbols) measured using the inner tracker (\(R_{28}\), circles), L1 and the inner tracker (\(R_{18}\), squares), and the full-span tracker (\(R_{19}\), triangles). As seen, the slow shift of the rigidity scale, or the long-term movement of the inner tracker, is evident before 2015 and progressively decreasing to near zero around 2016. The amplitude of this movement is fairly small, as the maximum rigidity-scale change of \(\sim\)0.18 TV\({}^{-1}\) shown in the figure is equivalent to a displacement of an inner tracker layer of \(<\)1 \(\upmu\)m [34]. It is also shown in the figure that the shift of the rigidity measured with the external layers (\(R_{18}\) or \(R_{19}\)) perfectly follows the shift of the rigidity measured with only the inner tracker (\(R_{28}\)), proving the high stability and reliability of the L1 and L9 dynamic-alignment procedure. The small correction for the time dependent rigidity-scale shift is converted into position offsets of the individual tracker layers [34], adding to the layer alignment parameters. #### 8.4.3 Alignment precision After the static alignment, the misalignment in the residual, or incoherent misalignment, is negligible (under a micron as seen in Fig. 26) compared with the intrinsic tracker coordinate resolution. 
Figure 28 shows the Gaussian sigma of the \(v_{s}\) residual, that is the \(v_{s}\) coordinate difference between the measurement from a sensor of L5 and the prediction from the track fit Figure 27: The rigidity-scale shifts as a function of time over 10 years obtained from the event rates of cosmic-ray protons (open symbols) and helium (full symbols) measured using the inner tracker (\(R_{28}\), circles), L1 and the inner tracker (\(R_{18}\), squares), and the full-span tracker (\(R_{19}\), triangles). The solid curve shows the fit with a logistic function. using the other layers, as functions of the incident particle direction in the sensor \(v_{s}w_{s}\)-plane, \(dv_{s}^{p}/dw_{s}^{p}\), for cosmic-ray helium (triangles) and carbon (full circles) nuclei with rigidities \(R>50\) GV. Owing to the precise alignment together with the advanced position finding algorithm [9], the average \(v_{s}\) or \(y\) coordinate resolutions are 6.5 (7.5) \(\upmu\)m for helium and 5.1 (5.8) \(\upmu\)m for carbon in the full-span (L1 and inner) tracker geometry. The detailed performance of the AMS tracker coordinate resolutions for all charged particles up to \(Q=26\) can be found in Ref. [9]. Another source of the misalignment in the static alignment is the misalignment of the curvature, or coherent misalignment, which is not visible in the residual and is more crucial. The curvature misalignment can be split into two parts: (a) the overall curvature bias that will shift the mean of the measured rigidity and (b) the differential curvature bias that will degrade the rigidity resolution. The overall curvature bias, or the rigidity scale shift of the total tracker, has been corrected to an accuracy of \(\pm 1/34\)\(\mathrm{TV}^{-1}\) by using cosmic-ray electrons and positrons events with the procedure discussed in section 8.3.4. The differential curvature biases for the different combinations of the tracker modules can smear the tracker resolution as shown in the MC study (see Fig. 19). With the unique alignment approach, most of the smeared rigidity resolution is recovered. By using the isotropic Figure 28: The standard deviation (\(\sigma\)) of the \(v_{s}\) residual (the \(v_{s}\) coordinate difference between the measurement from a sensor of L5 and the prediction from the track fit using the other layers), as functions of the incident particle direction \(dv_{s}^{p}/dw_{s}^{p}\) for cosmic-ray helium (triangles) and carbon (full circles) nuclei with rigidities \(R>50\) GV. The vertical dashed lines indicate the angular boundary of the full-span tracker geometrical acceptance, which includes 95% of the events. The vertical dot-dashed lines indicate the same boundary but of the L1-inner tracker acceptance. The intrinsic tracker spatial resolution is predominant in the residual \(\sigma\). The average \(v_{s}\) coordinate resolutions are 6.5 (7.5) \(\upmu\)m for helium and 5.1 (5.8) \(\upmu\)m for carbon in the full-span (L1 and inner) tracker geometry. property of cosmic-ray flux, direct assessment of the misalignment is performed on the data. As shown in Fig. 
24, after the alignment, the standard deviations of the differential curvature biases among different ladder combinations, are better than 0.18 TV\({}^{-1}\), 0.125 TV\({}^{-1}\), and 0.11 TV\({}^{-1}\) for the rigidities measured using the inner tracker (\(R_{28}\)), L1 and inner tracker (\(R_{18}\)), and full-span tracker (\(R_{19}\)) respectively, which are the misalignments equivalent to additional smearings of the measured position of each layer by less than 0.7 \(\upmu\)m, 1.2 \(\upmu\)m, and 2.7 \(\upmu\)m for \(R_{28}\), \(R_{18}\), and \(R_{19}\) respectively. This estimation is based on different ladder combinations and does not include the contribution from the misalignment of the sensors, which cannot be accurately determined from the different sensor combinations due to the limited number of cosmic-ray events per sensor combination at high rigidities. Considering that the sensor position change during launch is small, \(\sim\)5 \(\upmu\)m, in the bending direction, based on the MC simulation, we assign an error of \(\sim\)2 \(\upmu\)m to the sensor misalignment. So, combining in quadrature, the total differential curvature misalignments equivalent to the position errors of each layer are 2.1 \(\upmu\)m, 2.3 \(\upmu\)m, and 3.3 \(\upmu\)m for \(R_{28}\), \(R_{18}\), and \(R_{19}\) respectively, which are smaller than both the intrinsic spatial resolution (e.g. 5.1 \(\upmu\)m for carbon nuclei in the full-span geometry) and the alignment errors of the external layers in the dynamic alignment (7.1 \(\upmu\)m for L1 and 7.9 \(\upmu\)m for L9). ## 9 Conclusion Precise alignment of the silicon tracker is invaluable for the success of the AMS mission. We have presented a series of new methods to align the large permanent magnetic spectrometer for the space experiment, starting from the alignment with the test beam data on the ground through the alignment with the cosmic-ray events in space, with an ultimate Figure 29: The rigidity resolutions, \(\sigma(1/R)\), of L1-inner (\(R=R_{18}\)) and full-span (\(R=R_{19}\)) track patterns as functions of the true rigidity for carbon nuclei obtained from MC simulation. The corresponding maximal detectable rigidities, \(R^{M}\), with \(R^{M}\sigma(1/R^{M})\equiv 1\), are \(R_{18}^{M}=1.6\) TV and \(R_{19}^{M}=3.6\) TV. precision of a few microns achieved under harsh conditions. This allows AMS to accurately measure cosmic rays up to the multi-TV region. As an example, Fig. 29 shows the rigidity resolutions of L1-inner track pattern, \(\sigma(1/R_{18})\), and of full-span track pattern, \(\sigma(1/R_{19})\), as functions of the true rigidity for carbon nuclei after the full alignment procedure. The maximal detectable rigidities, \(R^{M}\), with \(R^{M}\sigma(1/R^{M})\equiv 1\), are \(R^{M}_{18}=1.6\) TV and \(R^{M}_{19}=3.6\) TV, correspondingly. The developments of the new mathematical alignment algorithms, such as the alignment for the composite detector structure, the alignment for the dynamic system, and the alignment in the presence of the magnetic field, are useful for various HEP experiments equipped with the tracking detectors and particularly valuable for the future spaceborne magnetic spectrometers. ## Acknowledgements We acknowledge the continuous support from MIT and its School of Science. We are grateful for the support of the U.S. Department of Energy (DOE), Office of Science. We thank the strong support from CERN IT department. We thank Dr. Michael Capell for his diligent proofreading of the manuscript. 
## Appendix A Coordinate transformation from the local sensor frame to the global tracker frame Substituting Eq.(1) into Eq.(2) gives: \[\begin{split}\mathbf{r}_{P}=&\mathbf{R}_{L}^{\sf T} \Delta\mathbf{R}_{L}\big{[}\mathbf{R}_{s}^{\sf T}\Delta\mathbf{R}_{s}(\mathbf{q}+ \Delta\mathbf{q}_{s})+\mathbf{r}_{0s}+\Delta\mathbf{q}_{L}\big{]}+\mathbf{r}_{0L}\\ \simeq&\mathbf{R}_{L}^{\sf T}\mathbf{R}_{s}^{\sf T }\big{[}(\mathbf{R}_{s}\Delta\mathbf{R}_{L}\mathbf{R}_{s}^{\sf T})\Delta \mathbf{R}_{s}\mathbf{q}+\Delta\mathbf{q}_{s}+\mathbf{R}_{s}\Delta\mathbf{R}_{L}\mathbf{r} _{0s}+\mathbf{R}_{s}\Delta\mathbf{q}_{L}\big{]}+\mathbf{r}_{0L}\end{split}\] (A.1) Subsequently, substituting Eq.(A.1) into Eq.(3) gives: \[\begin{split}\mathbf{r}_{g}\simeq&\mathbf{R}_{P}^{\sf T }\mathbf{R}_{L}^{\sf T}\mathbf{R}_{s}^{\sf T}\big{[}(\mathbf{R}_{s}\mathbf{R} _{L}\Delta\mathbf{R}_{P}\mathbf{R}_{L}^{\sf T}\mathbf{R}_{s}^{\sf T})( \mathbf{R}_{s}\Delta\mathbf{R}_{L}\mathbf{R}_{s}^{\sf T})\Delta\mathbf{R}_{s} \mathbf{q}+\Delta\mathbf{q}_{s}\\ &+(\mathbf{R}_{s}\mathbf{R}_{L}\Delta\mathbf{R}_{P}\mathbf{R}_{ L}^{\sf T}\mathbf{R}_{s}^{\sf T})\mathbf{R}_{s}\Delta\mathbf{R}_{L}\mathbf{r}_{0s}+ \mathbf{R}_{s}\Delta\mathbf{q}_{L}+\mathbf{R}_{s}\mathbf{R}_{L}\Delta\mathbf{R}_ {P}\mathbf{r}_{0L}+\mathbf{R}_{s}\mathbf{R}_{L}\Delta\mathbf{q}_{P}\big{]}+\mathbf{r}_{0P }\end{split}\] (A.2) The above equation can be simplified to: \[\mathbf{r}_{g}\simeq \mathbf{R}^{\sf T}(\mathbf{q}+\Delta\mathbf{q})+\mathbf{r}_{0}\] where \[\Delta\mathbf{q}= \big{[}(\mathbf{R}_{s}\mathbf{R}_{L}\Delta\mathbf{R}_{P}\mathbf{R}_{L }^{\mathsf{T}}\mathbf{R}_{s}^{\mathsf{T}})(\mathbf{R}_{s}\Delta\mathbf{R}_{L} \mathbf{R}_{s}^{\mathsf{T}})\Delta\mathbf{R}_{s}-\mathbf{E}\big{]}\mathbf{q}+\Delta \mathbf{q}_{s}\] \[+\big{[}(\mathbf{R}_{s}\mathbf{R}_{L}\Delta\mathbf{R}_{P}\mathbf{R }_{L}^{\mathsf{T}}\mathbf{R}_{s}^{\mathsf{T}})\mathbf{R}_{s}\Delta\mathbf{R}_{L }-\mathbf{R}_{s}\big{]}\mathbf{r}_{0s}\] (A.3) \[+\mathbf{R}_{s}\Delta\mathbf{q}_{L}+\mathbf{R}_{s}\mathbf{R}_{L}( \Delta\mathbf{R}_{P}-\mathbf{E})\mathbf{r}_{0L}+\mathbf{R}_{s}\mathbf{R}_{L} \Delta\mathbf{q}_{P}\] \[\mathbf{R}^{\mathsf{T}}= \mathbf{R}_{P}^{\mathsf{T}}\mathbf{R}_{L}^{\mathsf{T}}\mathbf{R}_ {s}^{\mathsf{T}}\] (A.4) \[\mathbf{r}_{0}= \mathbf{R}_{P}^{\mathsf{T}}\mathbf{R}_{L}^{\mathsf{T}}\mathbf{r}_{0s }+\mathbf{R}_{P}^{\mathsf{T}}\mathbf{r}_{0L}+\mathbf{r}_{0P}\] (A.5) ## Appendix B \(\chi^{2}\) minimization and alignment matrix in the global alignment Minimization of the \(\chi^{2}\) of Eq.(47) leads to the partial derivative with respect to each (\(g\)-th) global parameter \(\Delta p_{g}\) being zero: \[\begin{split}\frac{\partial\chi^{2}}{\partial p_{g}}=& 2\sum_{i=1}^{N_{track}}\sum_{j=1}^{n_{meas}} \Bigl{(}\frac{\partial\mathbf{\varepsilon}_{ij}}{\partial p_{g}}\Bigr{)}^{ \mathsf{T}}\mathbf{V}_{ij}^{-1}\mathbf{\varepsilon}_{ij}=0\\ \simeq& 2\sum_{i=1}^{N_{track}}\sum_{j=1}^{n_{meas}} \Bigl{(}\frac{\partial\mathbf{\varepsilon}_{ij}}{\partial p_{g}}\Bigr{)}^{ \mathsf{T}}\mathbf{V}_{ij}^{-1}\Bigl{[}\mathbf{\varepsilon}_{ij}(\mathbf{q}_{i}^{0}, \mathbf{p}^{0})+\sum_{l^{\prime}}\frac{\partial\mathbf{\varepsilon}_{ij}}{\partial q _{il^{\prime}}}\Delta q_{il^{\prime}}+\sum_{g^{\prime}}\frac{\partial\mathbf{ \varepsilon}_{ij}}{\partial p_{g^{\prime}}}\Delta p_{g^{\prime}}\Bigr{]}\end{split}\] (B.1) where \(\mathbf{\varepsilon}_{ij}\!\!\simeq\!\!\mathbf{\varepsilon}_{ij}(\mathbf{q}_{i}^{0},\mathbf{p }^{0})+\sum_{l^{\prime}}\frac{\partial\mathbf{\varepsilon}_{ij}}{\partial q_{il^ {\prime}}}\Delta 
q_{il^{\prime}}+\sum_{g^{\prime}}\frac{\partial\mathbf{ \varepsilon}_{ij}}{\partial p_{g^{\prime}}}\Delta p_{g^{\prime}}\) depends both on the local track parameters \(\Delta\mathbf{q}_{i}\) and the global alignment parameters \(\Delta\mathbf{p}\), and \(\mathbf{\beta}_{ij}\!\!\simeq\!\sum_{l^{\prime}}\frac{\partial\mathbf{\beta}_{ij}}{ \partial q_{il^{\prime}}}\Delta q_{il^{\prime}}\) as the intrinsic track property only depends on the local track parameters \(\Delta\mathbf{q}_{i}\). Eq.(B.1) can be further simplified in matrix form as: \[\sum_{i=1}^{N_{track}}\mathbf{d}^{i}=\Bigl{(}\sum_{i=1}^{N_{track}}\mathbf{C}^{i} \Bigr{)}\Delta\mathbf{p}+\sum_{i=1}^{N_{track}}\mathbf{G}^{i}\Delta\mathbf{q}_{i}\] (B.2) where \(\mathbf{d}^{i}\) is a vector whose \(g\)-th element is given by: \[d_{g}^{i}=-\sum_{j=1}^{n_{meas}}\Bigl{(}\frac{\partial\mathbf{\varepsilon}_{ij}}{ \partial p_{g}}\Bigr{)}^{\mathsf{T}}\mathbf{V}_{ij}^{-1}\mathbf{\varepsilon}_{ij} (\mathbf{q}_{i}^{0},\mathbf{p}^{0})\] (B.3) \(\mathbf{C}^{i}\) is a matrix whose \((g,g^{\prime})\) entry is given by: \[C_{gg^{\prime}}^{i}=\sum_{j=1}^{n_{meas}}\Bigl{(}\frac{\partial\mathbf{\varepsilon} _{ij}}{\partial p_{g}}\Bigr{)}^{\mathsf{T}}\mathbf{V}_{ij}^{-1}\frac{\partial \mathbf{\varepsilon}_{ij}}{\partial p_{g^{\prime}}}\] (B.4) and \(\mathbf{G}^{i}\) is a matrix whose \((g,l^{\prime})\) entry is given by: \[G_{g^{l^{\prime}}}^{i}=\sum_{j=1}^{n_{meas}}\Bigl{(}\frac{\partial\mathbf{ \varepsilon}_{ij}}{\partial p_{g}}\Bigr{)}^{\mathsf{T}}\mathbf{V}_{ij}^{-1} \frac{\partial\mathbf{\varepsilon}_{ij}}{\partial q_{il^{\prime}}}\] (B.5) The partial derivatives of the residual with respect to the global alignment parameters, \(\partial\mathbf{\varepsilon}_{ij}/\partial\mathbf{p}\), are from Eqs.(13) (14) (15) and with respect to the local track parameters, \(\partial\mathbf{\varepsilon}_{ij}/\partial\mathbf{q}_{i}\), are derived from the track fitting algorithm. In this paper, the track fitting was done with the custom software implementation of the General Broken Lines algorithm [20]. 
Minimization of the \(\chi^{2}\) of Eq.(47) leads the partial derivative with respect to each (\(l\)-th) local track parameter of each (\(i\)-th) track, \(\Delta q_{il}\), to equal zero: \[\begin{split}\frac{\partial\chi^{2}}{\partial q_{il}}=& 2\sum_{j=1}^{n_{meas}}\Bigl{(}\frac{\partial\mathbf{ \varepsilon}_{ij}}{\partial q_{il}}\Bigr{)}^{\mathsf{T}}\mathbf{V}_{ij}^{-1} \mathbf{\varepsilon}_{ij}+2\sum_{j=2}^{n_{scat}-1}\Bigl{(}\frac{\partial\mathbf{ \beta}_{ij}}{\partial q_{il}}\Bigr{)}^{\mathsf{T}}\mathbf{W}_{ij}^{-1}\mathbf{ \beta}_{ij}=0\\ \simeq& 2\sum_{j=1}^{n_{meas}}\Bigl{(}\frac{\partial\mathbf{ \varepsilon}_{ij}}{\partial q_{il}}\Bigr{)}^{\mathsf{T}}\mathbf{V}_{ij}^{-1} \Bigl{[}\mathbf{\varepsilon}_{ij}(\mathbf{q}_{i}^{0},\mathbf{p}^{0})+\sum_{l^{\prime}} \frac{\partial\mathbf{\varepsilon}_{ij}}{\partial q_{il^{\prime}}}\Delta q_{il^ {\prime}}+\sum_{g^{\prime}}\frac{\partial\mathbf{\varepsilon}_{ij}}{\partial p_{g ^{\prime}}}\Delta p_{g^{\prime}}\Bigr{]}\\ &+2\sum_{j=2}^{n_{scat}-1}\Bigl{(}\frac{\partial\mathbf{\beta}_{ij}} {\partial q_{il}}\Bigr{)}^{\mathsf{T}}\mathbf{W}_{ij}^{-1}\sum_{l^{\prime}} \frac{\partial\mathbf{\beta}_{ij}}{\partial q_{il^{\prime}}}\Delta q_{il^{ \prime}}\end{split}\] (B.6) Eq.(B.6) can be simplified in matrix form as: \[\mathbf{b}^{i}=(\mathbf{G}^{i})^{\mathsf{T}}\Delta\mathbf{p}+\mathsf{\Gamma}^{i} \Delta\mathbf{q}_{i}\] (B.7) where \(\mathbf{b}^{i}\) is a vector whose \(l\)-th element is given by: \[b_{l}^{i}=-\sum_{j=1}^{n_{meas}}\Bigl{(}\frac{\partial\mathbf{\varepsilon}_{ij}} {\partial q_{il}}\Bigr{)}^{\mathsf{T}}\mathbf{V}_{ij}^{-1}\mathbf{\varepsilon}_{ ij}(\mathbf{q}_{i}^{0},\mathbf{p}^{0})\] (B.8) \((\mathbf{G}^{i})^{\mathsf{T}}\) is the transpose of the matrix \(\mathbf{G}^{i}\) which is defined in Eq.(B.5), and \(\mathsf{\Gamma}^{i}\) is a matrix whose \((l,l^{\prime})\) entry is given by: \[\Gamma_{ll^{\prime}}^{i}=\sum_{j=1}^{n_{meas}}\Bigl{(}\frac{\partial\mathbf{ \varepsilon}_{ij}}{\partial q_{il}}\Bigr{)}^{\mathsf{T}}\mathbf{V}_{ij}^{-1} \frac{\partial\mathbf{\varepsilon}_{ij}}{\partial q_{il^{\prime}}}+\sum_{j=2}^{n_ {scat}-1}\Bigl{(}\frac{\partial\mathbf{\beta}_{ij}}{\partial q_{il}}\Bigr{)}^{ \mathsf{T}}\mathbf{W}_{ij}^{-1}\frac{\partial\mathbf{\beta}_{ij}}{\partial q_{il^ {\prime}}}\] (B.9) The partial derivatives of the scattering angle with respect to the local track parameters, \(\partial\mathbf{\beta}_{ij}/\partial\mathbf{q}_{i}\), are derived from the track fitting algorithm. Combining Eq.(B.2) and Eq.(B.7), all the global alignment parameters, \(\Delta\mathbf{p}\), and all the local track parameters, \(\Delta\mathbf{q}\), can be solved simultaneously from following matrix equation: \[\begin{pmatrix}\sum_{i}\mathbf{C}^{i}&\mathbf{G}^{1}&\dots&\mathbf{G}^{j}& \dots&\mathbf{G}^{N}\\ (\mathbf{G}^{1})^{\mathsf{T}}&\mathsf{\Gamma}^{1}&\dots&\mathbf{0}&\dots& \mathbf{0}\\ \vdots&\vdots&\ddots&\vdots&\ddots&\vdots\\ (\mathbf{G}^{j})^{\mathsf{T}}&\mathbf{0}&\dots&\mathsf{\Gamma}^{j}&\dots& \mathbf{0}\\ \vdots&\vdots&\ddots&\vdots&\ddots&\vdots\\ (\mathbf{G}^{N})^{\mathsf{T}}&\mathbf{0}&\dots&\mathbf{0}&\dots&\mathsf{ \Gamma}^{N}\end{pmatrix}\begin{pmatrix}\Delta\mathbf{p}\\ \Delta\mathbf{q}_{1}\\ \vdots\\ \Delta\mathbf{q}_{j}\\ \vdots\\ \Delta\mathbf{q}_{N}\end{pmatrix}=\begin{pmatrix}\sum_{i}\mathbf{d}^{i}\\ \mathbf{b}^{1}\\ \vdots\\ \mathbf{b}^{j}\\ \vdots\\ \mathbf{b}^{N}\end{pmatrix}\]
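In practice this block system need not be inverted as a whole: eliminating each track's local parameters with its own \(\mathsf{\Gamma}^{i}\) (the standard reduction used in Millepede-type alignment programs) leaves a dense system for \(\Delta\mathbf{p}\) only. A minimal sketch, with all per-track inputs assumed to be precomputed, is given below.

```python
import numpy as np

def solve_global_alignment(C_list, G_list, Gamma_list, d_list, b_list):
    """Solve the block system above for the global parameters by
    Schur-complement elimination of the local track parameters:
      (sum_i C^i - sum_i G^i Gamma^i^-1 (G^i)^T) dp = sum_i d^i - sum_i G^i Gamma^i^-1 b^i
    then back-substitute dq_i = Gamma^i^-1 (b^i - (G^i)^T dp)."""
    n_global = C_list[0].shape[0]
    A = np.zeros((n_global, n_global))
    r = np.zeros(n_global)
    for C, G, Gamma, d, b in zip(C_list, G_list, Gamma_list, d_list, b_list):
        Ginv_GT = np.linalg.solve(Gamma, G.T)        # Gamma^i^-1 (G^i)^T
        A += C - G @ Ginv_GT
        r += d - G @ np.linalg.solve(Gamma, b)
    dp = np.linalg.solve(A, r)
    dq = [np.linalg.solve(Gamma, b - G.T @ dp)
          for G, Gamma, b in zip(G_list, Gamma_list, b_list)]
    return dp, dq
```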
2301.02069
Deep Learning for Breast MRI Style Transfer with Limited Training Data
In this work we introduce a novel medical image style transfer method, StyleMapper, that can transfer medical scans to an unseen style with access to limited training data. This is made possible by training our model on unlimited possibilities of simulated random medical imaging styles on the training set, making our work more computationally efficient when compared with other style transfer methods. Moreover, our method enables arbitrary style transfer: transferring images to styles unseen in training. This is useful for medical imaging, where images are acquired using different protocols and different scanner models, resulting in a variety of styles that data may need to be transferred between. Methods: Our model disentangles image content from style and can modify an image's style by simply replacing the style encoding with one extracted from a single image of the target style, with no additional optimization required. This also allows the model to distinguish between different styles of images, including among those that were unseen in training. We propose a formal description of the proposed model. Results: Experimental results on breast magnetic resonance images indicate the effectiveness of our method for style transfer. Conclusion: Our style transfer method allows for the alignment of medical images taken with different scanners into a single unified style dataset, allowing for the training of other downstream tasks on such a dataset for tasks such as classification, object detection and others.
Shixing Cao, Nicholas Konz, James Duncan, Maciej A. Mazurowski
2023-01-05T13:59:59Z
http://arxiv.org/abs/2301.02069v1
# Deep Learning for Breast MRI Style Transfer ###### Abstract **Purpose:** In this work we introduce a novel medical image style transfer method, StyleMapper, that can transfer medical scans to an unseen style with access to limited training data. This is made possible by training our model on unlimited possibilities of simulated random medical imaging styles on the training set, making our work more computationally efficient when compared with other style transfer methods. Moreover, our method enables arbitrary style transfer: transferring images to styles unseen in training. This is useful for medical imaging, where images are acquired using different protocols and different scanner models, resulting in a variety of styles that data may need to be transferred between. **Methods:** Our model disentangles image content from style and can modify an image's style by simply replacing the style encoding with one extracted from a single image of the target style, with no additional optimization required. This also allows the model to distinguish between different styles of images, including among those that were unseen in training. We propose a formal description of the proposed model. **Results:** Experimental results on breast magnetic resonance images indicate the effectiveness of our method for style transfer. **Conclusion:** Our style transfer method allows for the alignment of medical images taken with different scanners into a single unified style dataset, allowing for the training of other downstream tasks on such a dataset for tasks such as classification, object detection and others. ## Introduction The same object can be depicted in an image in different styles. For example, a building can be shown in a photograph, a painting by a specific artist, or a sketch. Within the field of medical imaging, different styles manifest as data obtained by different scanner models and/or manufacturers. Deep learning, a subfield of artificial intelligence based on _artificial neural networks_, has demonstrated an exceptional ability of solving image analysis problems. However, such a difference in style can be detrimental to these methods because it violates their common assumption that training and testing data possess the same style [1]. Style transfer methods, such as the one introduced in this paper, were proposed to address this problem in deep learning. Style transfer is a methodology that aims to preserve the consistency of the content of an image while changing the visual "style". Building upon this, _Arbitrary_ style transfer aims to transfer images to new styles _unseen_ in training, during which the content of the image can be transferred to the new style with minimal or zero additional model optimization. Preserving content is crucial in the medical imaging field because it is very important to ensure that underlying anatomical structure is preserved throughout the transformation process, and changing it could negatively impact the accuracy of diagnosis. This task of unseen style transfer is important to develop for use within the medical setting. As an example: consider the case of one hospital, Hospital A, having MRI data of one style, Style A (e.g. GE scanner). Now, Hospital A receives certain data from another Hospital, Hospital B, of unknown or unseen style, Style B (e.g. Siemens scanner). 
If Hospital A wishes to use a model trained at Hospital B on images of Style B, but on their own Style A data, even if Hospital B only provides one or two images of Style B to Hospital A, our model could be used to extract the style code of Style B, and transfer all of Hospital A's Style A data to Style B, allowing Hospital A to use Hospital B's model on its own data. Section 2.2 involves an experiment that explores this exact scenario, where our model, StyleMapper, is used to transfer images of one MRI style to another MRI style unseen in training. At a high level, our method learns to extract informative disentangled numerical representations of style and anatomical content of images. These representations, or style and content _codes_, are obtained by inputting an image to a trained style encoder and content encoder. A pair of style and content codes, possibly from different images, can then be combined via a _decoder_ to synthesize a new image that contains the encoded content, but in the style described by the style code; both the encoders and the decoder are neural networks. We introduce our model beginning with Section 1.2.1, which introduces our method of training our model to extract style and content codes from both raw image data and images transferred to simulated styles. The simulated style images are created by applying randomly sampled image transformation functions to raw images; these transformations are well-representative of the range of many styles/scanner types seen within medical imaging. In this way, the style transfer model sees a different style at each iteration of training. Because the image transformation functions have continuously-random parameters, the model can observe practically unlimited distinct styles during training, giving the style encoder more styles to learn from. This characteristic allows the model to be trained on fewer datapoints, as a single datapoint can be "reused" with a different style at each occurrence of that datapoint in training. We proceed in Section 1.2.2 to introduce further key components of StyleMapper, including (1) the image style/content encoders and decoder, (2) the use of both raw and transformed images at each iteration of training to further encourage consistent style/content encoding and decoding operations, (3) image and style/content code reconstruction terms, and (4) a novel _cross-domain reconstruction triplet loss_ term that is used to encourage further generalization ability of the encoders and decoders for style transfer. We note that this method of generating unlimited possible style images in training could be adapted to train many other applicable style transfer methods within the medical imaging domain. In Section 2 we explore experiments run with StyleMapper on the tasks of transferring test data to new target styles both simulated (Section 2.1) and real (Section 2.2), all while being trained on just 528 datapoints. We then discuss the limitations of this work in Section 2.3, and conclude with Section 2.3. The contribution of this paper can be summarized as follows. 1. We introduce a new method for training arbitrary style transfer models with limited data within the medical imaging domain. 2. We propose a new disentangled-representation learning style transfer model that uses this method, included a novel loss component. 3. We demonstrate arbitrary style transfer and style discrimination on breast MRIs with our method, with both real and simulated medical imaging styles. 
## Related Works ### Style Transfer in Natural Images Style transfer research in deep learning has often focused on transferring natural images to artistic styles. The seminal work of [2] surmounted the goal of arbitrary style transfer by leveraging the feature-extracting behavior that is intrinsic to convolutional neural networks [3]. In that work, the transfer between styles for a test image was performed by aligning the style information of the image with the information of the target style image. Later works expanded upon this idea with improvements such as greatly increased transfer speed [4], the accounting for cross-channel correlations within image feature maps [5], a closed form solution to the task [6], improved and more diverse stylization [7], and generally more robust stylization and content preservation [8; 9]. However, these models are trained on tens of thousands of content (and in some cases, style) examples, via content and style datasets such as MS-COCO [10] and WikiArt [11], respectively. This is not a problem within the aforementioned models' original context of _artistic_ style transfer, but if we wish to switch to the medical domain, obtaining similar quantities of usable, standardized training data can be very difficult in practice. As such, we wish to develop an arbitrary style transfer method to be used within the medical imaging domain that can be trained on the lower end of typical sizes of many medical imaging datasets: only a few hundred images. ### Style Transfer in Medical Images While a large portion of style transfer research focuses on artistic style transfer, there is still a rich literature of style transfer methods specialized for the domain of medical imaging. Following the many-to-many mapping approach of [12], works such as [13; 14] explore the adaptation of unpaired images across medical modality domains (CT and MR scans), also trained to utilize a shared content space with invertible mappings between image, style and content spaces. However, a key limitation of such methods is that they require observing the target test style in training and/or explicitly modifying the model architecture whenever an additional style is desired to be learned from and generalized to, something that StyleMapper does not require. Other models have been built to automatically standardize different MRI image types (e.g. created by different manufacturers) _without_ explicitly providing knowledge of the underlying scanner technology that was used to generate the image, such as [15], which used piecewise-linear mapping to normalize intensities across different anatomical regions. The study in [16] approaches the task of translating between different MRI modalities using Conditional GANs ([17]) and paired data. Unlike our approach and the other aforementioned disentangled representation-learning approaches, this method does not rely on the consistency of translating across image, content and style domains; instead it directly maps from one image space to another. Whereas our method forms separate estimates of style and content of test images, e.g. allowing for the interchange of styles for a fixed-content image, such style/content disentangling ability is not present in these works. The work of [18] uses CycleGANs ([19]) to learn normalization between breast MRIs of two different manufacturers, along with the addition of a mutual information loss term and a modified discriminator to ensure the consistency of intensity, noise and anatomical structure. 
As compared to traditional CycleGAN applications, the modifications of this method allow for training upon unpaired data, a philosophy that we follow due to the fact that cross-domain paired (medical) data is generally more scarce than unsorted data. However, this method is different from ours in that it cannot transfer to new styles unseen in training. Similar to our goal of transferring images to a single fixed style is the work of [1], which focuses on style transfer within the domain of 3D cardiovascular MRIs. Style transfer is performed in this work using hierarchical clustering methods to best map test images to the domain of training images, given the results of extracting features from various inner layers of a VGG-16 network. In this work, a test image is mapped to the most similar image out of the utilized training set using the Wasserstein distance, with style mapped according to a "style library" learned during training. A limitation of this is that rather than learning a fixed set/library of styles from the training set, we attempt to generalize our style encoder to be more flexible, and work on images of _unseen_ target styles that may only be vaguely similar to those seen in training. ## 1 Materials and Methods ### Dataset For this work, we experimented with 628 breast MR (Magnetic Resonance) images taken from 628 different breast cancer patients with a GE Healthcare MRI machine, obtained from the Breast Cancer DCE-MRI dataset of [20]. All images have a \(512\times 512\) resolution, and were pre-processed by assigning the top 1% of pixel values in the entire dataset to a value of 255, followed by linearly scaling the remaining pixel intensities to the 0-255 range, giving the data the same "raw" style. 528 datapoints were used to produce the training set, 50 were kept as a test set, and the other 50 were used for validation. 25 similarly preprocessed images from a Siemens scanner were also used in Section 2.2. We describe the details of creating our specific dataset in Appendix A. ### Methods In this section we will introduce a modified domain adaptation model [13] and its evolution to our proposed model, StyleMapper. #### The Modified Baseline Model We begin with an unsupervised Domain Adaptation, Disentangled Representation-learning (DADR) model that can map between two different image domains by disentangling style and content representations within both of the domains [12; 13]. In particular, given images \(X_{1}\) and \(X_{2}\) from different domains \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\), respectively, the model can learn the representation of an image \(X_{i}\) within a style space \(\mathcal{S}_{i}\) and a content space \(\mathcal{C}_{i}\) (\(i=1,2\)), described respectively via a style code \(s_{i}\) and a content code \(c_{i}\). We label this model as the _baseline model_. #### Diverse Styles via Image Transformation Functions The baseline model performs well with style transfer to a set of discrete styles seen in training, but we wish to extend the work to transfer from an image of some style to an _arbitrary_ target style that is unseen in training. This is beneficial in the medical imaging field because many styles exist that may be desired to be transferred to, some of which have limited available data. As such, we propose to train the model on both raw images and style-transferred versions of the same images, with these styles simulated via random _image transformation functions_ that act on the raw images. 
In this way, the model learns to both distinguish between and extract different styles while keeping content unchanged. This not only allows the model to adapt to a wider range of possible styles, but also allows the model to learn from fewer datapoints, because a single datapoint can be seen with a variety of distinct styles at different training iterations. In order to generate diverse styles for our model to learn, we use seven classes of some of the most common image intensity transformations [21]: (1) the linear transformation, (2) the negative transformation, (3) the log transformation, (4) the power-law (gamma) transformation, (5) the piecewise-linear transformation, and (6,7) the Sobel X and Y operators. At each training step, one of the seven transformations is randomly selected to change a raw image to a new style. Although not very representative of the many possible _artistic_ styles of traditional style transfer works, we believe that the simulated styles described by the application of these random transformations, which manifest as generally nonlinear changes in pixel intensities, are a good proxy for many possible styles seen in medical imaging, which do not vary nearly as drastically as artistic styles do. We provide example images of each class of transformation in Figure 1. The first five of these transformations are _parametrically randomized_: when selected, the transformation function randomly selects its parameters from some pre-determined distribution. This allows the exact transfer function to be previously unseen at each training iteration. The two-step randomized transformation function selection allows the model to extract codes corresponding to a practically unlimited range of distinct styles during training and to boost the style encoder's generalization ability and robustness at test time. We provide the explicit formulae and probabilistic schema for generating the parameters of these transformation functions, as well as visual examples of them, in Appendix B. We also take this simulated style approach for our experiments so that a "ground truth" deterministic transferred image can be directly compared to the neural style transfer result. It is important to consider that the first five transformation functions are all _invertible_, meaning that given an output pixel value \(I_{\text{out}}\), we can deterministically map \(I_{\text{out}}\) back to its corresponding input pixel value \(I_{\text{in}}\), implying that no information is lost through these transformations. The Sobel X and Y operators, however, could introduce information loss because of the additive nature of the convolution operation. In practice, because of the high resolution of the images, we assume that such an operation will only slightly affect the overall global content of the images, and thus decide to include the Sobel operators in the pool of possible transformations. Experiments training without the Sobel operators further confirmed this. We do not "randomize" the Sobel transformations during training due to the more severe nature of these transformations as compared to the first five. _Modified Baseline Model Architecture_ A precursor of our final model, the _modified_ baseline model consists of two main components: (1) content and style encoders for obtaining content and style representations, or _codes_, of images and (2) generators/decoders for mapping content-style code pairs back to the space of images.
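To make the two-step randomized style sampling of Section 1.2.1 concrete (this sampled transformation \(T\) is what the parallel raw/transformed training pipelines described next consume), a minimal sketch is given below. The specific parameter ranges and the [0, 1] intensity scaling are illustrative assumptions, not the settings given in Appendix B of this paper.

```python
import numpy as np
from scipy import ndimage

def sample_random_style(rng: np.random.Generator):
    """Return a randomly parameterized intensity transformation T.

    Images are assumed to be float arrays scaled to [0, 1]; the parameter
    ranges below are illustrative placeholders, not the paper's settings.
    """
    kind = rng.choice(["linear", "negative", "log", "gamma",
                       "piecewise", "sobel_x", "sobel_y"])
    if kind == "linear":
        a, b = rng.uniform(0.5, 1.5), rng.uniform(-0.1, 0.1)
        return lambda img: np.clip(a * img + b, 0.0, 1.0)
    if kind == "negative":
        return lambda img: 1.0 - img
    if kind == "log":
        c = rng.uniform(0.5, 2.0)
        return lambda img: np.log1p(c * img) / np.log1p(c)
    if kind == "gamma":
        gamma = rng.uniform(0.3, 2.5)
        return lambda img: img ** gamma
    if kind == "piecewise":
        # Random knot (x0, y0) defining a two-segment piecewise-linear map.
        x0, y0 = rng.uniform(0.2, 0.8), rng.uniform(0.2, 0.8)
        return lambda img: np.where(img < x0, img * (y0 / x0),
                                    y0 + (img - x0) * (1 - y0) / (1 - x0))
    axis = 0 if kind == "sobel_x" else 1  # Sobel operators are not randomized.
    return lambda img: ndimage.sobel(img, axis=axis)

# Each training step draws a fresh transformation and applies it to the raw pair.
rng = np.random.default_rng(0)
T = sample_random_style(rng)
# X1, X2 = ...  two raw images of the same "raw" style
# batch = (X1, X2, T(X1), T(X2))
```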
In this model two encoder\(\rightarrow\)decoder\(\rightarrow\)encoder pipelines run in parallel: one for a raw image, and the other for the transformed version of the same image, where the image transformation is randomly chosen from one of the seven transformations described in Section 1.2.1. The model is trained with objectives that enforce reconstruction where applicable both in-domain and cross-domain, as well as adversarial objectives that ensure that the translated images are indistinguishable from real images in the target domain. Figure 1: A comparison of the effects of the seven different image transformation functions that we use (Section 1.2.1) on a DCE-MRI breast scan, with randomized transfer function parameters fixed to the means of their sampling distributions. #### Central Model: StyleMapper Using the modified baseline model of the previous section as a starting point, we created a custom style transfer model which we name StyleMapper (Figure 2). We will next outline (1) the novel components of StyleMapper and (2) the main differences between StyleMapper and the modified baseline model, both in the architecture and in the training procedure/loss function. #### General Features **Multiple Data Pairs per Training Step.** As opposed to the modified baseline model, StyleMapper uses _two_ raw images per training iteration: a pair of distinct raw data \((X_{1},X_{2})\), and the results of applying the same random image transformation \(T\) to that pair, \((T(X_{1}),T(X_{2}))\); \(T\) is resampled for each pair \((X_{1},X_{2})\) at each training step. This implies that \(X_{1}\) and \(X_{2}\) should have the same style but different content, \(X_{i}\) and \(T(X_{i})\) should have the same content but different style for \(i=1,2\), and \(T(X_{1})\) and \(T(X_{2})\) should have the same style yet different content. As will be shown shortly, consistency of both content and style encoding can then be enforced through reconstruction constraints on different pairings of these four images that should have the same content or style, respectively, further encouraging the encoders and decoder to work across diverse domains. Figure 2: **StyleMapper**: Our novel architecture used for style transfer. Solid arrows indicate encoding operations, and dashed lines indicate pairs of codes that should be optimally equivalent (Equation (3)), with the model trained to achieve this. The decoder/generator \(G\) is not pictured, as it receives various combinations of all of the pictured style and content codes as input. **Unifying the Encoders and Decoders.** In the modified baseline model, separate content and style encoder/decoder groups are trained corresponding to each of the input images and their corresponding transformed versions. We must consider the possibility of inconsistencies between the members of each of these pairs of networks; to account for this, we switch to using single encoders for content and for style, \(E^{c},E^{s}\), and a single decoder/generator \(G\), allowing for a potential boost in style transfer generalizability and a simpler model. **Most Representative Style Code.** Upon inference, we introduce a fixed most representative style code \(\overline{s}\), instead of a target style code (as in [13]), to map input images to the target style. \(\overline{s}\) is defined as the style code that is closest on average to all of the style codes of that test set.
Specifically, for each of the \(N\) images \(X_{i}\) in the target style test set, record the style code \(s_{i}\) obtained from the trained model's style encoder. If \(N>1\), we then compute the most representative style code as \[\overline{s}=s_{k}\quad\text{where}\quad k=\underset{i}{\text{argmin}}\;\frac{1}{N-1}\sum_{j\neq i}\text{MAE}(s_{i},s_{j}), \tag{1}\] where MAE\((\cdot,\cdot)\) is the mean absolute error function. In the case of \(N=1\), \(\overline{s}\) is simply the style code obtained from the single target style image. This task is labeled as few-shot style transfer. In Section 2.1, we show that \(N=1\) is sufficient for successful style transfer, meaning that our method is compatible with one-shot learning. #### Training Loss Terms **Image Reconstruction Loss.** The model should be able to reconstruct an image sampled from the data distribution after encoding and decoding. To achieve this, the style and content encoders \(E^{s},E^{c}\) and decoder \(G\) are trained to minimize the mean absolute error/\(L_{1}\) distance between reconstructed images and original images, via the image reconstruction loss from [13] of \[\mathcal{L}_{\text{recon}}(X_{1})=\underset{X_{1}\sim p(X_{1})}{\mathbb{E}}\left[\left\|G\left(E^{c}(X_{1}),E^{s}(X_{1})\right)-X_{1}\right\|_{1}\right], \tag{2}\] where \(p(X_{1})\) is the distribution of \(X_{1}\) data. Further image reconstruction loss terms \(\mathcal{L}_{\text{recon}}(X_{2})\), \(\mathcal{L}_{\text{recon}}(T(X_{1}))\) and \(\mathcal{L}_{\text{recon}}(T(X_{2}))\) for \(X_{2}\), \(T(X_{1})\) and \(T(X_{2})\) are then defined analogously. We note that when building StyleMapper from the baseline model, we removed the discriminator because the generator can be trained solely with the image reconstruction loss, and we do not need the discriminator for any classification role. As such, the adversarial loss term of the modified baseline model ([13]) is not present for StyleMapper. **Latent Reconstruction Loss.** We wish to encourage translation and reconstruction across diverse domains of style and content. One characteristic of StyleMapper that differs from the modified baseline model is that the (encode\(\rightarrow\)decode\(\rightarrow\)encode) progression used in the modified baseline model for latent space reconstruction is reduced to (encode\(\rightarrow\)decode), such that we no longer train for latent space consistency in this manner. Instead, we enforce these latent embedding consistency requirements with style and content reconstruction loss terms, adapted from [13], that are defined respectively as \[\begin{split}\mathcal{L}_{\text{same}_{s}}&=\mathbb{E}\left[\left\lVert E^{s}(X_{1})-E^{s}(X_{2})\right\rVert_{1}\right]\\ \mathcal{L}_{\text{same}_{s:T}}&=\mathbb{E}\left[\left\lVert E^{s}(T(X_{1}))-E^{s}(T(X_{2}))\right\rVert_{1}\right]\\ \mathcal{L}_{\text{same}_{c:X_{i}}}&=\mathbb{E}\left[\left\lVert E^{c}(X_{i})-E^{c}(T(X_{i}))\right\rVert_{1}\right],\end{split} \tag{3}\] with \(i=1,2\). These constraints can be seen via the dashed lines in Figure 2.
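As a rough illustration of how the reconstruction terms of Equation (2) and the latent-consistency terms of Equation (3) fit together in one training step, a PyTorch-style sketch is given below. The encoder and decoder modules (here named E_c, E_s, G) and the per-batch averaging are assumptions for illustration rather than the paper's exact implementation.

```python
import torch

def l1(a, b):
    # Mean absolute error between two tensors.
    return torch.mean(torch.abs(a - b))

def reconstruction_and_consistency_losses(E_c, E_s, G, X1, X2, T):
    """Sketch of Equations (2)-(3): image reconstruction and latent consistency.

    E_c, E_s, G are assumed content encoder, style encoder and decoder modules;
    T is the intensity transformation sampled for this training step.
    """
    TX1, TX2 = T(X1), T(X2)
    images = [X1, X2, TX1, TX2]

    # Equation (2): each image should survive an encode -> decode round trip.
    loss_recon = sum(l1(G(E_c(x), E_s(x)), x) for x in images)

    # Equation (3): same-style pairs share style codes, and a transformed image
    # keeps the content code of its raw counterpart.
    loss_same_s = l1(E_s(X1), E_s(X2)) + l1(E_s(TX1), E_s(TX2))
    loss_same_c = l1(E_c(X1), E_c(TX1)) + l1(E_c(X2), E_c(TX2))
    return loss_recon, loss_same_s, loss_same_c
```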
**Cross-Domain Reconstruction Triplet Loss.** We include a _cross-domain reconstruction triplet loss_ term, to encourage content and/or style reconstruction given twelve certain combinations of the images \(X_{1},X_{2},T(X_{1})\) and \(T(X_{2})\), as \[\mathcal{L}_{\text{cross}}=\sum_{(p_{1},p_{2},p_{3})\in\mathcal{P}}\mathbb{E} \left[\left\lVert G\left(E^{c}(p_{1}),E^{s}(p_{2})\right)-p_{3}\right\rVert_{1 }\right], \tag{4}\] where \(\mathcal{P}\) is the set of twelve triplets constructed from \(p_{1},p_{2},p_{3}\in\{X_{1},X_{2},T(X_{1}),T(X_{2})\}\) by the condition \[\mathcal{P}=\{(p_{1},p_{2},p_{3}):E^{c}(p_{1})=E^{c}(p_{3}),E^{s}(p_{2})=E^{s}( p_{3})\} \tag{5}\] (note that \(p_{1},p_{2}\) and \(p_{3}\) don't necessarily have to be different images). This loss term is important for training the encoders and decoder to have flexible and generalizable performance across domains, and is written explicitly in Appendix C. We now come to the full loss function that is minimized to train StyleMapper, \[\begin{split}\mathcal{L}_{\text{StyleMapper}}&= \lambda_{\text{recon}}\left[\mathcal{L}_{\text{recon}}(X_{1})+\mathcal{L}_{ \text{recon}}(X_{2})\right.\\ &+\left.\mathcal{L}_{\text{recon}}(T(X_{1}))+\mathcal{L}_{\text {recon}}(T(X_{2}))\right]\\ &+\lambda_{\text{same}_{s}}\left(\mathcal{L}_{\text{same}_{s}}+ \mathcal{L}_{\text{same}_{e:T}}\right)\\ &+\lambda_{\text{same}_{c}}\left(\mathcal{L}_{\text{same}_{c:X_ {1}}}+\mathcal{L}_{\text{same}_{c:X_{2}}}\right)\\ &+\lambda_{\text{cross}}\mathcal{L}_{\text{cross}},\end{split} \tag{6}\] where \(\lambda_{\text{recon}}\), \(\lambda_{\text{cross}}\), \(\lambda_{\text{same}_{s}}\), and \(\lambda_{\text{same}_{c}}\) are loss weight hyperparameters. We will now proceed to implementational and training details in the next section, followed by our experimental results in Section 2. #### Implementational Details **Network Architecture.** We build off of the MUNIT model of [12]. Content encoders consist of several strided convolutional layers to downsample the input, and several residual blocks to further process it. All convolutional layers are followed by Instance Normalization (IN) modules [22]. Style encoders include several strided convolutional layers, followed by a global average pooling layer and a fully connected (FC) layer. We do not use IN layers in the style encoder, since IN removes the original feature mean and variance that represent important style information. **Network Training.** We use the Adam optimizer [23] to train StyleMapper by minimizing the loss (Equation (6)), with weight decay strength of 0.0001, \(\beta_{1}=0.5\) and \(\beta_{2}=0.999\). Kaiming's Method [24] was used to initialize model weights, and we trained with a learning rate of 0.0001. We used a batch size of 1 (due to memory limitations), training until no further minimization of the loss term(s) was observed (a minimum of about 10,000 iterations was needed in essentially all cases). For the loss weights, we used values of \(\lambda_{\text{recon}}=10\), \(\lambda_{\text{same}_{e}}=\lambda_{\text{same}_{s}}=5\) and \(\lambda_{\text{cross}}=1\). Additional hyperparameters from MUNIT are unchanged from their settings in that work. We train our models until we observe loss convergence, assisted by validation via the MAE residuals between style transferred image results through learning style mapping (our model) and transferred image results through direct image transformations (the "ground truth" to compare to). 
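A minimal sketch of assembling the full objective of Equation (6), including the construction of the cross-domain triplets of Equation (5), is shown below; it reuses the helper functions from the previous sketch. Excluding the four plain reconstruction triplets (already covered by Equation (2)) to arrive at twelve combinations is our assumption, not stated explicitly in the text.

```python
def stylemapper_loss(E_c, E_s, G, X1, X2, T,
                     lam_recon=10.0, lam_same_s=5.0, lam_same_c=5.0, lam_cross=1.0):
    """Sketch of the full StyleMapper objective of Equation (6)."""
    TX1, TX2 = T(X1), T(X2)
    imgs = {"X1": X1, "X2": X2, "TX1": TX1, "TX2": TX2}
    # Content/style equivalence classes implied by how the batch is built:
    # X1 ~ TX1 and X2 ~ TX2 share content; X1 ~ X2 and TX1 ~ TX2 share style.
    content_of = {"X1": "A", "X2": "B", "TX1": "A", "TX2": "B"}
    style_of = {"X1": "raw", "X2": "raw", "TX1": "T", "TX2": "T"}

    # Equations (4)-(5): triplets (p1, p2, p3) with content(p1) == content(p3)
    # and style(p2) == style(p3); their reconstruction errors are summed.
    loss_cross = 0.0
    for k1 in imgs:
        for k2 in imgs:
            for k3 in imgs:
                if k1 == k3 and k2 == k3:
                    continue  # plain reconstructions assumed covered by Eq. (2)
                if content_of[k1] == content_of[k3] and style_of[k2] == style_of[k3]:
                    loss_cross = loss_cross + l1(G(E_c(imgs[k1]), E_s(imgs[k2])),
                                                 imgs[k3])

    loss_recon, loss_same_s, loss_same_c = \
        reconstruction_and_consistency_losses(E_c, E_s, G, X1, X2, T)
    return (lam_recon * loss_recon + lam_same_s * loss_same_s
            + lam_same_c * loss_same_c + lam_cross * loss_cross)
```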
All computations are performed with an NVIDIA QUADRO M6000 24GB GPU. ## 2 Results ### One-shot Style Transfer I: Simulated Styles The core goal of our StyleMapper model is to be able to transfer a test image to some unseen target style while preserving content. To test this, we train StyleMapper following Equation (6) on all image transformation function-s/styles in Section 1.2.1, but _excluding_ a particular class of transformation \(T\) (with parameters fixed) from the pool of possible transformations, to be used as a target style. After training, we apply \(T\) to the first 25 of the test set to obtain \(\{T(X_{\text{target}})\}\), and then extract the content and style codes of each of these \(T(X_{\text{target}})\) using the style encoder to obtain codes \(\{s_{\text{target}}^{T}\}\). Finally, we obtain a most representative style code \(\overline{s}^{T}\) for the target style by applying Equation (1) to some \(N_{\text{target}}\) of these 25 style codes, to judge how many target style images the model needs to observe to compute a useful target style code. We evaluate the ability of the style encoder to extract the correct style code from the target style test images by taking the remaining 25 of the test set \(\{X_{\text{test}}\}\), transferring these images to the target style via \(\overline{s}^{T}\) to obtain \(\{X_{\text{test}}^{s:T}\}\), and comparing these to the "ground truth" of transformed images \(\{T(X_{\text{test}})\}\). In particular, the content encoder extracts content codes from the images \(\{X_{\text{test}}\}\), and the generator/decoder \(G\) takes each of these content codes with the target style code \(\overline{s}^{T}\) to synthesize the transferred images \(\left\{G(c_{\text{test}},\overline{s}^{T}):c_{\text{test}}\in\{c_{\text{test}} \}\right\}=\{X_{\text{test}}^{s:T}\}\). We do this comparison using the MAE between \(\{X_{\text{test}}^{s:T}\}\) and \(\{T(X_{\text{test}})\}\). We also note that for better performance comparison between styles, for a given style we normalize all MAEs across the different \(N_{\text{target}}\) values by dividing each MAE by \(\text{MAE}\left(\{X_{\text{test}}\},\{T(X_{\text{test}})\}\right)\). We will explore examples of this by transferring to (1) a target style that is fairly similar to those seen in training-the log transformation for a model trained on all transformations _but_ log, and the same but for the gamma/power-law transformation-and (2) a target style that is distinct from the styles/transformation seen in training (the exponential function \(\exp(\cdot)\)). We test these on a range of \(N_{\text{target}}\) to see if the most representative target style computation is dependent on the quantity of target style data seen by the style encoder. To begin we examine a log target style, which we define via the logarithmic intensity transfer function with parameter fixed to its average value (Equation (B3) in Appendix B). Next we perform the same experiments with a power-law target style, via the power-law transfer function with exponential parameter fixed to \(\tilde{\gamma}=0.5\) (Equation (B4) in Appendix B). Finally, we test a target style described by the exponential transfer function equation \(T(I_{\text{in}})=a\exp(bI_{\text{in}})\) with \(a=2.3,b=0.02\), on a style encoder trained on _all_ of the parametrically-random styles of Section 1.2.1. 
In this case, we have a style with a transfer function curve that is not as similar to any of the possible randomized transfer curves seen by the style encoder during training as in the first two examples, where the target log and power-law transformations had the possibility of being similar to certain settings of the randomized power-law and log transformations seen in training, respectively. The qualitative and quantitative results of these experiments are shown in Figures 3 and 4, respectively. We see that style transfer performance described by the MAE between \(\{X_{\text{test}}^{s:T}\}\) and \(\{T(X_{\text{test}})\}\) is mostly independent to \(N_{\text{target}}\), implying that only a single target style image is needed by the style encoder to perform style transfer. In particular, for this one-shot case, we have MAEs for the log, gamma and exp styles of 0.2595, 0.3902 and 0.3601, respectively. As explored in Appendix D, indeed the style codes extracted from different images of one same style are almost identical, implying that the most-representative style code obtained from aggregating \(N_{\text{target}}\) of these individual codes will be almost the same as any one of them. The behavior of the style encoder over a range of styles, as well as the structure of the style codes themselves, are worth exploring. Although beyond the scope of the main body of this work, we explore both how extracted style codes differ between (1) different styles and (2) different images of the same style in Appendix D. ### One-shot Style Transfer II: MRI Scanner Styles We will now explore the ability of StyleMapper to transfer images to a new medical scanner style that is unseen in training, in particular the real style Siemens MR scanners, as GE scanner data was used to train the model, while Siemens data has never been observed. We performed one-shot style transfer on the same set of 25 raw GE scanner images as in the previous section, with a StyleMapper trained on all of the randomized styles (Section 1.2.1), and a single Siemens scan image used to obtain the target style. Example results of this are shown in Figure 5. Given that there is no "ground truth" to compare the transferred results to as in the previous section of simulated target styles, we believe these results to be strong given the fact that certain stylistic characteristics of Siemens scans as compared to GE scans-such as Siemens on average appearing to be slightly brighter than GE-appear in the transferred results. Figure 4: **One-shot style transfer with various target styles: Quantitative Results.** See Section 2.1. Mean absolute error (MAE) between style transferred images \(\{X_{\text{test}}^{sT}\}\) and “ground truth” transformed images \(\{T(X_{\text{test}})\}\), indicating performance of style transfer, with respect to number of target style images \(N_{\text{target}}\) used to compute the most representative target style code that is used to perform style transfer. Accompanying qualitative results in Figure 3. Figure 3: **One-shot style transfer with various target styles: Qualitative Results.** See Section 2.1. Transferring a set of 25 MR test images \(\{X_{\text{test}}\}\) (top row) to different target styles not seen in training \(\{X_{\text{test}}^{s:T}\}\) (bottom row), with target style code obtained from a **single test image of the target style**. 
The transferred images are compared to the “ground truth” \(\{T(X_{\text{test}})\}\) (middle row) of the images directly transformed by the target style’s corresponding transformation function \(T\). From left to right, the target styles/transformations are the fixed log, gamma/power-law and exp transformations, as described in Section 2.1, and for each style, three random images are visualized. Accompanying quantitative results in Figure 4. We can also use the same StyleMapper to distinguish between these two styles of GE and Siemens. To do this, we use the style encoder of StyleMapper to extract style codes from 25 GE and 25 Siemens images (all with different content). Next, we performed dimensionality reduction on these 8-dimensional style code vectors via principal component analysis (PCA) to map them to \(\mathbb{R}^{2}\), and trained a support vector classifier (SVC) with a radial basis function kernel to discriminate between the two classes of points [25]. As shown in Figure 6, the style encoder is usually able to discriminate successfully between the two styles via the differences in their encodings, with an SVC accuracy of 88.0%. Figure 5: **MRI Style Transfer to Unseen Scanner Style.** Results (right column) of transferring GE scanner MR Images (center column) to the Siemens scanner style unseen in training (left column). Figure 6: **Discriminating a realistic unseen style.** Using a StyleMapper style encoder we extracted style codes of a set of unpaired MR scans of two different manufacturers, GE and Siemens, with the latter style previously unseen by the model. Pictured are these style codes embedded into \(\mathbb{R}^{2}\), and the decision boundary learned by training a support vector classifier (SVC) on them. Classification accuracy: 88.0%. Figure recommended to be viewed in color. ### Ablation Study: Finite Set of Training Styles In order to show the necessity of using parametrically-randomized transformations as training styles, rather than the fixed transformations tested in Appendix D, we will attempt one of the same few-shot style transfer experiments of Section 2.1, but with a style encoder trained only on these fixed transformations. In other words, the former configuration allows the style encoder to see a continuous range of styles in training (technically a new particular style at every iteration, excluding the Sobel transformations), while the latter only gives the style encoder a discrete, finite set of possible styles to learn to extract style codes from, a problem that is exacerbated when only limited training data is present. We will repeat the experiment with the same fixed-parameter log target style as in Section 2.1, but with a model trained on _fixed versions_ of all of the other six transformations, with parameters fixed to their average values (Appendix D), except for the power-law function, which is fixed the same as in the target power-law style experiment of Section 2.1. The failure of using this finite set of transformations is seen when comparing the one-shot transfer MAE between the transformed “ground truth” and the transferred result, of 0.2778, to the value of 0.1178 for the non-ablated case (Section 2.1). Just as in the compared experiment, we found that the MAE here was not improved by increasing \(N_{\text{target}}\), the number of target style images used by the style encoder to estimate the target style code used to perform transfer.
We note that we observed significantly more noise on MAE values about the one-shot transfer MAE with respect to \(N_{\text{target}}\) than in the other experiment, indicating that the style encoder was not nearly as consistent as for the case of it being trained on parametrically-random transformations, extracting erroneous, but different codes from the target style images. ## Discussion One limitation of our work is that in order to successfully estimate the correct target style code, test target styles usually need to be at least somewhat similar to the styles seen in training; not identical, but also not completely orthogonal. Target styles that are very distant from those seen in training, that require the model to perform large amounts of _extrapolation_, not just _interpolation_, give more trouble. We focused on training the model on simulated styles described by intensity transfer functions, in order to focus on content-preserving medical imaging styles and to facilitate training on a small dataset. However, an important future endeavor will be to explore how to train and test the model on non-medical images of more diverse styles, to see how well it can generalize to these situations, while potentially maintaining the requirement for only limited data. ## Conclusions In this work we introduced a novel medical image style transfer method, StyleMapper, that can transfer images to a new target style unseen in training while observing only a single image of this style at test time, and can be successfully trained on limited amounts of single-style data. We explored the applications of StyleMapper to both style transfer and the classification of unseen styles. Supplementary information.This article has accompanying supplementary appendices, that describe details of the dataset, the image transformation functions/training styles, additional details for the novel loss function, and further experiments.
2305.11290
Massively Scalable Inverse Reinforcement Learning in Google Maps
Inverse reinforcement learning (IRL) offers a powerful and general framework for learning humans' latent preferences in route recommendation, yet no approach has successfully addressed planetary-scale problems with hundreds of millions of states and demonstration trajectories. In this paper, we introduce scaling techniques based on graph compression, spatial parallelization, and improved initialization conditions inspired by a connection to eigenvector algorithms. We revisit classic IRL methods in the routing context, and make the key observation that there exists a trade-off between the use of cheap, deterministic planners and expensive yet robust stochastic policies. This insight is leveraged in Receding Horizon Inverse Planning (RHIP), a new generalization of classic IRL algorithms that provides fine-grained control over performance trade-offs via its planning horizon. Our contributions culminate in a policy that achieves a 16-24% improvement in route quality at a global scale, and to the best of our knowledge, represents the largest published study of IRL algorithms in a real-world setting to date. We conclude by conducting an ablation study of key components, presenting negative results from alternative eigenvalue solvers, and identifying opportunities to further improve scalability via IRL-specific batching strategies.
Matt Barnes, Matthew Abueg, Oliver F. Lange, Matt Deeds, Jason Trader, Denali Molitor, Markus Wulfmeier, Shawn O'Banion
2023-05-18T20:14:28Z
http://arxiv.org/abs/2305.11290v4
# Massively Scalable Inverse Reinforcement Learning in Google Maps ###### Abstract Optimizing for humans' latent preferences is a grand challenge in route recommendation, where globally-scalable solutions remain an open problem. Although past work created increasingly general solutions for the application of inverse reinforcement learning (IRL), these have not been successfully scaled to world-sized MDPs, large datasets, and highly parameterized models; respectively hundreds of millions of states, trajectories, and parameters. In this work, we surpass previous limitations through a series of advancements focused on graph compression, parallelization, and problem initialization based on dominant eigenvectors. We introduce Receding Horizon Inverse Planning (rhip), which generalizes existing work and enables control of key performance trade-offs via its planning horizon. Our policy achieves a 16-24% improvement in global route quality, and, to our knowledge, represents the largest instance of IRL in a real-world setting to date. Our results show critical benefits to more sustainable modes of transportation (e.g. two-wheelers), where factors beyond journey time (e.g. route safety) play a substantial role. We conclude with ablations of key components, negative results on state-of-the-art eigenvalue solvers, and identify future opportunities to improve scalability via IRL-specific batching strategies. ## 1 Introduction Inverse reinforcement learning (IRL) is the problem of learning latent preferences from observed sequential decision making behavior. First proposed by Rudolf Kalman in 1964 (when it went under the name of inverse optimal control [21], and later structural estimation [37]), IRL has now been studied in robotics [1, 33, 31], cognitive science [3], video games [25, 43], human motion behavior [24, 34] and healthcare [20, 49], among others. In this paper, we address a key challenge in all these applications: scalability [7, 28, 46]. With several notable exceptions, IRL algorithms require solving an RL problem at every gradient step, in addition to performing standard backpropagation [14, 39]. This is a significant computational challenge, and necessitates access to both an interactive MDP and a dataset of expert demonstrations that are often costly to collect. Through addressing the scalability issue, we aim to leverage recent advancements in training foundation-sized models on large datasets. To illustrate our claims, we focus on the classic route finding task, due to its immediate practical significance and the availability of large demonstration datasets. Given an origin and destination location anywhere in the world, our goal is to provide routes that best reflect travelers' latent preferences. These preferences are only observed through their physical behavior, which implicitly trade-off factors including traffic conditions, distance, hills, safety, scenery, road conditions, etc. Although we primarily focus on route finding, the advancements in this paper are general enough to find use more broadly. The worldwide road network contains hundreds of millions of nodes and edges. At first glance, even attempting to fit the graph features into memory to compute a single gradient step is infeasible. In this paper, we make a series of contributions which enable solving this world-wide IRL problem. 
Specifically, we introduce * **MaxEnt++** An improved version of MaxEnt IRL [51], which leverages a connection to dominant eigenvectors to initialize the backward pass closer to the desired solution. * **Receding Horizon Inverse Planning (rhip)** A novel approximated IRL algorithm which generalizes MaxEnt++, BIRL and MMP, and enables control of planning time and accuracy via a stochastic planning horizon parameter. * **Parallelization and graph compression strategies** Mathematical approximations, the former of which is necessary to make the world-scale IRL problem tractable, the latter of which provides an additional 2.7x speed-up. The best-performing rhip policy achieves a 15.9% and 24.1% lift in route accuracy for driving and two-wheelers, respectively, and was successfully applied to a large scale setting in Google Maps. To our knowledge, this represents the largest instance of IRL in a real-world setting to date. ## 2 Related Work IRL approaches can be categorized according to the form of their loss function.1 MaxEnt [51] optimizes cross-entropy loss, MMP [32] optimizes margin loss, and BIRL [30] optimizes sequential Bayesian likelihood. LEARCH [33] replaces the quadratic programming optimization in MMP [32] with stochastic gradient descent. Choi and Kim [9] replace the MCMC sampling in BIRL [30] with maximum a posteriori estimation. Extensions to continuous state-action spaces are possible through sampling-based techniques [13]. Our work builds on Wulfmeier, Ondruska, and Posner [44] and Mainprice et al. [27], who applied MaxEnt and LEARCH to the deep function approximator setting. Footnote 1: The IRL route optimization problem is reducible to supervised classification with infinitely many classes, where each class is a valid route from the origin to the destination, the features are specified by the edges, and the label is the demonstration route. Unlike typical supervised learning problems, solving this directly by enumerating all classes is intractable, so IRL approaches take advantage of the MDP structure to efficiently compute loss gradients. Existing approaches to scale IRL consider several orthogonal and often complimentary techniques. Michini, Cutler, and How [28] incorporate real-time dynamic programming [4], which is less applicable with modern accelerators' matrix operation parallelization. Chan and Schaar [7] apply a variational perspective to BIRL, which reduces computational cost but adds considerable algorithmic complexity and requires tuning in comparison to the maximum a posteriori version. MMP is inherently more scalable, as its inner loop only requires calling a planning subroutine (e.g. Dijkstra) [33]. However, it lacks robustness to real-world noise, and has lost favor to more stable and accurate probabilistic policies [29]. Figure 1: Google Maps route accuracy improvements in several world regions, when using our inverse reinforcement learning policy rhip. Full results are in Table 1 and Figure 7. Imitation learning approaches that directly attempt to recover the demonstrator policy have gained increased attention [16, 22]. Behavior cloning avoids performing potentially expensive environment roll-outs, but suffers regret quadratic in the horizon [35]. DAGGER solves the compounding errors problem by utilizing expert corrections [36]. IRL also addresses the compounding errors issue but without repeated expert involvement, although can be expensive as it requires solving the RL problem as a subroutine. 
Recent approaches use the demonstrators distribution to reduce exploration in the RL subroutine [39] and are complementary to our work, which focuses on expensive dynamic programming nearby the demonstration and cheaper planning beyond a fixed horizon. Mixed approaches such as GAIL simultaneously learn both a policy (generator) and reward function (discriminator) [14, 16, 22]. We avoid explicitly learning a policy to bypass instabilities of adversarial training [48] and due to the _goal conditioning_ requirement discussed in Section 3. ## 3 Inverse Reinforcement Learning A Markov decision process (MDP) \(\mathcal{M}\) is defined by a set of states \(\mathcal{S}\), actions \(\mathcal{A}\), transition function \(\mathcal{T}\) and reward function \(r\) (i.e. negative cost function). Given \(\mathcal{M}\setminus r\) and a set of state-action trajectory demonstrations \(\mathcal{D}=\{\tau_{1},\ldots,\tau_{N}\}\) sampled from a demonstration policy, the goal of IRL is to recover the latent \(r\).2 Footnote 2: This is an ill-conditioned problem, as multiple reward functions can induce the same set of trajectories. Methods impose regularization or constraints to induce a unique solution, e.g. the principle of maximum entropy [51]. For expository purposes, we initially restrict our attention to the classic path-planning problem, and discuss extensions in Section 6. In the path-planning context, states are referred to as nodes and allowable transitions between nodes are referred to as edges. We define these MDPs as discrete, deterministic, and undiscounted with non-positive rewards and a single self-absorbing zero-reward destination state \(s_{d}\) in line with prior work [51]. Thus, each unique destination induces a slightly different MDP. However, for the sake of notational simplicity and without loss of generality, we consider the special case of a single origin state \(s_{o}\) and single destination state \(s_{d}\). We do not make this simplification for any of our empirical results. Let \(R_{\theta}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{S}|}\) be the graph's sparse reward matrix, i.e. entry \(R_{\theta}[i,j]=r_{\theta}(s_{i},s_{j})\) denotes the reward of transitioning from \(s_{i}\) to \(s_{j}\) and non-allowable transitions have sparse value \(-\infty\). It can be shown that the loss gradient of the aforementioned IRL methods follows the common form \[\nabla_{\theta}\ell_{i}=\left\langle E_{i}^{*}-E_{i}^{\theta},\nabla_{\theta }R_{\theta}\right\rangle_{F} \tag{1}\] where we refer to sparse \(E_{i}^{*},E_{i}^{\theta}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{S}|}\) as the _target_ and _current_ edge counts, respectively. For MaxEnt and MMP, these correspond to the edge counts of the demonstration and policy (Algorithms 2 and 4). For BIRL, these correspond to the policy's edge counts beginning from each state-action pair or state in the demonstration path (Algorithm 3). All methods consist of a backward pass (which computes a policy) and a forward pass (which rolls out that policy). Goal conditioningLearning a function \(r_{\theta}\) using IRL is a concise representation of preferences and simplifies transfer across goal states \(s_{d}\), as the reward function is decomposed into a general learned term and a fixed modification at the destination (self-absorbing, zero-reward). In the tabular setting, the number of reward parameters is \(\mathcal{O}(SA)\) even when conditioning on \(s_{d}\). This is in contrast to approaches that explicitly learn a policy (e.g. 
BC, DAGGER [35], GAIL [16]), which require additional complexity when conditioning on \(s_{d}\), e.g. in the tabular setting, the number of policy parameters increases from \(\mathcal{O}(SA)\) to \(\mathcal{O}(S^{2}A)\). By learning rewards instead of policies, we can evaluate \(r_{\theta}\)_once offline_ for every edge in the graph, store the results in a database, precompute contraction hierarchies [15], and use a fast graph search algorithm to find the highest reward path3 for online \((s_{o},s_{d})\) requests. This is in contrast to a learned policy, which must be evaluated _online for every \((s_{o},s_{d})\) request_ and for every step in the sampled route - a computationally untenable solution in many online environments. Footnote 3: The highest reward path is equivalent to the _most likely_ path under MaxEnt [51]. ## 4 Methods In this section, we present a series of advancements which enable solving the world-scale IRL route finding problem, summarized in Figure 2. At the end of Section 6, we provide a useful summary of other directions which yield negative results. **Parallelism strategies.** We use a sparse Mixture of Experts (MoE) strategy [38], where experts are uniquely associated with geographic regions and each demonstration sample is deterministically assigned to a single expert (i.e. one-hot sparsity). This minimizes cross-expert samples and allows each expert to learn routing preferences specific to its region. We used standard data parallelism strategies within each expert to further partition minibatch samples across accelerator devices [40]. **MaxEnt++ initialization.** MaxEnt [51] is typically initialized to the value (i.e. log partition) function \(v^{(0)}=\log(\mathbb{I}_{s=s_{d}})\), where \(\mathbb{I}_{s=s_{d}}\) is the one-hot vector at the destination node (see Appendix C.1). This propagates information outwards from the destination, and requires that the number of dynamic programming steps is at least the graph diameter for arbitrary destinations. Instead, we initialize the values \(v^{(0)}\) to be the _highest reward to the destination from every node_. This initialization is strictly closer to the desired solution \(v\) by \[\underbrace{\mathbb{I}_{s=s_{d}}}_{\text{MaxEnt initialization}}\;\leq\;\underbrace{\max_{\tau\in\mathcal{T}_{s,s_{d}}}e^{r(\tau)}}_{\text{MaxEnt++ initialization}}\;\leq\;\underbrace{\sum_{\tau\in\mathcal{T}_{s,s_{d}}}e^{r(\tau)}=e^{v}}_{\text{Solution}} \tag{2}\] where \(\mathcal{T}_{s,s_{d}}\) is the (infinite) set of all paths which begin at \(s\) and end at \(s_{d}\) (proof in Appendix B.3). Note that equality only holds on contrived MDPs, and the middle term can be cheaply computed via Dijkstra or A*. We call this method MaxEnt++ (summarized in Algorithm 2).4 Footnote 4: a nod to the improved initialization of k-means++ [2]. The correspondence between MaxEnt and power iteration provides a more intuitive perspective. Specifically, the MaxEnt backward pass initialization \(e^{v^{(0)}}\) defines the initial conditions of power iteration, and the solution \(e^{v}\) is the dominant eigenvector of the graph. By more closely aligning the initialization \(v^{(0)}\) to the solution, the number of required iterates is decreased. We provide a separate convergence analysis in Appendix B. **Receding Horizon Inverse Planning (rhip).** We build on this idea to create an approximate IRL algorithm which generalizes existing methods.
Consider the policy defined by \[\pi(a|s)\propto\sum_{\tau\in\mathcal{T}_{H,s,a}}e^{r(\tau)}\] where \(\mathcal{T}_{H,s,a}\) is the set of all paths which begin with a state-action pair \((s,a)\) and deterministically follow the highest reward path after horizon \(H\). At each time step, this policy considers a distribution over all paths _which follow the highest reward path after \(H\) steps_, probabilistically selects a path from this distribution, and executes the first action along this path. This assumption contrasts with MaxEnt [51] (which considers a distribution over _all paths_) and BIRL [30] (which considers a distribution over _all paths which follow the highest reward path after a single step_). This policy can be written in sequential form as \[\pi(a|s)\propto\sum_{\tau\in\mathcal{T}_{s,a}}\pi_{d}(\tau_{H+1})\prod_{h=1}^{H}\pi_{H-h+1}(a_{h}|s_{h})\] where \(\pi_{1},\ldots,\pi_{H}\) are the MaxEnt++ policies after \(1,\ldots,H\) backup steps and \(\pi_{d}\) is the deterministic policy that follows the highest reward path, i.e. \(\pi_{d}(a|s)\in\{0,1\}\). Figure 2: Architecture overview. The final rewards are used to serve online routing requests. Computing the gradient of this policy is prohibitive, since it requires storing all \(H+1\) policies during the backward pass for use during the forward pass. Even for small \(H\), this can significantly increase memory requirements, and is analogous to naive Forward Training (the precursor to DAGGER) [36]. Instead, we approximate the policy using \[\pi(a|s)\propto\sum_{\tau\in\mathcal{T}_{s,a}}\underbrace{\pi_{d}(\tau_{H+1})}_{\text{Deterministic policy}}\prod_{h=1}^{H}\underbrace{\pi_{s}(a_{h}|s_{h})}_{\text{Stochastic policy }\pi_{H}}. \tag{3}\] where \(\pi_{s}=\pi_{H}\) denotes the stochastic policy after \(H\) steps of MaxEnt++. This approximated policy only requires storing \(\pi_{s}\) and \(\pi_{d}\). We call this method Receding Horizon Inverse Planning (rhip, pronounced _rip_). As described in Algorithm 1, rhip performs \(H\) backup steps of MaxEnt++, rolls out the resulting stochastic policy \(\pi_{s}\) for \(H\) steps, and switches to rolling out the deterministic policy \(\pi_{d}\) until reaching the destination. The receding horizon \(H\) controls rhip's compute budget (and accuracy) by trading off the number of stochastic and deterministic steps. The stochastic policy \(\pi_{s}\) is both expensive to estimate (backward pass) and roll out (forward pass) compared to the deterministic policy \(\pi_{d}\), which can be efficiently computed via Dijkstra. In Appendix C, we show that rhip reduces to MaxEnt++ for \(H\)=\(\infty\), to BIRL for \(H\)=1, and to MMP [32] for \(H\)=0 (with margin terms absorbed into \(R_{\theta}\)). **Graph compression.** We consider two graph compression approaches to improve scalability. The graph adjacency matrix is represented by a \(B\times S\times V\) tensor, where entry \((b,s,v)\) contains the reward of the \(v\)'th edge emanating from node \(s\) in batch sample \(b\). Thus, \(V\) is the maximum node degree valency, and nodes with fewer than \(V\) outgoing edges are padded. Although this solution does not apply to arbitrary MDPs, the degree of road graphs is tightly bounded (typically \(V<10\)). First, we 'split' nodes with degree close to \(V\) into multiple nodes with lower degree. Since the majority of nodes have a much smaller degree than \(V\), this slightly increases \(S\) but can significantly decrease the effective \(V\), thus reducing the overall tensor size \(BSV\) in a lossless fashion.
Second, we'merge' nodes with a single outgoing edge into its downstream node since there is only one feasible action. Feature vectors of the merged nodes are summed, which is lossless in the linear function approximator setting but introduces error in the nonlinear setting. Together, these compression techniques can be viewed as attempting to balance the graph's node degree distribution. ## 5 Empirical Study Road graphOur 200M state MDP is created from the Google Maps road network graph. Edge features contain predicted travel duration (estimated from historical traffic) and other relevant static road properties (e.g. distance, surface condition, speed limit, name changes, road type). Demonstration datasetDataset \(\mathcal{D}\) contains de-identified users' trips collected during active navigation mode[18]. We filter for data quality, e.g. by removing trips which contain loops, have poor match quality, or are unusually long. The dataset is a fixed-size subsample of these routes, spanning a period of two weeks and evenly split into training and evaluation sets based on date. Separate datasets are created for driving and two-wheelers, with the two-wheeler (e.g. mopeds, scooters) dataset being significantly smaller than the drive dataset due to a smaller region where this feature is available. In total, the number of iterated training and validation demonstration routes are 110M and 10M, respectively. Additional details are provided in Appendix D. \begin{table} \begin{tabular}{l l r r r r r} \hline \hline & & \multicolumn{3}{c}{Drive} & \multicolumn{3}{c}{Two wheelers} \\ \cline{3-8} Policy class & Reward \(r_{\theta}\) & NLL & Acc & IoU & NLL & Acc & IoU \\ \hline ETA & Linear & &.403 &.656 & &.450 &.705 \\ ETA+penalties & Linear & &.427 &.682 & &.448 &.715 \\ MMP/LEARCH [32, 33] & Linear & &.424 &.653 & &.469 &.705 \\ & SparseLinear & &.485 &.707 & &.523 &.746 \\ Deep LEARCH [27] & DNN & &.424 &.653 & &.478 &.714 \\ & DNN+SparseLinear & &.468 &.678 & &.520 &.730 \\ BIRL [9, 30] & Linear & 3.933 &.452 &.694 & 3.629 &.493 &.731 \\ & SparseLinear & 26.840 &.490 &.708 & 8.975 &.538 &.751 \\ Deep BIRL & DNN & 3.621 &.462 &.696 & 3.308 &.497 &.734 \\ & DNN+SparseLinear & 2.970 &.499 &.706 & 2.689 &.555 & **.759** \\ MaxEnt [51] & Linear & 4.441 &.452 &.694 & 3.957 &.491 &.729 \\ & SparseLinear & 26.749 &.492 &.709 & 8.876 &.540 &.752 \\ Deep MaxEnt [44] & DNN & 3.729 &.454 &.686 & 3.493 &.496 &.731 \\ & DNN+SparseLinear & 2.889 &.501 &.706 & 2.920 &.549 &.752 \\ RHIP & Linear & & 3.930 &.455 &.696 & 3.630 &.494 &.732 \\ & SparseLinear & 26.748 &.492 & **.710** & 8.865 &.541 &.752 \\ & DNN & 3.590 &.463 &.695 & 3.294 &.500 &.734 \\ & DNN+SparseLinear & **2.881** & **.503** &.709 & **2.661** & **.556** & **.759** \\ \hline Global ETA & Linear & &.389 &.654 & & & \\ Global ETA+penalties & Linear & &.428 &.691 & & & \\ Global RHIP & DNN+SparseLinear & 8.194 & **.496** & **.721** & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Route quality of manually designed and IRL baselines. Due to the high computational cost of training the global model (bottom 3 rows), we also evaluate in a smaller, more computationally tractable set of metros (top section). Metrics are NLL (negative log-likelihood), Acc (accuracy, i.e. perfect route match) and IoU (Intersection over Union of trajectory edges). Two-wheeler data is unavailable globally. 
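Referring back to the graph compression strategies of Section 4, the following is a minimal sketch of the 'merge' step on a toy adjacency representation. The dictionary-based data layout, the handling of degree-one chains, and the omission of incoming-edge bookkeeping are illustrative assumptions, not the padded tensor implementation used in practice.

```python
import numpy as np

def merge_chains(successors, features):
    """Sketch of the 'merge' compression: a node with a single outgoing edge is
    absorbed into its downstream node, with edge feature vectors summed.

    successors: dict node -> list of successor nodes
    features:   dict (u, v) -> np.ndarray of edge features
    Lossless for linear rewards, approximate for nonlinear reward models.
    Assumes no cycles consisting only of degree-one nodes.
    """
    def is_mergeable(v):
        return len(successors.get(v, [])) == 1

    new_succ, new_feat = {}, {}
    for u, outs in successors.items():
        if is_mergeable(u):
            continue  # u itself is absorbed into its predecessors' edges
        new_succ[u] = []
        for v in outs:
            feat = features[(u, v)].copy()
            while is_mergeable(v):          # walk down the degree-one chain
                (w,) = successors[v]
                feat = feat + features[(v, w)]  # sum features along the chain
                v = w
            new_succ[u].append(v)
            new_feat[(u, v)] = feat
    return new_succ, new_feat
```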
**Experimental region.** Due to the high computational cost of training the global model, we perform initial experiments and hyperparameter selection on a smaller set of 9 experimental metros (Bekasi, Cairo, Cologne, Kolkata, Manchester, Manila, Nottingham, Orlando, and Syracuse). The top-performing configuration was then used to train the global driving model. Two-wheeler data is not available globally, and thus not reported.

**Baselines.** We evaluate both manually designed and IRL baselines. For fixed baselines, we consider (1) ETA: The fastest route, i.e. edge costs are the predicted travel duration and (2) ETA+penalties: ETA plus manually-tuned penalties for intuitively undesirable qualities (e.g. u-turns, unpaved roads), delivered to us in a closed form without visibility into the full set of underlying features. For IRL policy baselines, we compare MaxEnt (Algorithm 2) [51], Deep MaxEnt [44], the LEARCH [33] variation of MMP [32] (Algorithm 4), Deep LEARCH [27], and the maximum a posteriori variation of BIRL [9] (Algorithm 3). We also consider a deep version of BIRL similar to [6].

**Reward model descriptions.** We evaluate three MoE function approximator classes: (1) a simple linear model, (2) a dense neural network (DNN) and (3) an \(\ell_{1}\)-regularized reward parameter for every edge in the graph (SparseLinear). The latter is of particular interest because it tends to highlight data-quality issues, for example in Figure 4. These models have 3.9k, 144k, and 360M global parameters, respectively. We constrain model weights to produce non-positive rewards and fine-tune all models from the ETA+penalties baseline. DNN+SparseLinear indicates additive DNN and SparseLinear components. Additional details are provided in Appendix D.2.

**Metrics.** For serving online routing requests, we are interested in the highest reward path from \(s_{o}\) to \(s_{d}\) under \(r_{\theta}\) (and not a probabilistic sample from \(\pi\) or a margin-augmented highest reward path). For accuracy, a route is considered correct if it perfectly matches the demonstration route. Intersection over Union (IoU) captures the amount of overlap with the demonstration route, and is computed based on unique edge ids. Negative log-likelihood (NLL) loss is reported where applicable.

### Results

Our rhip policy with the largest 360M parameter reward model is the highest performing policy across most metrics, as shown in Table 1. We train the final global policy for 1.4 GPU-years on a large cluster of V100 machines, which results in a 15.9% and 24.1% increase in route accuracy relative to the ETA+penalties driving and two-wheeler models, respectively. Gains compared to the next-best IRL policy are more modest (0.4% and 0.2%, respectively). To better understand the methods' trade-offs, we evaluate the impact of the horizon \(H\) on evaluation accuracy and training time in Figure 5. MaxEnt and MMP occupy two extremes of the training time spectrum, and sweeping over the receding horizon parameter \(H\) in rhip enables realizing both accurate and efficient policies. Although MMP is fast, it exhibits poor accuracy and unstable training, likely due to its lack of robustness to noise [29]. rhip with \(H\)=10 provides both the best quality routes and 70% faster training times than MaxEnt, i.e. MaxEnt is not on the Pareto front. We hypothesize rhip achieves the highest accuracy due to improved policy specification.

Figure 4: Example of the 360M parameter sparse model finding and correcting a data quality error in Nottingham. The preferred route is incorrectly marked as private property due to the presence of a gate (which is never closed), and incorrectly incurs a high cost. The detour route is long and narrow. The sparse model learns to correct the data error with a large positive reward on the gated segment. Additional examples are provided in Appendix D.
BIRL and MaxEnt assume humans probabilistically select actions according to the highest reward path or reward of all paths beginning with the respective state-action pair, respectively. However, in practice, humans may take a mixed approach - considering all paths within some horizon, and making approximations beyond that horizon. Table 2 shows the impact of graph compression in our experimental region. The split strategy is lossless (as expected), and split+merge provides a significant speed-up with almost no impact on route quality metrics. All empirical results take advantage of the split+merge graph compression. Data structure choice has a significant impact on training time. We try using unpadded, coordinate format (COO) sparse tensors to represent the graph adjacency matrix, but find it to be 50x slower on our initial test metro of Bekasi. We observe dynamic programming convergence issues and large loss spikes in MaxEnt which tend to occur when the rewards become too close to zero. In Appendix B we prove this phenomenon occurs precisely when the dominant eigenvalue of the graph drops below a critical threshold of 1 (briefly noted in Ziebart [50, pg. 117]) and show this set of allowable \(\theta\) is provably convex in the linear case. Fortunately, we are able to manage the issue with careful initialization, learning rates, and stopping conditions. Backtracking line search is another possible mitigation strategy, and it is unknown whether an efficient projection exists. Note that rhip (for \(H<\infty\)), BIRL and MMP provably do not suffer from this issue. All value functions (Algorithms 1, 2 and 3) are computed in log-space to avoid significant numerical stability issues. In Figure 6, we study the local geographic preferences learned by each expert in the mixture by performing an out-of-region generalization test. The drop in off-diagonal performance indicates the relative significance of local preferences. In Figure 7, we examine the relationship between region size and the performance of the model. Accuracy is nearly constant with respect to the number of states. However, training is significantly faster with fewer states, implying more equally sized regions would improve computational load balancing.

**Negative results.** We study several other ideas which, unlike the above contributions, do not meaningfully improve scalability. First, the MaxEnt backward pass is equivalent to applying power iteration to solve for the dominant eigenvector of the graph (Equation 4). Instead of using power iteration (Algorithm 2), we consider using the state-of-the-art Arnoldi iteration from ARPACK [26], but find it to be numerically unstable due to lacking a log-space implementation (see results in Appendix A.1). Second, the forward pass used in MaxEnt has a closed form solution via the matrix geometric series (Equation 6). Using UMFPACK [11] to solve for this solution directly is faster on smaller graphs up to around 10k nodes, but provides no benefit on larger graphs (see results in Appendix A.2).
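As a small illustration of the eigenvalue condition discussed above, the NumPy sketch below checks whether the matrix geometric series underlying the MaxEnt forward pass converges, i.e. whether the dominant eigenvalue (spectral radius) of the exponentiated-reward adjacency matrix is strictly below 1; the toy graph and reward values are assumptions for illustration only, not the paper's setup.

```python
import numpy as np

def maxent_series_converges(rewards_adj):
    """rewards_adj[i, j] = reward of edge i -> j (use -inf where no edge exists).

    The geometric series sum_k A^k with A = exp(rewards) converges iff the
    spectral radius of A (its dominant eigenvalue in magnitude) is below 1.
    """
    A = np.exp(rewards_adj)                         # exp(-inf) -> 0 for missing edges
    spectral_radius = np.max(np.abs(np.linalg.eigvals(A)))
    return spectral_radius < 1.0, spectral_radius

neg_inf = -np.inf
# Toy 3-node graph with a cycle between nodes 0, 1 and 2 (illustrative rewards).
R = np.array([
    [neg_inf, -1.0, -1.0],
    [-1.0, neg_inf, neg_inf],
    [-1.0, neg_inf, neg_inf],
])
print(maxent_series_converges(R))        # clearly negative rewards: converges
print(maxent_series_converges(R * 0.1))  # rewards near zero: dominant eigenvalue > 1, diverges
```

The second call shows how shrinking the rewards toward zero pushes the dominant eigenvalue above 1, which matches the instability described for near-zero rewards.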
## 6 Discussion

**Future research.** We investigate various directions for improving the scalability of IRL, and there remain several directions which we believe could lead to further gains. First, demonstration paths with the same (or nearly the same) destination can be merged together into a single sample. Instead of performing one IRL backward and forward pass per mini-batch sample, we can now perform one iteration _per destination_. This enables compressing large datasets and increasing the effective batch size, although it reduces shuffling quality, i.e. samples are correlated during training. Second, the graph could be pruned by removing nodes not along a 'reasonable' path (i.e. within some margin of the current best path) from \(s_{o}\) to \(s_{d}\) [50, pg. 119]. This is difficult to do in practice since the pruning is unique for every origin-destination pair. From an accuracy perspective, we find that the sparse mixture-of-experts learns routing preferences specific to geographic regions. This vein could be further pursued with personalization, potentially via a hierarchical model [8, 10]. Due to engineering constraints, we use static ETA predictions, but would like to incorporate the dynamic GraphNet ETA predictions from Derrow-Pinion et al. [12]. We find that the sparse reward model tends to highlight groups of edges which were impacted by the same underlying data quality issue. A group lasso penalty may be able to leverage this insight. Including human domain knowledge may robustify and help shape the reward function [45, 47]. In this paper, we evaluate performance on driving and two-wheelers, and would like to incorporate other modes of transportation - especially walking and cycling - but are limited by engineering constraints.

**Extensions to other MDPs.** For expository purposes, we restrict attention to discrete, deterministic, and undiscounted MDPs with a single self-absorbing destination state. Our contributions naturally extend to other settings. MaxEnt++ and rhip can be applied to any MDP where MaxEnt is appropriate and \(v^{(0)}\) can be (efficiently) computed, e.g. via Dijkstra's. Our parallelization extends to all MDPs with a reasonable partition strategy, and the graph compression extends to stochastic MDPs (and with further approximation, discounted MDPs).

**Limitations.** IRL is limited by the quality of the demonstration routes. Even with significant effort to remove noisy and sub-optimal routes from \(\mathcal{D}\), our policy will inadvertently learn some rewards which do not reflect users' true latent preferences. Our MoE strategy is based on _geographic regions_, which limits the sharing of information across large areas. This could be addressed with the addition of global model parameters. However, global parameters would create undesirable dependencies during training, and the abundance of demonstrations and lack of correlation between region size and accuracy (Figure 7) suggests benefits may be minimal.

## 7 Conclusion

Increasing performance via increased scale - both in terms of dataset size and model complexity - has proven to be a persistent trend in machine learning. Similar gains for inverse reinforcement learning problems have historically remained elusive, largely due to additional challenges scaling the MDP solver. The practical advancements in this paper enable scaling IRL training to problems with hundreds of millions of states, demonstration trajectories, and model parameters, respectively.
Further, we contribute a theoretically motivated improvement to MaxEnt, new convergence analyses based on eigenvector connections, and the novel generalized algorithm based on a receding planning horizon which outperforms baselines in practice. Our final policy is applied in the large scale setting described above, which to our knowledge is the largest instance of IRL in a real-world setting to date.

#### Acknowledgments

We are grateful to Renaud Hartert, Rui Song, Thomas Sharp, Remi Robert, Zoltan Szego, Beth Luan, Brit Larabee and Agnieszka Madurska for an early exploration of this project for cyclists. Arno Eigenwillig provided useful suggestions for the graph's padded data structure, Jacob Moorman provided insightful discussions on the theoretical aspects of eigenvalue solvers, and Jonathan Spencer provided helpful references for MaxEnt's theoretical analysis. We are thankful for Remi Munos', Michael Bloesch's and Arun Ahuja's feedback on final iterations of this work.
2303.12417
CLIP$^2$: Contrastive Language-Image-Point Pretraining from Real-World Point Cloud Data
Contrastive Language-Image Pre-training, benefiting from large-scale unlabeled text-image pairs, has demonstrated great performance in open-world vision understanding tasks. However, due to the limited Text-3D data pairs, adapting the success of 2D Vision-Language Models (VLM) to the 3D space remains an open problem. Existing works that leverage VLM for 3D understanding generally resort to constructing intermediate 2D representations for the 3D data, but at the cost of losing 3D geometry information. To take a step toward open-world 3D vision understanding, we propose Contrastive Language-Image-Point Cloud Pretraining (CLIP$^2$) to directly learn the transferable 3D point cloud representation in realistic scenarios with a novel proxy alignment mechanism. Specifically, we exploit naturally-existed correspondences in 2D and 3D scenarios, and build well-aligned and instance-based text-image-point proxies from those complex scenarios. On top of that, we propose a cross-modal contrastive objective to learn semantic and instance-level aligned point cloud representation. Experimental results on both indoor and outdoor scenarios show that our learned 3D representation has great transfer ability in downstream tasks, including zero-shot and few-shot 3D recognition, which boosts the state-of-the-art methods by large margins. Furthermore, we provide analyses of the capability of different representations in real scenarios and present the optional ensemble scheme.
Yihan Zeng, Chenhan Jiang, Jiageng Mao, Jianhua Han, Chaoqiang Ye, Qingqiu Huang, Dit-Yan Yeung, Zhen Yang, Xiaodan Liang, Hang Xu
2023-03-22T09:32:45Z
http://arxiv.org/abs/2303.12417v2
# CLIP\({}^{2}\): Contrastive Language-Image-Point Pretraining from Real-World Point Cloud Data

###### Abstract

Contrastive Language-Image Pre-training, benefiting from large-scale unlabeled text-image pairs, has demonstrated great performance in open-world vision understanding tasks. However, due to the limited Text-3D data pairs, adapting the success of 2D Vision-Language Models (VLM) to the 3D space remains an open problem. Existing works that leverage VLM for 3D understanding generally resort to constructing intermediate 2D representations for the 3D data, but at the cost of losing 3D geometry information. To take a step toward open-world 3D vision understanding, we propose **C**ontrastive **L**anguage-**I**mage-**P**oint Cloud **P**retraining (CLIP\({}^{2}\)) to directly learn the transferable 3D point cloud representation in realistic scenarios with a novel proxy alignment mechanism. Specifically, we exploit naturally-existed correspondences in 2D and 3D scenarios, and build well-aligned and instance-based text-image-point proxies from those complex scenarios. On top of that, we propose a cross-modal contrastive objective to learn semantic and instance-level aligned point cloud representation. Experimental results on both indoor and outdoor scenarios show that our learned 3D representation has great transfer ability in downstream tasks, including zero-shot and few-shot 3D recognition, which boosts the state-of-the-art methods by large margins. Furthermore, we provide analyses of the capability of different representations in real scenarios and present the optional ensemble scheme.

## 1 Introduction

Powerful 3D point cloud representation plays a crucial role in various real-world applications, e.g., 3D object recognition and detection [10, 47, 21, 34, 43]. Compared to 2D images, 3D point cloud provides specific information like accurate geometry that is robust to illumination changes. However, current methods [28, 43] that learn 3D representations generally rely on a predefined number of object categories and require plenty of labor-intensive annotations. Those learned 3D representations are insufficient for safety-critical scenarios like self-driving, which includes a long-tail class distribution far beyond the predefined taxonomy. Therefore, it is highly desirable to learn a transferable 3D representation equipped with zero-shot recognition ability in vocabulary-scalable real-world scenes. Figure 1 shows an open-world recognition example by our CLIP\({}^{2}\) in outdoor and indoor scenes, where the 3D objects can be classified through the correlation alignment between 3D representations and open-world vocabularies. The critical ingredient of open-world understanding is that the models learn sufficient knowledge to obtain general representations. To achieve this, recent Vision-Language Models (VLM) [14, 30, 41] leverage Internet-scale text-image pairs to conduct vision-language pretraining, which facilitates transferable 2D representation and demonstrates promising performance in 2D open-vocabulary tasks. However, 3D vision-language pretraining remains unexplored due to the limitation of existing 3D datasets in diversity and scale compared to the massive data sources in 2D counterparts [14, 16, 30, 41].
Though some recent works [12, 13, 46] try to avoid this problem by transferring the pretrained 2D VLM onto intermediate representations including projected image patches [19, 12] or depth maps [1, 46], those representations suffer from the loss of 3D geometric information and limited viewpoints under realistic scenarios. In particular, camera images are not always available due to sensor failures in 3D scenes. We believe the 3D representation based on original point cloud data retains the most information and is the optimal solution for 3D real-world understanding, which requires a rethink of learning the transferable 3D representation under realistic scenarios. To this end, we propose a **C**ontrastive **L**anguage-**I**mage-**P**oint cloud **P**retraining framework, short for CLIP\({}^{2}\), which directly aligns 3D space with broader raw text and advances the 3D representation learning into an open-world era. Our learning process can be decomposed into two stages: **Firstly,** we introduce a _Triplet Proxy Collection_ to alleviate the limitation of accessible pretraining data by constructing language-image-point triplets from real-world scenes. Since the large-scale realistic 3D datasets for outdoor driving [20, 2] and indoor scenarios [9, 35] are collected in the open world, they contain huge amounts of realistic objects that vary in semantics and diversity. Thus we consider them as potential pretraining data sources without extra human supervision. Specifically, we propose "Proxy" instances as the bridges between language descriptions, 2D images and 3D point clouds. Enabled by a well-aligned VLM, a scalable caption list and the geometry transformation between 2D and 3D, we automatically create more than 1 million triplets to facilitate pretraining. **Secondly,** we further propose a _Cross-Modal Pretraining_ scheme to jointly optimize the feature space alignments of three modalities, i.e., point cloud, language and image. It contains both the contrastive learning objective of semantic-level text-3D correlation and instance-level image-3D correlation, which contributes to better transferability of the learned 3D representation. We study the transferable capability of CLIP\({}^{2}\) by benchmarking the zero-shot recognition performance on four popular indoor and outdoor real-world datasets, and find a significant improvement over current methods, achieving Top1 accuracy of 61.3% on SunRGBD [35], 43.8% on ScanNet [9], 28.8% on nuScenes [2] and 56.0% on ONCE [20]. For a fair comparison with existing methods [13, 39, 46, 1], we conduct zero-shot and few-shot classification on the single-object dataset ScanObjectNN [37] and find consistent gains, including a 16.1% relative improvement on zero-shot classification over the previous state-of-the-art method [13]. To validate the vocabulary-increasing ability of CLIP\({}^{2}\), we report quantitative results and visualizations showing the improved discovery of long-tail categories. Moreover, we make ablations and analysis on different representations, and investigate ensembling alternatives to merge the complementary knowledge of all available representations in realistic applications. Our contributions can be summarized as follows:

* We propose a novel CLIP\({}^{2}\) framework that aligns 3D space with open-world language representation, facilitating zero-shot transfer in realistic scenarios.
* We present a Triplet Proxy Collection scheme in real-world scenes, which alleviates the shortage of text-3D data sources and facilitates the pretraining methods.
* CLIP\({}^{2}\) jointly optimizes the correlation alignment between point cloud, language and image by the proposed cross-modal pretraining mechanism, which enhances the transferability of the learned 3D representation.
* Our CLIP\({}^{2}\) achieves state-of-the-art zero-shot transfer performance on 5 datasets (indoor/outdoor scenes and single-object) and shows qualitative results on vocabulary-increasing discovery in the real world.

## 2 Related Work

**Vision-Language Model.** Large vision language models (VLM) [14, 16, 30, 41] have demonstrated successful performance in downstream zero-shot tasks with the learned transferable 2D representations. CLIP [30] and ALIGN [14] push the limit by collecting Internet-scale image-text pairs and then learning the correlation alignment between image and language feature space with contrastive pretraining objectives. Those models can be directly transferred to zero-shot 2D recognition and achieve impressive results. The recent DetClip [41] learns to align image patches to text phrases after pretraining under hybrid supervision from detection, grounding and image-text pair data, which extends the ability to localize open-vocabulary 2D proposals in images. In this paper, we attempt to transfer the open-vocabulary ability of pre-trained VLM to the 3D domain, making language applicable to zero-shot point cloud recognition.

**Zero-shot/Open-world Learning in 3D.** Recognizing 3D objects with a large vocabulary is necessary for safety-critical autonomous driving and robotic tasks, yet remains under-explored. Cheraghian et al. [5, 6, 7, 8] first attempt to associate PointNet [28] features with category semantic information via a projection function, and separately proposed an unsupervised skewness loss [5] to mitigate the hubness problem. The transductive case is discussed in [6], which extends [5] using a triplet loss. Notably, the above works conduct experiments on synthetic datasets and need to divide datasets into "seen" categories as training data and "unseen" categories as testing data. Thus they are not suitable for realistic scenarios due to the domain gap between synthetic and real-world data, as well as the limited vocabulary-increasing ability. Recently, inspired by the success of VLMs [14, 30] in 2D tasks, some works [13, 46] propose to transfer the zero-shot recognition ability of pretrained CLIP [30] into the 3D area. PointCLIP [46] directly projects the point cloud into multi-view depth maps as image-like input for pretrained CLIP to make classification predictions, while CLIP2Point [13] trains an image-depth embedding on ShapeNet [42] to better align the depth representation to the pretrained image space of CLIP. However, depth maps lose much of the geometric information of the original point cloud data structure, resulting in poor performance especially in realistic scenarios. By contrast, we aim to learn transferable 3D representation based on the original point cloud data structure in realistic scenarios.

**3D Representation Learning.** Much progress has been made in learning a comprehensive 3D representation in an unsupervised manner. Most works [1, 17, 18, 24, 44, 43, 45] follow the paradigm that conducts pretraining on unlabeled datasets and then finetunes on the limited downstream annotations. Despite the improved transferability of the learned 3D representations, these methods cannot be directly transferred to zero-shot tasks with open-world vocabularies.
In this work, we conduct language-image-point cloud pretraining, which learns transferable 3D representation aligned to open-vocabulary language space to facilitate the zero-shot transfer.

## 3 Method

In this section, we introduce CLIP\({}^{2}\) to learn a transferable 3D point cloud representation with arbitrary category recognition ability under realistic scenarios, illustrated in Figure 2. We will first present the _Triplet Proxy Collection_ in Section 3.1, which utilizes a pretrained VLM and geometric transformation to obtain language-image-point triplets from real-world scenes. Then we will elaborate on the _Cross-Modal Contrastive Pretraining_ mechanism in Section 3.2, which jointly optimizes the alignment correlations between the language, image and point cloud feature spaces.

### Triplet Proxy Collection

Inspired by the significant performance of 2D VLMs on open-vocabulary tasks, we aim to develop 3D vision-language pretraining to facilitate category-increasing capacity for real-world scenarios. However, the core challenge is the shortage of pretraining data. Compared to the 2D vision-language pretraining framework CLIP [30], which takes more than 400M image-language pairs from the Internet, the largest 3D single-object dataset ShapeNet [42] only contains 50K CAD models with 55 categories.

Figure 2: **Overview of CLIP\({}^{2}\) framework.** The main components contain two parts, the _Triplet Proxy Collection_ and the _Cross-Modal Pretraining_. The defined Triplet Proxy set \(\mathcal{D}_{\text{proxy}}\) consists of language captions \(\mathbf{X}^{T}\), corresponding image instances \(\mathbf{X}^{I}\) and raw 3D point cloud instances \(\mathbf{X}^{P}\), which come from the free data source under realistic scenarios without any labeling labor. On top of that, we pretrain a point cloud encoder \(E^{P}\) with the cross-modal contrastive learning objective. Equipped with CLIP\({}^{2}\), the learned 3D point cloud representation \(F^{P}\) is well aligned to the language representation, which facilitates downstream zero-shot 3D transfer tasks in the real world.

In addition to the insufficiency of data scale, pretraining on such synthetic data fails to transfer well in the real world due to the huge domain gap. Enlightened by the recent emergence of large-scale point cloud datasets collected in indoor [9, 35] and outdoor scenarios [2, 20], we observe that those naturally-collected datasets potentially contain vast amounts of open-world objects that vary in semantics and diversity. Considering that the data collection itself is cheap except for the laborious annotation, we take advantage of those available datasets without human annotations as a practical yet effective pretraining data source. Specifically, given the realistic scene data \(\mathcal{S}=\{(P_{s},I_{s})_{s=1}^{|\mathcal{S}|}\}\), where \(P_{s}\in\mathbb{R}^{N_{P}\times 3}\) and \(I_{s}\in\mathbb{R}^{N_{I}\times H\times W\times 3}\) are the corresponding 3D point clouds and images of scene \(s\), we propose a novel concept, _Proxy_, as the bridge between language, image and 3D point cloud. As illustrated in Figure 2, equipped with those proxy instances, we can automatically collect a massive number of language-image-point cloud pairs \(\mathcal{D}_{\text{proxy}}\) in the format of proxies under open-world scenes. We detail the process as follows.
**Language Proxy.** We set the language proxies \(\mathbf{X}^{T}\in\mathbb{R}^{V}\) as a raw text list from the 2D open-world dataset [11], where \(V=1206\) denotes the vocabulary size of the language proxies.

**Image Proxy.** Next, we obtain the image proxies \(\mathbf{X}^{I}\) by an open vocabulary detector DetCLIP [41], denoted as \(M\), which is trained with open-world data and performs open-set detection. Concretely, given language proxies \(\mathbf{X}^{T}\) and input scene image \(I_{s}\), we extract corresponding image proposals as image proxies \(X_{s}^{I}\) with \(M\) by the similarity between input language embeddings and proposal features as \[\{X_{s}^{I}\}_{s\in|\mathcal{S}|}=\text{M}(\{I_{s}\}_{s\in|\mathcal{S}|},\mathbf{X}^{T}). \tag{1}\]

**3D Proxy.** We exploit the naturally existing geometric relations between 2D and 3D scenes to obtain 3D proxies \(\mathbf{X}^{P}\), which consist of point cloud instances corresponding to image proposals in \(\mathbf{X}^{I}\). We simplify the geometry transformation as \(\text{G}(\cdot)\) and formulate the relations as: \[X_{i}^{P}=\text{G}(X_{i}^{I}). \tag{2}\] In detail, for _indoor scenes_ equipped with RGB-D sensors, we first remove the background pixels by the unsupervised segmentation algorithm [31] for each image proxy \(X_{s,i}^{I}\), \(i\in|X_{s}^{I}|\). Since depth information is known, we then transform the segmented pixels from \(uvd\) coordinates \(X_{s,i}^{I,uvd}\in\mathbb{R}^{n,3}\) to \(xyz\) coordinates \(X_{s,i}^{P,xyz}\in\mathbb{R}^{n,3}\) as a 3D point cloud proxy with the given camera parameters. For _outdoor scenes_ captured by LiDAR sensors, we first create a 3D frustum for each image proxy by extruding the 2D image proposal into 3D space following [23, 27]. Then we run the DBSCAN algorithm [32] within the frustum and select the point cloud cluster as the point proxy \(X_{s,i}^{P,xyz}\). Eventually, we construct the Triplet Proxy set \(\mathcal{D}_{\text{proxy}}=\{\mathbf{X}^{T},X_{s}^{I},X_{s}^{P}\}_{s=1}^{|\mathcal{S}|}\) by combining corresponding language proxies \(\mathbf{X}^{T}\), image proxies \(\mathbf{X}^{I}\) and 3D proxies \(\mathbf{X}^{P}\), where \(\mathbf{X}^{I}\)=\(\{X_{s}^{I}\}_{s=1}^{|\mathcal{S}|}\) and \(\mathbf{X}^{P}\)=\(\{X_{s}^{P}\}_{s=1}^{|\mathcal{S}|}\). 220K and 1.4M proxy triplets are formed for indoor and outdoor scenes, respectively. More details can be found in the appendix.

### Cross-Modal Contrastive Pretraining

With the triplet proxies \(\mathcal{D}_{\text{proxy}}\), a straightforward pretraining objective is forcing the alignment between the embedding spaces of point cloud \(X_{i}^{P}\) and language \(X_{i}^{T}\) from scratch. However, it might not promise good transferability of the learned representation, since the number of language-image-point pretraining data triplets remains two orders of magnitude smaller than the language-image pairs adopted by CLIP [30] and the vocabulary size is much more limited. Therefore, we choose to learn the correlation alignment based on the pretrained embedding space of CLIP. The comparison of current pretraining strategies [13, 46], which form a series of 3D variants of CLIP, is illustrated in Figure 4. Notably, both existing methods exploit projected depth maps as the intermediate representation of the point cloud, which are learned to align to the language space [46] and the image space [13], respectively. Intuitively, as illustrated in Figure 3, the depth representation loses much of the geometric information of the original point cloud, especially in outdoor scenarios.
Moreover, images are sometimes unavailable for 3D objects. Thus we conduct pretraining on the original 3D point cloud data as the optimal representation.

Figure 4: **Comparison of different pretraining strategies.** **(a)** CLIP aligns the image and language embedding spaces [30] as \(L_{TI}\) based on large-scale text-image pairs. **(b)** PointClip [46] aligns the projected depth map to the CLIP language space as \(L_{TD}\). **(c)** Clip2Point aligns the depth map to the CLIP image space as \(L_{ID}\). **(d)** Our CLIP\({}^{2}\) aligns the original 3D point cloud to both the CLIP language space and image space via the cross-modal objective \(L_{CM}\).

Figure 3: **Illustration of three representation models** of two 3D object examples under indoor and outdoor scenarios.

Toward learning a more transferable representation, we introduce a cross-modal contrastive learning objective to jointly optimize the correlation alignment across language, image and point cloud, including _Semantic-Level Language-3D Alignment_ and _Instance-Level Image-3D Alignment_. Specifically, the overall architecture of CLIP\({}^{2}\), shown in Figure 2, contains a language encoder \(E_{\theta}^{T}\), a point cloud encoder \(E_{\theta}^{P}\) and a visual encoder \(E_{\theta}^{I}\), which respectively embed the triplet proxies into a text feature \(f^{T}\in\mathcal{R}^{1\times C_{T}}\), a point cloud feature \(f^{P}\in\mathcal{R}^{1\times C_{P}}\) and an image feature \(f^{I}\in\mathcal{R}^{1\times C_{I}}\), where \(C\) is the embedding dimension.

**Semantic-Level Language-3D Alignment.** In order to inherit the open-world recognition ability from pretrained CLIP [30], we align the point cloud feature \(f^{P}\) with the text embedding \(f^{T}\) from well-trained CLIP with the Language-Point Proxy \(\{X_{i}^{T},X_{i}^{P}\}\) as input. We replace _classname_ in the prompts, like "point cloud of a { _classname_ }.", with the raw text in proxy \(X_{i}^{T}\) as language sentences. The core idea is to drive the feature centroids of 3D instances and the corresponding text prompt closer. We compute the contrastive loss between the language proxy and the point cloud proxy as: \[l(i,T,P)=-\log\frac{\exp(f_{i}^{T}\cdot f_{i}^{P}/\tau)}{\exp(f_{i}^{T}\cdot f_{i}^{P}/\tau)+\sum\limits_{j\in N,X_{j}^{T}\neq X_{i}^{T}}\exp(f_{i}^{T}\cdot f_{j}^{P}/\tau)}, \tag{3}\] where \(N\) is the mini-batch size and \(\tau\) is the temperature coefficient. Within a training mini-batch, the language-3D alignment objective \(L(T,P)\) can be described as: \[L(T,P)=\frac{1}{N}\sum\limits_{i\in N}l(i,T,P). \tag{4}\]

**Instance-Level Image-3D Alignment.** In addition to the alignment between semantic language and 3D proxy instances, we further introduce the contrastive alignment between instance-wise image proxies and 3D proxy instances. Note that the instance-aware visual concept has been well-studied in the embedding space of pretrained CLIP. We believe instance-sensitive learning can contribute to further correlation learning and benefits the transferability of the learned 3D representation. The contrastive alignment objective \(L(I,P)\) across point cloud and image is formulated as: \[l(i,I,P)=-\log\frac{\exp(f_{i}^{I}\cdot f_{i}^{P}/\tau)}{\exp(f_{i}^{I}\cdot f_{i}^{P}/\tau)+\sum\limits_{j\in N,j\neq i}\exp(f_{i}^{I}\cdot f_{j}^{P}/\tau)}, \tag{5}\] \[L(I,P)=\frac{1}{N}\sum\limits_{i\in N}l(i,I,P). \tag{6}\]
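As a rough illustration (not the released implementation) of how the two alignment terms of Eqs. (3)-(6) can be computed for a mini-batch of already-encoded, L2-normalized features, the following NumPy sketch may be helpful; the feature values, batch size, caption ids and temperature are assumptions for illustration only.

```python
import numpy as np

def language_3d_loss(f_T, f_P, text_ids, tau=0.07):
    """Semantic-level term (Eqs. 3-4): negatives are proxies with a different caption."""
    N = f_P.shape[0]
    sims = f_T @ f_P.T / tau                      # sims[i, j] = f_i^T . f_j^P / tau
    losses = []
    for i in range(N):
        neg_mask = text_ids != text_ids[i]        # exclude proxies that share the caption
        denom = np.exp(sims[i, i]) + np.exp(sims[i, neg_mask]).sum()
        losses.append(-np.log(np.exp(sims[i, i]) / denom))
    return np.mean(losses)

def image_3d_loss(f_I, f_P, tau=0.07):
    """Instance-level term (Eqs. 5-6): every other batch element is a negative."""
    N = f_P.shape[0]
    sims = f_I @ f_P.T / tau
    losses = [-np.log(np.exp(sims[i, i]) / np.exp(sims[i]).sum()) for i in range(N)]
    return np.mean(losses)

# Toy batch: 8 proxies with 4-dimensional normalized embeddings and 3 distinct captions.
rng = np.random.default_rng(0)
def normed(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)
f_T, f_I, f_P = (normed(rng.normal(size=(8, 4))) for _ in range(3))
text_ids = np.array([0, 0, 1, 1, 2, 2, 0, 1])

L_TP = language_3d_loss(f_T, f_P, text_ids)
L_IP = image_3d_loss(f_I, f_P)
L_CM = 0.5 * L_TP + 0.5 * L_IP   # equal-weight combination, as described below
print(L_TP, L_IP, L_CM)
```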
Finally, we obtain the resultant cross-modal contrastive learning objective \(L_{CM}(T,I,P)\) as the combination of \(L(T,P)\) and \(L(I,P)\), where both alignments of semantic-level text-3D correlation and instance-level image-3D correlation are injected: \[L_{CM}(T,I,P)=\lambda_{1}L(T,P)+\lambda_{2}L(I,P), \tag{7}\] where the hyper-parameters \(\lambda_{1}\) and \(\lambda_{2}\) are both set to 0.5.

## 4 Experiment

In this section, we evaluate CLIP\({}^{2}\) on realistic indoor and outdoor scenarios. We report the zero-shot transfer results on various datasets [35, 36, 2, 9] and provide further analysis of the pretraining strategy designs.

### Zero-shot Transfer

**Setting.** After pretraining, natural language is used to reference the learned 3D representation to enable the following zero-shot transfer tasks. **(i) Zero-Shot Recognition:** we evaluate zero-shot recognition performance for realistic objects, where \(K\) category names are converted into the text prompt "point cloud of {CLASS}" to encode the text features \(F_{K}\in\mathbb{R}^{K\times C}\). Then the classification logits are calculated with the 3D feature \(f^{P}\) and text features as: \[\text{logits}_{i}=\text{softmax}(f_{i}^{P}(F_{K})^{T}). \tag{8}\] We present the results under both indoor and outdoor scenarios in Table 1, Table 2 and Table 5, as well as the object-level benchmark in Table 6. **(ii) Open-vocabulary recognition:** we enlarge the category vocabularies of ScanNet to 249 and 384 to study the open-vocabulary recognition ability in Table 3. **(iii) Open-vocabulary localization:** we study the open-vocabulary localization ability by localizing open-world 3D objects with our proxy generation process and then classifying them with our learned 3D representation, of which the visualization is illustrated in Figure 5 and evaluation results are reported in Table 4. Notably, we investigate representation ensembling alternatives to enable knowledge merging of all available representations for realistic applications, illustrated in Table 8.

#### 4.1.1 Indoor Scenarios

**Datasets and details.** We adopt the widely used indoor 3D dataset SUN RGB-D [35], a single-view RGB-D dataset consisting of \(\sim\)10K scenes, as the realistic indoor scenario that provides the pretraining data source. To validate the transferability of the learned 3D representation, we also evaluate another popular indoor 3D dataset, ScanNet [9], which contains \(\sim\)1.5K scenes of 3D reconstructed meshes. We remove objects in ScanNet with fewer than 5 points, leaving 384 noisy categories. For open-vocabulary recognition, we evaluate performance on the ScanNet 384-class set and a 249-class merged set. In addition to the scene-wise indoor datasets, we conduct evaluations on ScanObjectNN [37], which collects \(\sim\)3K individual realistic objects with 15 categories and is applied in the previous zero-shot evaluation [13, 46]. During the proxy collection process, we empirically set \(\epsilon=0.3\) in [41] as a tradeoff between filtering FPs and preserving TPs to generate image proxies. Considering that the occurrence frequencies of different indoor categories vary a lot, we adopt the class balance strategy [4] to mitigate the class imbalance. During the pretraining process, we adopt [29] as the point cloud encoder and set the overall training epoch number to 100.
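To make the zero-shot recognition protocol of Eq. (8) concrete, here is a minimal NumPy sketch; the class names, feature dimensions and toy features are illustrative assumptions rather than the actual vocabulary or encoders used in our experiments.

```python
import numpy as np

def zero_shot_classify(f_P, F_K, class_names):
    """Eq. (8): softmax over similarities between 3D features and K text features."""
    logits = f_P @ F_K.T                            # (num_objects, K)
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return [class_names[k] for k in probs.argmax(axis=1)], probs

# Toy example: 2 objects, 3 candidate classes, 4-dim normalized embeddings.
rng = np.random.default_rng(1)
class_names = ["chair", "table", "car"]             # hypothetical vocabulary
F_K = rng.normal(size=(3, 4))
F_K /= np.linalg.norm(F_K, axis=1, keepdims=True)   # text features of "point cloud of {CLASS}"
f_P = F_K[[0, 2]] + 0.1 * rng.normal(size=(2, 4))   # 3D features near "chair" and "car"
f_P /= np.linalg.norm(f_P, axis=1, keepdims=True)

preds, probs = zero_shot_classify(f_P, F_K, class_names)
print(preds)  # typically prints ['chair', 'car']
```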
**Quantitative results.** For the zero-shot recognition task, we take two recent works as our baselines, _i.e._ PointClip [46] and Clip2Point [13], which study the zero-shot classification task on 3D object-level benchmarks [37, 38] by leveraging pretrained CLIP with projected depth maps. Focusing on real-world scenarios, we conduct the comparison not only on the realistic object level [37], as illustrated in Table 6, but also on the scene-level datasets shown in Table 1 and Table 2, where the evaluation follows the common classes split in [47, 10] and reports the instance Top1 accuracy of each class. As shown in the tables, our CLIP\({}^{2}\) outperforms the baselines on all benchmarks by large margins. Besides, we apply our triplet proxy generation mechanism (TP.) to the baseline methods, and achieve considerable improvements on SUN RGB-D and ScanNet of 26.5% and 19.8% for PointClip, and 38.3% and 10.3% for Clip2Point. On the one hand, these contrasts demonstrate the effectiveness of our triplet proxies for open-world understanding. On the other hand, our learned 3D representation is superior in 3D object recognition by retaining more 3D-specific information than the depth representation. Besides, we present the optional ensembling scheme (En.) when camera images are available, which can take advantage of multi-modal knowledge and further boost the performance by 8.3%. To further validate the open-vocabulary recognition ability, we conduct an evaluation on a larger category set of ScanNet in Table 3 and report the instance Top5 accuracy, which illustrates the superiority of our CLIP\({}^{2}\) when the vocabulary increases. Beyond that, CLIP\({}^{2}\) is also equipped with zero-shot 3D localization ability by proxy generation. On the indoor scenario SUN RGB-D, we compare with a SOTA indoor 3D detector 3DETR [22] and a recent work OV3D [19] that studies open-vocabulary detection, where evaluation is conducted on the same "unseen" split as in [19]. Since CLIP\({}^{2}\) does not fit the tight bounding boxes of point cloud instances, we estimate the maximum bounding box of proxies and GT instances to conduct the evaluation following the same metrics mAP\({}_{25}\) and AR\({}_{25}\) as in [19], as shown in Table 4. Notably, compared to baseline works that train on "seen" 3D annotations and test on "unseen" categories, we have no access to any 3D annotations yet achieve comparable localization ability, which yields a 5.3% AR\({}_{25}\) improvement over OV3D [19]. We further evaluate segmentation results in Table 4 by considering the geometry information.

#### 4.1.2 Outdoor Scenarios

**Datasets and details.** We exploit a prevalent large-scale 3D dataset, nuScenes [2], as the outdoor data source and additionally validate the performance on the ONCE dataset [20]. The nuScenes dataset consists of \(\sim\)28K frames with 10 categories, while ONCE contains 6 annotated sequences with 5 categories. Similarly, we set \(\epsilon=0.3\) for image proxy collection and adopt the class balance strategy [4].

**Quantitative results.** Since the outdoor point cloud is collected by LiDAR sensors, it has a wider perception range than RGB-D but leads to a sparse distribution. Thus the projected depth representation of the baselines results in more severe information loss, as illustrated in the second row of Figure 3. As shown in Table 5, our CLIP\({}^{2}\) considerably outperforms the baseline recognition results by more than 20%, and our triplet proxies respectively boost the two baselines by 9.5% and 4.8%. Additionally, we evaluate the localization ability on the outdoor scenario nuScenes in Table 4.
Due to the lack of works that tackle outdoor open-vocabulary localization problems, we adopt the classic detection accuracy metrics Precision (P.) and Recall (R.) as evaluation metrics. Specifically, we calculate the center distance between groundtruth bounding boxes and our 3D proxies that are predicted to belong to the same category as the groundtruth, and set the distance threshold as \(\lambda=2\)m. For those matched pairs that are closer than \(\lambda\), we count the proxies as TPs. Otherwise, for those unmatched proxies and groundtruth, we count them as FPs and FNs respectively, thus P.=\(\frac{\text{TPs}}{\text{TPs+FPs}}\), R.=\(\frac{\text{TPs}}{\text{TPs+FNs}}\). As shown in Table 4, our CLIP\({}^{2}\) pipeline can provide high recall for outdoor objects. Since CLIP\({}^{2}\) is highly sensitive to open-world objects and can perceive categories beyond the groundtruth list, it tends to produce too many predictions, and thus the precision is comparatively low. The perception ability of open-world objects can be viewed in Figure 5(b).

**Qualitative results.** We show two outdoor scenes of nuScenes [2] in Figure 5(b-i) and Figure 5(b-ii). In addition to perceiving the common categories, our CLIP\({}^{2}\) surprisingly localizes and recognizes uncommon 3D objects in 3D scenes such as the tires of vehicles, the plastic bag in the hand of a pedestrian, as well as the plastic bag on the road. We believe it contributes to autonomous driving safety by providing the localization and recognition of universal obstacles to facilitate follow-up driving decisions.

### Few-shot Classification

**Setting.** Lightweight few-shot learning is practical for applications: the pretrained model is fine-tuned with a limited amount of annotated data, which can also validate the generalization capability of our learned representation. To make a fair comparison, we follow the existing methods [1, 46] and conduct experiments under the "K-way N-shot" setting on the challenging realistic object-level dataset ScanObjectNN [37], where we randomly sample N point cloud objects from each of the randomly selected K classes.

**Quantitative results.** As illustrated in Table 6, we compare with the representative 3D network PointNet++ [29], the recent zero-shot approach PointClip [46] as well as a state-of-the-art representation learning method CrossPoint [1], which conducts contrastive pretraining between point clouds and rendered images on the CAD dataset ShapeNet [3].

\begin{table} \begin{tabular}{c|c|c c c|c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{ZS} & \multicolumn{3}{c|}{15-way} & \multicolumn{2}{c}{10-way} \\ & & 4-S & 8-S & 16-S & 10-S & 20-S \\ \hline PointNet++ [28] & - & 41.0 & 47.6 & 55.0 & - & - \\ PointCLIP [46] & 15.4 & 46.0 & 50.0 & 55.6 & - & - \\ Clip2Point [13] & 23.3 & - & - & - & - & - \\ CrossPoint [1] & - & - & - & - & 58.7\(\pm\)1.8 & 64.6\(\pm\)1.2 \\ \hline CLIP\({}^{2}\) & 39.1 & 51.3 & 59.6 & 62.5 & 60.6\(\pm\)2.5 & 66.3\(\pm\)3.2 \\ \hline \hline \end{tabular} \end{table} Table 6: **Zero-shot and Few-shot classification results on ScanObjectNN. ZS: zero-shot. K-way N-shot: few-shot settings.**

\begin{table} \begin{tabular}{c c|c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Avg.} & \multicolumn{10}{c}{nuScenes} & \multicolumn{5}{c}{ONCE} \\ & & Car & Truck & Bus & Ped. & Bicycle & Trailer & C.V. & Motor. & Barrier & T.C. & Car & Cyc. & Ped. & Truck & Bus \\ \hline PointClip [46] & 11.7 & 0.0 & 0.0 & 0.0 & 29.1 & 41.8 & 3.4 & 0.0 & 0.1 & 42.5 & 0.0 & 0.0 & 13.8 & 79.2 & 0.2 & 7.61 \\ Clip2Point [13] & 12.4 & 0.4 & 0.3 & 0.1 & 13.5 & 31.0 & 1.8 & 9.3 & 1.5 & 66.2 & 0.3 & 17.3 & 11.8 & 95.4 & 35.7 & 4.0 \\ PointClip [46] w/ TP. & 28.3 & 18.8 & 0 & 5.5 & 74.0 & 17.9 & 57.0 & 1.9 & 4.5 & 2.1 & 29.7 & 51.8 & 9.2 & 99.8 & 5.0 & 46.8 \\ Clip2Point [13] w/ TP. & 33.0 & 26.7 & 16.8 & 51.2 & 45.2 & 15.8 & 13.9 & 20.0 & 5.7 & 10.5 & 34.2 & 39.4 & 27.8 & 95.5 & 40.6 & 51.7 \\ \hline CLIP\({}^{2}\) & 37.8 & 41.9 & 41.3 & 22.5 & 40.3 & 21.1 & 20.6 & 24.8 & 22.4 & 17.3 & 35.3 & 52.7 & 27.3 & 77.7 & 78.5 & 44.0 \\ \hline \hline \end{tabular} \end{table} Table 5: **Zero-shot recognition results in outdoor scenario**: nuScenes (Left) and ONCE (Right). **TP.:** our Triplet Proxy set. **Avg.:** the mean average Top1 accuracy across all categories of two benchmarks.
As we can see, with a small number of samples, our CLIP\({}^{2}\) can boost the classification results by a large margin, exceeding PointClip by 5.3%, 9.6% and 6.9% with 4, 8 and 16 shots. Besides, we outperform CrossPoint with a considerable gain, illustrating that our pretraining strategy on the collected proxies can learn sufficient knowledge from the realistic open world to generate a transferable 3D representation, which is superior to pretraining on a small-scale synthetic dataset.

### Ablations and Analysis

**Ablations on representation learning.** To observe the transferability of different representations and the effect of different learning objectives, we conduct ablations and report the mean average Top1 accuracy across all classes of zero-shot recognition in indoor scenarios [35] (Avg\_IN), outdoor scenarios [2] (Avg\_OUT) and the object-level benchmark [37] (Avg\_OBJ), which is shown in Table 7. Firstly, for fair comparisons, we follow [13, 46] to project the input point cloud into depth maps in \(N_{V}\) different views as an alternative representation. Secondly, we adopt various objectives to learn different correlation alignments across the language, image and point cloud feature spaces or the depth space. Specifically, comparing (a) and (b), aligning the depth space to the image space yields better transfer performance due to the similar data structure of images and depth maps. In (c) and (d), the point cloud representation is better when aligned to the image space in indoor scenes, while it is better to align it with the language space in outdoor scenes due to the data discrepancy between image-like RGB-D points and sparse LiDAR points. Generally, the 3D point cloud representation outperforms the depth representation on all benchmarks owing to preserving the complete 3D structure and sufficient 3D-specific information. Comparing (e) and (d), the joint alignment between the three feature spaces contributes to the best 3D point cloud representation transferability on all benchmarks.

**Analysis of representation ensembling.** Intuitively, different representations contain different perspectives of knowledge, which can potentially be merged to achieve optimal results during inference. To validate the ensembling application, we adopt three optional representation modalities, _i.e._ point clouds, projected depth maps and corresponding image patches, where the depth representation is trained on our proxies and the image representation is generated from the pretrained image branch of CLIP. We ensemble their predicted logits by simple summation as the final output, and illustrate the separate recognition results and ensembling performance in Table 8.
Benefiting from the sufficient knowledge learned from massive CLIP training data, the image representation presents the best performance when used separately. By merging the complementary knowledge, our 3D representation leads to gains of 4.6% indoors [35] and 2.8% outdoors. Though further improving the indoor recognition performance by 0.9% when merged, the depth representation yields a 1.6% drop for outdoor objects, illustrating its information loss, especially in outdoor scenarios. Since the image representation is sometimes missing, such as in [37], our 3D representation is more robust for 3D applications.

## 5 Limitation

As a pilot work for the language-3D pretraining problem, though CLIP\({}^{2}\) enables zero-shot localization and recognition with the proposed triplet proxy generation and the learned transferable 3D representation, it cannot provide an accurate tight bounding box for open-world 3D objects as a common detector does. We believe CLIP\({}^{2}\) can facilitate the development of open-world 3D detectors by introducing the recognition ability to general 3D detectors or by providing the presented 3D proxies to enable further training of 3D detectors.

Figure 5: **Visualizations of the zero-shot localization and recognition results** by CLIP\({}^{2}\) under open-world **(a)** indoor realistic scene [35] and **(b)** outdoor scenes [2]. Notably, the whole pipeline of CLIP\({}^{2}\) not only has no access to human annotations, but also enables the open-world vocabularies beyond groundtruth annotations, such as 'Picture' in **(a)** and 'Plastic bag', 'Time' in **(b)**. Best viewed in colors.

\begin{table} \begin{tabular}{c c c c|c c c} \hline \hline & PC. & Image & Depth & Avg\_IN & Avg\_OUT & Avg\_OBJ \\ \hline (i) & ✓ & & & 61.3 & 28.8 & 39.4 \\ (ii) & & ✓ & & 64.2 & 41.1 & - \\ (iii) & & & ✓ & 56.9 & 23.9 & 39.0 \\ \hline (f) & ✓ & ✓ & & 68.7 & 43.9 & - \\ (g) & ✓ & & ✓ & 64.8 & 30.4 & 43.2 \\ (h) & ✓ & ✓ & ✓ & 69.6 & 42.3 & - \\ \hline \hline \end{tabular} \end{table} Table 8: Analysis on the representation ensembling schemes.

## 6 Conclusion

In this paper, we present a novel contrastive language-point cloud pretraining framework, CLIP\({}^{2}\), which consists of a triplet proxy collection scheme and a cross-modal contrastive learning mechanism. Based on the observation that realistic scenarios contain a massive amount of open-world objects, we innovatively propose to collect triplet proxies from realistic scenes as pretraining data. We then conduct cross-modal contrastive alignment across the language, image and point cloud feature spaces to learn a transferable 3D representation. The zero-shot transfer results on various indoor and outdoor benchmarks validate the ability of CLIP\({}^{2}\) for 3D open-world understanding.

**Acknowledgements.** We gratefully acknowledge the support of MindSpore ([https://www.mindspore.cn/](https://www.mindspore.cn/)), CANN (Compute Architecture for Neural Networks) and Ascend AI Processor in this work.
2301.13170
Hamiltonian-Oriented Homotopy QAOA
The classical homotopy optimization approach has the potential to deal with highly nonlinear landscapes, such as the energy landscape of QAOA problems. Following this motivation, we introduce Hamiltonian-Oriented Homotopy QAOA (HOHo-QAOA), which is a heuristic method for combinatorial optimization using QAOA, based on classical homotopy optimization. The method consists of a homotopy map that produces an optimization problem for each value of the interpolating parameter. Therefore, HOHo-QAOA decomposes the optimization of QAOA into several loops, each using a mixture of the mixer and the objective Hamiltonian for cost function evaluation. Furthermore, we conclude that HOHo-QAOA improves the search for low energy states in the nonlinear energy landscape and outperforms other variants of QAOA.
Akash Kundu, Ludmila Botelho, Adam Glos
2023-01-30T18:41:00Z
http://arxiv.org/abs/2301.13170v1
# Hamiltonian-Oriented Homotopy QAOA

###### Abstract

The classical homotopy optimization approach has the potential to deal with highly nonlinear landscapes, such as the energy landscape of QAOA problems. Following this motivation, we introduce Hamiltonian-Oriented Homotopy QAOA (HOHo-QAOA), which is a heuristic method for combinatorial optimization using QAOA, based on classical homotopy optimization. The method consists of a homotopy map that produces an optimization problem for each value of the interpolating parameter. Therefore, HOHo-QAOA decomposes the optimization of QAOA into several loops, each using a mixture of the mixer and the objective Hamiltonian for cost function evaluation. Furthermore, we conclude that HOHo-QAOA improves the search for low energy states in the nonlinear energy landscape and outperforms other variants of QAOA.

## 1 Introduction

Speedup of practical applications is yet to be realized for quantum devices as they are small and noise-prone. The limitations of available hardware initiated the Noisy Intermediate Scale Quantum (NISQ) era [1]. NISQ algorithms [2] can operate on a limited amount of resources, in particular by distributing tasks between quantum and classical devices. Many of those algorithms are represented by a broad class of variational quantum algorithms (VQAs) [3]. Their generic structure consists of two subroutines: a parametric quantum circuit (PQC) implemented on quantum hardware generates a quantum state, and classical hardware calculates the cost function and optimizes the parameters of the PQC. One of the advantages of VQAs is that they can be easily adapted to various computational problems as long as a Hamiltonian can be designed whose ground state corresponds to the solution of the problem. To mention a few, VQAs have potential applications in finding the ground state of a molecule [4], solving linear [5] and nonlinear [6] systems of equations, quantum state diagonalization [7], and quantum device certification [8]. A detailed review can be found in [3]. The Quantum approximate optimization algorithm (QAOA) [9] is a variational quantum algorithm dedicated to combinatorial optimization problems. The PQC in QAOA is a trotterized adiabatic evolution, i.e. the circuit consists of alternately applied so-called mixer and problem Hamiltonians. It has potential applications in solving problems like graph coloring [10, 11, 12], MaxE3Lin2 [13], Max-\(K\)-Vertex Cover [14], or the traveling salesman problem [15, 12]. To improve the performance of QAOA, multiple optimization strategies have been introduced [16, 17, 18, 19, 20, 21, 22, 23, 24]. This is because, given the limited resources of quantum computers, it is essential to effectively explore the cost function landscape of the PQC. On the other hand, the landscape of the energy function in QAOA is highly nonlinear, and to deal with such complicated landscapes, sophisticated methods are necessary. This motivates us to formulate a heuristic optimization strategy that uses classical homotopy optimization for QAOA. Homotopy optimization has potential applications in dealing with highly nonlinear functions [25]. The homotopy method comprises a homotopy map, which for each value of the interpolating parameter \(\alpha\in[0,1]\) outputs an optimization problem. In particular, for \(\alpha=0\), the problem is easy to solve, and for \(\alpha=1\) the homotopy map returns the problem of interest.
During the interpolation process, which changes the value of \(\alpha\) from \(0\) to \(1\), the solution continuously changes and is expected to be optimal, or close to optimal, for the intermediate problems. If the intermediate optimizations succeed, in the end we obtain the optimum of the target problem. One can see quantum annealing as a particular type of homotopy optimization. A homotopy optimization for VQE was already proposed in [26] and improved in [27, 28]. However, its applicability for QAOA was only briefly mentioned in [29]. The introduced Hamiltonian-Oriented Homotopy QAOA (HOHo-QAOA) decomposes the optimization into several loops. The homotopy map smoothly interpolates between the mixer Hamiltonian and the problem Hamiltonian during the optimization, and each loop uses a mixture of these two Hamiltonians for cost function evaluation. In each loop, the quantum state is optimized with respect to such intermediate cost functions. This strategy simplifies the search for good QAOA parameters while keeping the PQC unchanged. To show this, first we empirically analyze the impact of the choice of the homotopy parameters: the initial value \(\alpha_{\rm init}\) and the step parameter \(\alpha_{\rm step}\), which defines the difference between two consecutive \(\alpha\) values. Although theoretically a choice of \(\alpha_{\rm init}\) and \(\alpha_{\rm step}\) very close to zero provides a better approximation to the optimal solution, empirically we show that one can still get a good approximation to the optimal solution even if \(\alpha_{\rm init}\) and \(\alpha_{\rm step}\) are bounded away from zero. This hugely reduces the computational cost of HOHo-QAOA. Finally, we compare HOHo-QAOA with other commonly used QAOA optimization strategies [9, 22]. The rest of the paper is organized in the following way. In Section 2, we provide a brief overview of adiabatic quantum computing, variants of QAOA and the homotopy method. Throughout Section 3, we numerically investigate the efficient settings of the homotopy parameters. Furthermore, we compare HOHo-QAOA with the other variants of QAOA considered in the literature. Finally, we conclude the article in Section 4.

## 2 Preliminaries

### QAOA

The core concept of Adiabatic Quantum Computing (AQC) lies in the adiabatic theorem. Let us consider \(H(s)=H(t/T)\), a time-dependent smoothly varying Hamiltonian for all \(t\in[0,T]\), i.e. \(s\in[0,1]\), where \(T\) is the total time of evolution. Let us denote by \(|E_{i}(s)\rangle\) an eigenvector of \(H(s)\) with corresponding eigenvalue \(E_{i}(s)\), where we assume \(E_{0}(s)\leq E_{1}(s)\leq\ldots\). The adiabatic theorem roughly states that a system that is initially prepared in \(|E_{0}(0)\rangle\) of \(H(t=0)\), after time-evolution governed by the Schrodinger equation with the given Hamiltonian \(H(s)\), will approximately keep the state of the system in \(|E_{0}(1)\rangle\) at \(t=T\), provided that the change in \(H(s)\) is "sufficiently slow". Traditionally the sufficiently slow change is given by the condition [30, 31] \[T\gg\Delta^{-2}\max_{s\in[0,1]}\left\|\left[\frac{\partial H(s)}{\partial t} \right]^{2}\right\|, \tag{1}\] where \(\Delta=\min_{s}\left(E_{1}(s)-E_{0}(s)\right)\) is the spectral gap. A class of independent conditions on \(T\) has been discussed in [32, 33, 34, 35]. AQC has the potential to take an initial Hamiltonian, say \(H_{\rm mix}\), whose ground state is easy to prepare, to the ground state of a computationally hard problem Hamiltonian \(H_{\rm obj}\).
A particular time-dependent Hamiltonian interpolates between \(H_{\rm mix}\) and \(H_{\rm obj}\) as \[H(s)=\left(1-s\right)H_{\rm mix}+sH_{\rm obj}. \tag{2}\] AQC in the form of quantum annealing has been used for a variety of applications including real-world problems [36, 37, 38, 39, 40, 41], and in quantum chemistry [42]. For a rigorous review of AQC see [31, 43]. The Quantum Approximate Optimization Algorithm (QAOA) uses the first-order Suzuki-Trotter transformation of \(\exp(-{\rm i}H(s))\) as the variational ansatz to solve combinatorial optimization problems. The Trotterization gives rise to the operators \(\exp(-{\rm i}\gamma_{j}H_{\rm obj})\) and \(\exp(-{\rm i}\beta_{j}H_{\rm mix})\), where \(\gamma_{j}\) is the parameter corresponding to the objective Hamiltonian and \(\beta_{j}\) corresponds to the mixer Hamiltonian for the \(j\)-th step. The mixer Hamiltonian is traditionally expressed as \(H_{\rm mix}=-\sum_{i}X_{i}\), where \(X_{i}\) is the Pauli \(X\) operator acting on the \(i\)-th qubit, and \(H_{\rm obj}\) is the objective Ising Hamiltonian whose ground state encodes the optimal solution of the problem. This results in the state \[|\vec{\gamma},\ \vec{\beta}\rangle=\prod_{j=1}^{L}\exp\left(-{\rm i}\beta_{j}H_{\rm mix}\right)\exp\left(-{\rm i}\gamma_{j}H_{\rm obj}\right)|+\rangle^{\otimes N}, \tag{3}\] where \(N\) is the number of qubits, \(L\) is the number of layers, which defines the number of repeated applications of the mixer and objective Hamiltonians, and \(|+\rangle^{\otimes N}\) is the ground state of \(-\sum_{i}X_{i}\). The algorithm utilizes quantum hardware to evaluate the energy expectation value \(E(\vec{\gamma},\vec{\beta})=\langle\vec{\gamma},\vec{\beta}|H_{\rm obj}|\vec{\gamma},\vec{\beta}\rangle\). Then the parameters \(\vec{\gamma}\) and \(\vec{\beta}\) are optimized using classical optimization methods so that the energy is minimized. With this energy evaluation and classical optimization, QAOA is well defined for any combinatorial optimization problem as long as \(H_{\rm obj}\) can be implemented efficiently. While the proposed \(X\)-mixer combined with a 2-local Ising model is frequently used in the literature, different choices were also considered [12, 15, 44, 45, 46]. Heuristic learning of QAOA has been explored in trajectories QAOA (T-QAOA) [22]. T-QAOA is a heuristic strategy that utilizes interpolation-based prediction of good QAOA parameters. With random initialization, the cost of optimizing QAOA is exponential in the number of layers [22]. On the other hand, with an increased number of layers, \(H_{\rm mix}\) may gradually turn off while \(H_{\rm obj}\) turns on, which is reminiscent of AQC. However, QAOA can learn to follow a diabatic path to achieve a higher success probability [47, 48, 49], which is beyond the adiabatic process natural for AQC. This fact was used in T-QAOA by reusing the optimal angles found for \(L\) layers in the \((L+1)\)-layer PQC. The T-QAOA variant considered in this paper runs as follows. It starts with \(L_{0}\) layers and finds the locally optimal parameters \((\vec{\gamma}^{L_{0}},\vec{\beta}^{L_{0}})\). Then it uses the optimal parameters of layer \(L_{0}\) to construct the initial parameters for layer \(L_{0}+1\) by sampling the last entries of \(\vec{\gamma}^{L_{0}+1}\) from a uniform random distribution and setting \(\vec{\beta}^{L_{0}+1}=0\). With such initialization, the \((L_{0}+1)\)-layer PQC is optimized, and the procedure is repeated until a final number of layers \(L\) is reached. Note that different interpolation methods can be used [22].
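To make the objects above concrete, the following Python/NumPy sketch (our own toy illustration, not code from the paper) builds the state of Eq. (3) for a 3-qubit weighted Max-Cut-style objective and evaluates the energy \(E(\vec{\gamma},\vec{\beta})\) that the classical optimizer minimizes; the graph weights, qubit count, and layer count are arbitrary choices.

```
import numpy as np
from functools import reduce

N = 3
I2, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])

def op_on(op, i):
    # Embed a single-qubit operator on qubit i of the N-qubit register.
    return reduce(np.kron, [op if j == i else I2 for j in range(N)])

# Toy weighted Ising objective H_obj = sum_{(i,j)} w_ij Z_i Z_j, X-mixer H_mix = -sum_i X_i.
edges = {(0, 1): 1.0, (1, 2): 0.7, (0, 2): 0.4}
H_obj = sum(w * op_on(Z, i) @ op_on(Z, j) for (i, j), w in edges.items())
diag_obj = np.diag(H_obj)          # H_obj is diagonal in the computational basis

def qaoa_energy(gammas, betas):
    """E(gamma, beta) = <gamma,beta| H_obj |gamma,beta> for the state of Eq. (3)."""
    psi = np.full(2 ** N, 1 / np.sqrt(2 ** N), dtype=complex)   # |+>^N
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * diag_obj) * psi                   # exp(-i*g*H_obj)
        for i in range(N):                                       # exp(-i*b*H_mix) = prod_i exp(i*b*X_i)
            psi = np.cos(b) * psi + 1j * np.sin(b) * (op_on(X, i) @ psi)
    return float(np.real(psi.conj() @ (diag_obj * psi)))

print(qaoa_energy(gammas=[0.4, 0.9], betas=[0.7, 0.3]))          # L = 2 layers
```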
Note that for QAOA, the energy landscape with respect to a single parameter \(\theta\) is related to the following process. First, an initial quantum state is prepared. Then, if applicable, all the unitary operations which precede the \(\theta\)-dependent operation are applied, which transforms the initial state into a different state (possibly a mixed state for noisy evolution). Afterwards, under an assumption of pure evolution, a unitary \(\exp(-{\rm i}\theta H)\) for the mixer or objective Hamiltonian \(H\) is applied. Finally, the remaining operations are applied and the energy estimation with respect to the observable is conducted. As shown in Appendix B, the energy function with respect to \(\theta\) takes the form \[C+\sum_{i>j}A_{i,j}\cos(\theta(E_{i}-E_{j})+B_{i,j}), \tag{4}\] in which \(\{E_{i}\}\) is the set of all eigenvalues of the operator \(H\), and the real parameters \(C,A_{i,j},B_{i,j}\) depend on the initial state, the observable, and the \(\theta\)-independent quantum operations. Note that Eq. (4) is highly nonlinear, therefore its optimization may be difficult in practice. This is in contrast to typically used VQE approaches, in which the parameter-dependent unitary can be reduced to a single-qubit gate, which in turn may result in a simple, yet powerful gradient-free optimization technique [50, 51]. Unfortunately, the number of cosines in Eq. (4) may grow quadratically with the number of distinct eigenvalues of the considered Hamiltonian. In the case of the objective Hamiltonian the number may be particularly high. While for many simple problems like unweighted Max-Cut or Max-SAT the number of different eigenvalues usually grows polynomially with the size of the data, for weighted Max-Cut each partition may result in a different objective value, which may give \(\mathcal{O}(2^{n})\) different energies in general. A complicated energy landscape can be seen already even for a small and simple instance, see Fig. 1. For problems generating such complicated landscapes, more sophisticated methods may be required.

Figure 1: Illustration of the highly nonlinear energy landscape of QAOA for Max-Cut on a weighted Barabasi-Albert graph with 10 nodes, for the objective Hamiltonian (left) and the mixer Hamiltonian (right). \(E_{\rm norm}\) is a standardized energy of the objective Hamiltonian, so that the eigenvalues lie in \([0,1]\).

### Homotopy optimization method One of the well-known methods to solve highly nonlinear problems is _homotopy optimization_, where a homotopy map is constructed between two systems. The solution corresponding to one of the systems is transformed into the solution of the other system. For example, consider a function \(f_{\rm targ}(x)\) which encodes a computationally hard problem and \(f_{\rm init}(x)\) which encodes a problem with an easy-to-find solution. Then a particular homotopy map between the systems can be given as \[\mathcal{F}(\alpha,x)=g_{1}(\alpha)f_{\rm targ}(x)+g_{2}(\alpha)f_{\rm init}(x),\ \ \ \ 0\leq\alpha\leq 1, \tag{5}\] where \[g_{1}(0)=0,\ \ \ g_{2}(0)=1,\] \[g_{1}(1)=1,\ \ \ g_{2}(1)=0. \tag{6}\] Here we get a family of minimization problems \(\min_{x}\mathcal{F}(\alpha,x)\), one for each value of \(\alpha\) from 0 to 1. We track the optimized solutions starting from \((\alpha,x)=(0,x_{0})\), as \(\alpha\) moves from 0 to 1, which for a successful homotopy map leads to \((\alpha,x)=(1,x_{1})\), where \(x_{1}\) is ideally the optimal solution of \(f_{\rm targ}\). 
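A minimal Python sketch of this tracking procedure on a one-dimensional toy problem (our own example; the choices of \(f_{\rm init}\), \(f_{\rm targ}\), the optimizer, and the step size are arbitrary), with \(g_{1},g_{2}\) chosen to satisfy the boundary conditions of Eq. (6):

```
import numpy as np
from scipy.optimize import minimize

f_init = lambda x: (x - 1.0) ** 2                      # easy problem, minimum at x = 1
f_targ = lambda x: 0.05 * x ** 2 + np.cos(3.0 * x)     # wiggly target with many local minima
g1, g2 = (lambda a: a), (lambda a: 1.0 - a)            # g1(0)=0, g2(0)=1, g1(1)=1, g2(1)=0, as in Eq. (6)

def F(a, x):                                           # homotopy map of Eq. (5)
    return g1(a) * f_targ(x) + g2(a) * f_init(x)

x = np.array([1.0])                                    # minimizer of f_init
alpha_init, alpha_step = 0.0, 0.05
for a in np.arange(alpha_init, 1.0 + 1e-9, alpha_step):
    # warm-start each intermediate problem from the previous solution
    x = minimize(lambda v: F(a, v[0]), x0=x, method="Nelder-Mead").x
print("tracked minimizer of f_targ:", x[0], "value:", f_targ(x[0]))
```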
The state-of-the-art approach is to start from \((\alpha_{\rm init},x_{\rm init})\) with \(x_{\rm init}\) minimizing \(\mathcal{F}(0,x)=f_{\rm init}(x)\). Then the problem \(\min_{x}\mathcal{F}(\alpha+\alpha_{\rm step},x)\) is iteratively solved using the solution of \(\min_{x}\mathcal{F}(\alpha,x)\) as a starting point, for sufficiently small \(\alpha_{\rm step}>0\) [25]. ## 3 Hamiltonian-Oriented Homotopy QAOA ### Proposed method The Hamiltonian-oriented homotopy QAOA decomposes the optimization process of the objective Hamiltonian into several optimization loops. Each loop optimizes the energy \[E_{\alpha}(\vec{\gamma},\vec{\beta})=\langle\vec{\gamma},\vec{\beta}|H(\alpha)|\vec{\gamma},\vec{\beta}\rangle, \tag{7}\] where \(H(\alpha)\) encodes the homotopy map \[H(\alpha)=g_{1}(\alpha)H_{\rm mix}+g_{2}(\alpha)H_{\rm obj},\ \ 0\leq\alpha\leq 1. \tag{8}\] For \(\alpha=1\) the expectation value in Eq. (7) is the energy corresponding to \(H_{\rm obj}\). While there is freedom in the choice of \(g_{1}\) and \(g_{2}\), throughout the paper we use the simple case \[g_{1}(\alpha)=1-\alpha,\ \ \ g_{2}(\alpha)=\alpha. \tag{9}\] (Note that, compared with Eqs. (5)-(6), the roles of \(g_{1}\) and \(g_{2}\) are exchanged here, since \(H_{\rm mix}\) plays the role of the easy-to-solve initial system.) During the optimization process, we choose an initialization of the mixer and objective parameters (at \(\alpha=0\)) in such a way that the parameters corresponding to the mixer are sampled from the uniform random distribution \(\mathrm{U}(a,b)\) on the interval \([a=0,b=2\pi]\) and the objective parameters are all set to 0. With this initialization we make sure that the homotopy starts from the exact ground state of the mixer in a noise-free setting, as applying the mixer to its own eigenstate does not change the state. For \(\alpha^{\prime}>\alpha\geq 0\) the initial parameters are chosen as \[(\vec{\gamma},\vec{\beta})^{\rm init}_{\alpha^{\prime}}=(\vec{\gamma},\vec{\beta})^{*}_{\alpha}, \tag{10}\] where \(*\) denotes the optimal parameters for \(\alpha\). It should be noted that each run of HOHo-QAOA follows the generic structure of the homotopy process of Eq. (8), where the "run-time" of HOHo-QAOA is characterized by \(\alpha_{\rm step}\) for a fixed \(\alpha_{\rm init}\). The parameter \(\alpha_{\rm init}\) fixes the initial \(\alpha\) value. Generally, it can be inferred that a better approximation to the optimal solution can be achieved if we choose sufficiently small values of \(\alpha_{\rm step}\) and \(\alpha_{\rm init}\). More precisely, a small value of \(\alpha_{\rm step}\) helps us realize the homotopy of Eq. (8), and at the same time, if we initiate with \(\alpha_{\rm init}\to 0\), it becomes easier to find the ground state for the first step. To show this, throughout the paper we investigate the normalized energy \[E_{\rm norm}(E_{\alpha}(\vec{\gamma},\vec{\beta}),\alpha)=\frac{E_{\alpha}(\vec{\gamma},\vec{\beta})-\min H(\alpha)}{\max H(\alpha)-\min H(\alpha)}, \tag{11}\] with respect to the parameters of HOHo-QAOA, where \(E_{\rm norm}(\alpha)=0\) is the normalized ground energy for any \(\alpha\in[0,1]\), and \(\min H(\alpha)\) (\(\max H(\alpha)\)) denotes the minimum (maximum) eigenvalue of \(H(\alpha)\). 
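Before turning to the numerical study, the following Python sketch shows the HOHo-QAOA outer loop just described on a toy 2-qubit instance (our own illustration; the instance, the classical optimizer, and the number of layers are arbitrary choices): the PQC of Eq. (3) is fixed, only the cost Hamiltonian \(H(\alpha)\) of Eqs. (8)-(9) changes between loops, and each loop is warm-started per Eq. (10).

```
import numpy as np
from functools import reduce
from scipy.optimize import minimize

N, L = 2, 3
I2, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])
op = lambda o, i: reduce(np.kron, [o if j == i else I2 for j in range(N)])

H_obj = 1.0 * op(Z, 0) @ op(Z, 1)                 # toy Ising objective
H_mix = -(op(X, 0) + op(X, 1))                    # standard X-mixer
diag_obj = np.diag(H_obj)

def qaoa_state(params):
    gammas, betas = params[:L], params[L:]
    psi = np.full(2 ** N, 1 / np.sqrt(2 ** N), dtype=complex)
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * diag_obj) * psi
        for i in range(N):
            psi = np.cos(b) * psi + 1j * np.sin(b) * (op(X, i) @ psi)
    return psi

def energy(params, alpha):                        # Eq. (7) with H(alpha) of Eqs. (8)-(9)
    H = (1.0 - alpha) * H_mix + alpha * H_obj
    psi = qaoa_state(params)
    return float(np.real(psi.conj() @ (H @ psi)))

def e_norm(val, alpha):                           # Eq. (11)
    H = (1.0 - alpha) * H_mix + alpha * H_obj
    lo, hi = np.linalg.eigvalsh(H)[[0, -1]]
    return (val - lo) / (hi - lo)

rng = np.random.default_rng(0)
params = np.concatenate([np.zeros(L), rng.uniform(0.0, 2 * np.pi, L)])   # ZR initialization
alpha_init, alpha_step = 0.0, 0.05
for alpha in np.arange(alpha_init, 1.0 + 1e-9, alpha_step):
    params = minimize(energy, params, args=(alpha,), method="COBYLA").x  # warm start, Eq. (10)
print("final E_norm at alpha = 1:", e_norm(energy(params, 1.0), 1.0))
```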
### Initialization strategy In the following, we first numerically discuss the proposed settings for the initial QAOA parameters \((\vec{\gamma},\vec{\beta})^{\rm init}\). With this setting we show that the homotopy parameters, i.e., \(\alpha_{\rm init}\) and \(\alpha_{\rm step}\), can be chosen detached from zero without compromising the efficiency of the method. We consider an optimized energy \(E_{\rm norm}^{*}\), or, in the case of HOHo-QAOA, also an intermediate optimized step energy \(E_{\rm norm}^{*}(\alpha)\). In the numerical results, \(E_{\rm norm}^{*}\) is averaged over 100 experiments. Details of the experiments can be found in Appendix A. For the numerical investigation of optimal QAOA parameters, which is illustrated in Figure 2, we consider three possible initialization choices of the mixer and objective parameters at \(\alpha=\alpha_{\rm init}\):

1. RR (Random Random): the parameters corresponding to the mixer and objective Hamiltonians are chosen from a uniform random distribution \(\mathrm{U}(0,2\pi)\), i.e., \(\gamma_{j}^{\rm init}\sim\mathrm{U}(0,2\pi)\), \(\beta_{j}^{\rm init}\sim\mathrm{U}(0,2\pi)\).
2. NZR (Near-Zero Random): the parameters corresponding to the mixer Hamiltonian are chosen from \(\mathrm{U}(0,2\pi)\) but the objective parameters are sampled from values very close to zero, i.e., \(\gamma_{j}^{\rm init}\sim\mathrm{U}(0,v),\ \beta_{j}^{\rm init}\sim\mathrm{U}(0,2\pi)\), where \(v=0.05\).
3. ZR (Zero Random): the mixer parameters are sampled from \(\mathrm{U}(0,2\pi)\) and the objective parameters are all zeros, i.e., \(\gamma_{j}^{\rm init}=0,\ \beta_{j}^{\rm init}\sim\mathrm{U}(0,2\pi)\), as proposed before.

Figure 2: The impact of different methods of initialization of \(\gamma_{j},\beta_{j}\) on HOHo-QAOA. The left, middle, and right panels represent the convergence for RR (Random Random), NZR (Near-Zero Random) with parameter \(v=0.05\), and ZR (Zero Random) initialization, respectively; see Sec. 3.2 for details. It is visible that ZR outperforms the other two initializations. For \(\alpha_{\rm init}\leq 0.2\) the performance of NZR and ZR is comparable, but as we tune \(\alpha_{\rm init}>0.2\), the minima for NZR scatter in the region \(0.10<E_{\rm norm}<0.15\) whereas the minima for ZR cluster in a very narrow \(E_{\rm norm}\)-width.

From Figure 2 we conclude that ZR gives the best approximation to the ground state. This is because, under the ZR setting, the initial parameters of QAOA always correspond to the exact ground state of \(H_{\rm mix}\) while \(H_{\rm obj}\) is turned off. This is within the spirit of homotopy optimization, in which starting in the optimal solution of the initial system is critical. Hence this good approximation to the initial parameters leads us to a better solution for the ground state of \(H_{\rm obj}\). Keeping in mind that we sample \(\alpha_{\rm init}\) in the range \(0\leq\alpha_{\rm init}\leq 0.2\), we see that NZR shows performance comparable to ZR, so the initialization of \(\gamma_{j},\beta_{j}\) can be either one of them, relaxing the conditions on the choice of \(\vec{\gamma}\) and \(\vec{\beta}\). In the remainder of this paper all the numerical results are initialized with the ZR setting.
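In code, the three schemes amount to the following choices of the initial angle vectors (a small sketch; the number of layers \(L\) and the random generator are placeholders):

```
import numpy as np
rng, L, v = np.random.default_rng(), 5, 0.05
init = {                                                   # (gamma_init, beta_init)
    "RR":  (rng.uniform(0, 2 * np.pi, L), rng.uniform(0, 2 * np.pi, L)),
    "NZR": (rng.uniform(0, v, L),         rng.uniform(0, 2 * np.pi, L)),
    "ZR":  (np.zeros(L),                  rng.uniform(0, 2 * np.pi, L)),
}
```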
Now we move to the analysis of the choice of a suitable \(\alpha_{\rm init}\). In Figure 3 we investigate the \(\alpha_{\rm init}\) dependency of \(E^{*}_{\rm norm}\), where the energy is averaged over 100 experiments. In Figure 3(a) we take 3 layers of HOHo-QAOA and observe that the mean optimal energy and the corresponding standard deviation remain unchanged (which we term the _region of stability_) with respect to \(\alpha_{\rm init}\) in the range \(0\leq\alpha_{\rm init}\leq 0.5\). With an increase in the number of nodes from 6 to 16, the _region of stability_ shifts upwards but remains in the range \(0\leq\alpha_{\rm init}\leq 0.5\). This observation leads us to conclude that \(\alpha_{\rm init}\) can be chosen detached from zero without degrading the performance of HOHo-QAOA, or at least that the region of stability does not shrink rapidly with the increased size of the problem. So setting \(\alpha_{\rm init}\) in the region of stability along with \(\gamma_{j}^{\rm init}=0,\beta_{j}^{\rm init}\sim\mathrm{U}(0,2\pi)\) yields a solution with a particularly small energy value. In Figure 3(b) we investigate how the efficiency of the optimization depends on \(\alpha_{\rm step}\). During this investigation, we take 10 layers of HOHo-QAOA. We observe that in the range \(10^{-4}\leq\alpha_{\rm step}<0.5\) the approximation to the ground energy and the corresponding standard deviation remain almost unchanged as \(\alpha_{\rm step}\to 0\), giving rise to a region of stability with respect to \(\alpha_{\rm step}\). This behavior of \(E^{*}_{\rm norm}\) with \(\alpha_{\rm step}\) is similar to what we observe for \(\alpha_{\rm init}\). This leads us to the conclusion that one can choose \(\alpha_{\rm step}\) detached from zero for HOHo-QAOA. It should be noted that, due to the high simulation cost, the experiment for 16 qubits is halted at \(\alpha_{\rm step}=10^{-2}\), whereas the investigation for 6, 16 qubits is extended to \(10^{-4}\).

Figure 3: We illustrate the dependency of \(E^{*}_{\rm norm}\) on \(\alpha_{\rm init}\) and \(\alpha_{\rm step}\). In (a) the variation of \(E^{*}_{\rm norm}\) with \(\alpha_{\rm init}\) for 3 layers of HOHo-QAOA is presented, with \(\gamma_{j}^{\rm init}=0,\beta_{j}^{\rm init}\sim\mathrm{U}(0,2\pi)\). In the figure, we see a _region of stability_ of HOHo-QAOA with respect to \(\alpha_{\rm init}\) in the range 0.0 to 0.50. In (b) we present \(E^{*}_{\rm norm}\) vs. \(\alpha_{\rm step}\) using 10 layers of HOHo-QAOA. Just as in the case of \(\alpha_{\rm init}\), the same _region of stability_ can be observed for \(\alpha_{\rm step}\). This gives us the preference on the choice of the _step parameter_ while utilizing HOHo-QAOA. It should be noted that the y-axis in (a) is in linear scale whereas in (b) it is in log scale. The lines in both plots are taken \(\alpha_{\rm init}\)- and \(\alpha_{\rm step}\)-wise and are the mean of 100 experiments. The shaded areas are the standard deviation of the energies.

The discussion and numerical results from the previous paragraphs give us the following initialization rules of HOHo-QAOA, which lead to a high efficiency of the method:

1. The parameters of the mixer and objective should be initialized with the ZR setting, i.e., \(\gamma_{j}^{\rm init}=0,\beta_{j}^{\rm init}\sim\mathrm{U}(0,2\pi)\).
2. Although one can infer that \(\alpha_{\rm init}\to 0\) along with \(\alpha_{\rm step}\to 0\) gives the best result, our investigations show that one can choose the homotopy parameters detached from zero. This greatly reduces the cost of simulating HOHo-QAOA.

Figure 4: Comparison of different initializations for QAOA and T-QAOA. In the left (right) figure we illustrate how \(E^{*}_{\text{norm}}\) changes with an increasing number of layers in QAOA (T-QAOA) under the RR and ZR settings. The solid line is the median energy over 100 experiments, while the dashed line represents the best sample, taken layer-wise and node-wise by choosing the minimum energy among all the experiments. The areas are delimited by the first and third quartiles.
Figure 5: Performance of HOHo-QAOA compared to QAOA and T-QAOA. In both figures, for all the QAOA methods, we applied the ZR setting. The areas are delimited by the first and third quartiles. The solid line presents the median \(E^{*}_{\text{norm}}\) over 100 experiments for the left figure and 50 experiments for the right figure, and the dashed line represents the best sample, taken layer-wise and node-wise by choosing the minimum energy among all the experiments. In the left figure, the number of nodes is fixed to 10. In the right, the number of layers is fixed to 5 and the energy is sampled for 6 to 18 nodes. The homotopy parameters are set as \(\alpha_{\text{init}}=0\) and \(\alpha_{\text{step}}=0.01\). One can see that in both cases the averaged energy as well as the best sample of HOHo-QAOA outperforms the other variants of QAOA.

### Performance analysis In this section we analyze the performance of the introduced algorithm with respect to the other optimization strategies introduced above. While it is natural for HOHo-QAOA to initialize using the ZR strategy, it is unclear whether this choice will improve or worsen the results for QAOA or T-QAOA. Therefore, before comparing the state-of-the-art methods to the introduced one, we verify whether there is any difference in the performance of QAOA and T-QAOA with respect to the initialization of the optimized angles. In Fig. 4 we investigate the state-of-the-art methods for parameters \((\gamma_{j},\beta_{j})^{\mathrm{init}}\) initialized with the RR and ZR strategies. We observe that the performance of QAOA and T-QAOA is not influenced by the chosen strategy. This justifies using the ZR strategy when comparing QAOA, T-QAOA, and HOHo-QAOA. Note that for QAOA we observe undesired non-monotonic behavior with respect to the number of layers. We claim that this is caused by the complicated landscape of the energy function, which makes it difficult to optimize if no information about the problem instance is used during the initialization, especially for a large number of nodes. This argument is consistent with the good performance of T-QAOA, where the initial parameters of the \((L+1)\)-layer step are evaluated based on locally optimal solutions of the \(L\)-layer step. In Fig. 5 we compare the performance of HOHo-QAOA with the other variants when \((\gamma_{j},\beta_{j})^{\mathrm{init}}\) are initialized using the ZR setting. In the first experiment we run the algorithms with a fixed number of nodes while increasing the number of layers. In the second experiment the number of layers is fixed while we vary the number of nodes. The plots present optimized energy values, averaged respectively over 100 and 50 instances. The data shows that the introduced HOHo-QAOA gives significantly smaller energy in both experimental setups. The improvement remains as more layers of HOHo-QAOA are used, and HOHo-QAOA also outperforms the other variants of QAOA for higher numbers of nodes. These conclusions remain valid also for the best sample solution (dashed line). It should be noted that HOHo-QAOA outperforms QAOA and T-QAOA at each and every layer, from the initial layer 5 to the final layer 100. ## 4 Conclusion In the article we present a novel algorithm for combinatorial optimization. The method is a combination of homotopy optimization with QAOA. In our method the observable used for computing the energy is changed during the optimization process. 
The process starts with the observable being the mixer, for which the initial state of QAOA is a ground state, and the observable is slowly moved into the objective Hamiltonian. In addition we verify that, although traditionally in the homotopy method the initial value of the transition parameter \(\alpha\) should be 0 and the step should be as small as possible, for QAOA the values of the considered parameters can be detached from 0. Homotopy optimization is an algorithm dedicated to nonlinear objective functions, and since even a simple QAOA landscape is a linear combination of many (for some problems exponentially many) sinusoidal functions, our approach is well motivated for such energy functions. This is in contrast to the typical VQE optimization process, in which the function landscape with respect to a single parameter is just a sine. By comparing our approach and the QAOA algorithm with typical choices of optimization strategies we numerically confirmed that our method outperforms state-of-the-art approaches. While our algorithm was only presented for QUBO and the \(X\)-mixer, it is not restricted to them. In particular, if the transition function is of the form \(H(\alpha)=g_{1}(\alpha)H_{\mathrm{mix}}+g_{2}(\alpha)H_{\mathrm{obj}}\), we only require the energy of \(H_{\mathrm{mix}}\) to be efficiently computable. This includes the XY-mixer [46] and the Grover mixer [45], for which the initial state can be efficiently prepared. Moreover, our approach also remains valid for higher-order binary problems [11, 15] and more advanced pseudocode-based QAOA Hamiltonian implementations [12]. Acknowledgment: A.K., A.G., and L.B. have been partially supported by the Polish National Science Center under grant agreement 2019/33/B/ST6/02011. A.G. acknowledges support from the National Science Center under grant agreement 2020/37/N/ST6/02220. The authors would like to thank Zoltan Zimboras, Ozlem Salehi and Jaroslaw A. Miszczak for valuable discussions and comments on the manuscript. Data and code availability: Data and code are available at [https://doi.org/10.5281/zenodo.7585691](https://doi.org/10.5281/zenodo.7585691)
2305.01083
Computationally Relaxed Locally Decodable Codes, Revisited
We revisit computationally relaxed locally decodable codes (crLDCs) (Blocki et al., Trans. Inf. Theory '21) and give two new constructions. Our first construction is a Hamming crLDC that is conceptually simpler than prior constructions, leveraging digital signature schemes and an appropriately chosen Hamming code. Our second construction is an extension of our Hamming crLDC to handle insertion-deletion (InsDel) errors, yielding an InsDel crLDC. This extension crucially relies on the noisy binary search techniques of Block et al. (FSTTCS '20) to handle InsDel errors. Both crLDC constructions have binary codeword alphabets, are resilient to a constant fraction of Hamming and InsDel errors, respectively, and under suitable parameter choices have poly-logarithmic locality and encoding length linear in the message length and polynomial in the security parameter. These parameters compare favorably to prior constructions in the poly-logarithmic locality regime.
Alexander R. Block, Jeremiah Blocki
2023-05-01T20:31:01Z
http://arxiv.org/abs/2305.01083v3
# Computationally Relaxed Locally Decodable Codes, Revisited+ ###### Abstract We revisit computationally relaxed locally decodable codes (crLDCs) (Blocki et al., Trans. Inf. Theory '21) and give two new constructions. Our first construction is a Hamming crLDC that is conceptually simpler than prior constructions, leveraging digital signature schemes and an appropriately chosen Hamming code. Our second construction is an extension of our Hamming crLDC to handle insertion-deletion (InsDel) errors, yielding an InsDel crLDC. This extension crucially relies on the noisy binary search techniques of Block et al. (FSTTCS '20) to handle InsDel errors. Both crLDC constructions have binary codeword alphabets, are resilient to a constant fraction of Hamming and InsDel errors, respectively, and under suitable parameter choices have poly-logarithmic locality and encoding length linear in the message length and polynomial in the security parameter. These parameters compare favorably to prior constructions in the poly-logarithmic locality regime. ## I Introduction Locally decodable codes (LDCs) are error-correcting codes that admit super-efficient (i.e., poly-logarithmic time) recovery of individual symbols of an encoded message by querying only a few locations into a received word. For an alphabet \(\Sigma\) and a (normalized) metric \(\mathrm{dist}\), a pair of algorithms \(\mathsf{Enc}\colon\Sigma^{k}\to\Sigma^{K}\) and \(\mathsf{Dec}\colon[k]\to\Sigma\) (for \([k]:=\{1,\ldots,k\}\)) is a \((\ell,\rho,p)\)-\(\mathsf{LDC}\) if \(\mathsf{Dec}\) is a randomized oracle algorithm such that for any message \(x\) and any received word \(y^{\prime}\), if \(\mathrm{dist}(\mathsf{Enc}(x),y^{\prime})\leqslant\rho\) then for every \(i\), \(\mathsf{Dec}^{y^{\prime}}(i)\) makes at most \(\ell\) queries to \(y^{\prime}\) and outputs \(x_{i}\) with probability at least \(p\). Here, \(k\) and \(K\) are the _message_ and _block lengths_, respectively, \(k/K\) is the _rate_, \(\ell\) is the _locality_, \(\rho\) is the _error-rate_, and \(p\) is the _success probability_. Studied extensively in the context of worst-case _Hamming errors_ [14, 15, 16, 17, 18, 19] where \(\mathrm{dist}\) is the normalized Hamming distance (HAM), _Hamming_ \(\mathsf{LDCs}\) seem to have irreconcilable trade-offs between the rate, error-rate, and locality. For constant error-rate (the target of most applications), the best known constructions with constant \(\ell\geqslant 3\) locality have super-polynomial block length [18, 19, 20], for \(\ell=2\) it is known that \(K=\Theta(\exp(k))\) [16], and the best known constructions with constant rate have super-logarithmic (sub-polynomial) locality [14]. Furthermore, the best known lower bounds for general Hamming LDCs with constant error-rate and locality \(\ell\geqslant 3\) are \(K=\Omega(k^{\frac{\ell+1}{\ell-1}})\) [20], and any locality \(\ell=3\) linear Hamming LDC has \(K=\Omega(k^{2}/\log(k))\) [20]. See surveys [17, 18] for more details. To remedy these dramatic trade-offs, Ben-Sasson et al. [19] introduced _relaxed_ LDCs (rLDCs). Relaxed LDCs are LDCs that additionally allow the decoder to output a symbol \(\bot\not\in\Sigma\), which signifies that the decoder does not know the correct value, under the following restrictions: the decoder (a) does not output \(\bot\) "too often"; and (b) never outputs \(\bot\) when the queried codeword is uncorrupted. This relaxation yields LDCs with constant locality and block length \(K=k^{1+\varepsilon}\) for (small) constant \(\varepsilon>0\). Blocki et al. 
consider a further relaxation of rLDCs known as _computationally relaxed_ LDCs (crLDCs) [19]: rLDCs that are only resilient against adversarial channels that are computationally bounded (i.e., probabilistic polynomial time (PPT) channels). This relaxation, inspired by the work of Lipton [20], yields crLDCs with constant rate, constant error-rate, and polylog locality. Recently, advances in coding theory have turned their focus to understanding and constructing codes which are resilient to _insertion-deletion errors_ (InsDel errors). ### _Overview of Results_ In this work, we revisit \(\mathsf{crLDCs}\) with respect to both Hamming and InsDel errors. We begin by defining \(\mathsf{crLDCs}\). **Definition 1** (Computationally Relaxed Locally Decodable Codes).: _Let \(\mathcal{C}=\{C_{\lambda}[K,k,q_{1},q_{2}]\}_{\lambda\in\mathbb{N}}\) be a code family with encoding algorithms \(\{\mathsf{Enc}_{\lambda}\colon\Sigma_{1}^{k}\to\Sigma_{2}^{K}\}_{\lambda\in\mathbb{N}}\) where \(|\Sigma_{i}|=q_{i}\). We say \(\mathcal{C}\) is a \((\ell,\rho,p,\delta,\mathrm{dist})\)-computationally relaxed locally decodable code (\(\mathsf{crLDC}\)) if there exists a family of randomized oracle decoding algorithms \(\{\mathsf{Dec}_{\lambda}\colon[k]\to\Sigma_{1}\}_{\lambda\in\mathbb{N}}\) such that:_ 
1. _For all_ \(\lambda\in\mathbb{N}\) _and any_ \(\tilde{y}\in\Sigma_{2}^{*}\)_,_ \(\mathsf{Dec}_{\lambda}^{\tilde{y}}(i)\) _makes at most_ \(\ell\) _queries to_ \(\tilde{y}\) _for any_ \(i\in[k]\)_;_
2. _For all_ \(\lambda\in\mathbb{N}\) _and any_ \(x\in\Sigma_{1}^{k}\)_, we have_ \[\Pr[\mathsf{Dec}_{\lambda}^{\mathsf{Enc}_{\lambda}(x)}(i)=x_{i}]=1\] _for all_ \(i\in[k]\)_;_
3. _Define binary predicate_ \(\mathsf{Fool}(\tilde{y},\rho,p,x,y,\lambda)=1\) _iff_ 1. \(\mathrm{dist}(y,\tilde{y})\leqslant\rho\)_; and_ 2. \(\exists i\in[k]\) _such that_ \[\Pr[\mathsf{Dec}_{\lambda}^{\tilde{y}}(i)\in\{x_{i},\bot\}]<p,\] _where the probability is taken over_ \(\mathsf{Dec}_{\lambda}\)_;_ _otherwise_ \(\mathsf{Fool}(\tilde{y},\rho,p,x,y,\lambda)=0\)_. We require that for all PPT adversaries_ \(\mathcal{A}\) _there exists a negligible function_ \(\varepsilon_{\mathsf{F}}(\cdot)\) _such that for all_ \(\lambda\in\mathbb{N}\) _and all_ \(x\in\Sigma_{1}^{k}\)_, we have_ \[\Pr[\mathsf{Fool}(\mathcal{A}(y),\rho,p,x,y,\lambda)=1]\leqslant\varepsilon_{ \mathsf{F}}(\lambda),\] _where the probability is taken over_ \(\mathcal{A}\) _and_ \(y=\mathsf{Enc}_{\lambda}(x)\)_._
4. _Define binary predicate_ \(\mathsf{Limit}(\tilde{y},\rho,\delta,x,y,\lambda)=1\) _iff_ 1. \(\mathrm{dist}(y,\tilde{y})\leqslant\rho\)_; and_ 2. \(|\mathsf{Good}(\tilde{y})|<\delta\cdot k\)_, where_ \[\mathsf{Good}(\tilde{y}):=\{i\in[k]\colon\Pr[\mathsf{Dec}_{\lambda}^{\tilde{y}}(i)=x_{i}]>2/3\}\] _and the probability is taken over_ \(\mathsf{Dec}_{\lambda}\)_;_ _otherwise_ \(\mathsf{Limit}(\tilde{y},\rho,\delta,x,y,\lambda)=0\)_. We require that for all PPT adversaries_ \(\mathcal{A}\) _there exists a negligible function_ \(\varepsilon_{\mathsf{L}}(\cdot)\) _such that for all_ \(\lambda\in\mathbb{N}\) _and all_ \(x\in\Sigma_{1}^{k}\)_, we have_ \(\Pr[\mathsf{Limit}(\mathcal{A}(y),\rho,\delta,x,y,\lambda)=1]\leqslant \varepsilon_{\mathsf{L}}(\lambda)\)_, where the probability is taken over_ \(\mathcal{A}\) _and_ \(y=\mathsf{Enc}_{\lambda}(x)\)_._

_If \(\mathrm{dist}\) is the normalized Hamming distance \(\mathsf{HAM}\), we say the code is a Hamming \(\mathsf{crLDC}\); if \(\mathrm{dist}\) is the normalized edit distance \(\mathsf{ED}\), we say the code is an InsDel \(\mathsf{crLDC}\). Here, \(\ell\) is the locality, \(\rho\) is the error-rate, \(p\) is the success probability, and a function is negligible if it is \(o(x^{-c})\) for all constants \(c>0\). If \(q_{2}=2\), we say that \(\mathcal{C}\) is a family of binary \(\mathsf{crLDCs}\), and if \(q_{1}=q_{2}\) we simply write \(C_{\lambda}[K,k,q_{1}]\)._ Definition 1 closely follows the \(\mathsf{crLDC}\) definition of Blocki et al. [1] with a few modifications. First, the constructions of [1] utilize a public random seed for a collision-resistant hash function, so their \(\mathsf{crLDC}\) definition is quantified over the randomness of the seed generation algorithm. Our constructions do not require a public random seed, so we omit this algorithm from our definition and instead quantify the security of our \(\mathsf{crLDC}\) over a code family \(\{C_{\lambda}\}_{\lambda\in\mathbb{N}}\). This quantification also captures the notion of asymptotic security when interacting with PPT adversaries, which differs from standard (\(r\))\(\mathsf{LDC}\) definitions that consider information-theoretic adversaries. Moreover, [1] requires the public random seed to be generated in an honest (i.e., trusted) way, and our definition and constructions circumvent this requirement. 
Second, we slightly strengthen the security definition by tweaking the predicate \(\mathsf{Fool}\): in Definition 1, the adversary wins if there _exists_ an index \(i\) (not necessarily known by the adversary) such that the probability the decoder outputs correctly on input \(i\) is less than \(p\). In contrast, [1] requires the adversary to output a corrupt codeword \(y^{\prime}\) and a target index \(i\) such that the probability the decoder outputs correctly on index \(i\) is less than \(p\). Note that requiring Definition 1 to hold for \(p=2/3\), \(\varepsilon_{\mathsf{F}}(\lambda)=\varepsilon_{\mathsf{L}}(\lambda)=0\), and for all computationally unbounded adversaries \(\mathcal{A}\) results in the original \(\mathsf{rLDC}\) definition [1]. Our first contribution is constructing a family of binary Hamming \(\mathsf{crLDCs}\) satisfying Definition 1. Our construction borrows from code concatenation techniques [14], which utilize an outer code \(C_{out}=(\mathsf{Enc}_{out},\mathsf{Dec}_{out})\) and an inner code \(C_{in}=(\mathsf{Enc}_{in},\mathsf{Dec}_{in})\) and encode a message \(x\) as follows: 1. compute \(y=\mathsf{Enc}_{out}(x)\); 2. partition \(y\) into some number \(d\) of blocks \(y^{(1)}\|\ldots\|y^{(d)}\); 3. compute \(Y^{(i)}=\mathsf{Enc}_{in}(y^{(i)})\) for all \(i\); and 4. output \(Y=Y^{(1)}\|\ldots\|Y^{(d)}\); here, \(\|\) denotes string concatenation. In our construction, we use the identity function as \(C_{out}\), utilize a suitable _digital signature scheme_ to sign each block \(y^{(i)}\), and use a classical Hamming code as \(C_{in}\). Briefly, a digital signature scheme with signatures of length \(r(\cdot)\) is a tuple of PPT algorithms \(\Pi=(\mathsf{Gen},\mathsf{Sign},\mathsf{Ver})\) that satisfy the following properties: 1. \(\mathsf{Gen}\) takes as input security parameter \(\lambda\in\mathbb{N}\) (in unary) and outputs a key pair \((\mathsf{pk},\mathsf{sk})\), where \(\mathsf{pk}\) is the _public/verification key_ and \(\mathsf{sk}\) is the _private/signing key_; 2. \(\mathsf{Sign}\) takes as input a message \(m\) of arbitrary length and the signing key \(\mathsf{sk}\) and outputs a signature \(\sigma\in\{0,1\}^{r(\lambda)}\) of message \(m\). 3. \(\mathsf{Ver}\) is deterministic and takes as input a message \(m\), some signature \(\sigma\), and a verification key \(\mathsf{pk}\), and outputs \(1\) iff \(\sigma\) is a valid signature of message \(m\) and \(0\) otherwise. 4. For all PPT adversaries \(\mathcal{A}\), for \((\mathsf{pk},\mathsf{sk})\leftarrow\mathsf{Gen}(1^{\lambda})\), if \(\mathcal{A}\) is given pk as input and given oracle access to \(\mathsf{Sign}_{\mathsf{sk}}(\cdot)\), then \(\Pi\) is _secure_ if, except with negligible probability in \(\lambda\), \(\mathcal{A}\) cannot output a pair \((\tilde{m},\tilde{\sigma})\) such that \(\mathsf{Ver}_{\mathsf{pk}}(\tilde{m},\tilde{\sigma})=1\) and \(\mathcal{A}\) never queried \(\mathsf{Sign}_{\mathsf{sk}}(\tilde{m})\). Given a secure digital signature scheme and any binary Hamming code, we obtain our first main result. **Theorem 1**.: _Let \(\Pi\) be an \(r:=r(\lambda)\)-length signature scheme. Let \(C_{in}\) be a binary Hamming code with rate \(\beta_{in}\) and error-rate \(\rho_{in}\). 
Then for every positive polynomial \(k(\cdot)\) and constant \(c\in(0,1/2)\), there exists a code family \(\mathcal{C}_{\mathsf{H}}:=\{C_{\mathsf{H},\lambda}[K,k(\lambda),2]\}_{\lambda\in \mathbb{N}}\) and function \(\mu\!:=\!\mu(\lambda)\) such that \(\mathcal{C}_{\mathsf{H}}\) is a \((\ell,\rho,p,\delta)\)-Hamming \(\mathsf{crLDC}\) with_ * \(K=O((1/\beta_{in})\max\{k(1+\log(k)/r),r\})\)_,_ * \(\ell=O((\mu/\beta_{in})\cdot(r+\log(k)))\)_,_ * \(\rho=c\cdot\rho_{in}\)_,_ * \(p=1-\exp(-\mu(1/2-c)^{2}/2(1-c))>2/3\)_, and_ * \(\delta=1/2\)_,_ _where \(k:=k(\lambda)\)._ Our code family \(\mathcal{C}_{\mathsf{H}}\) is constant rate whenever \(\beta_{in}=\Theta(1)\) and \(\Omega(\log(k(\lambda)))=r(\lambda)\leqslant k(\lambda)\). Our construction allows for \(r(\lambda)>k(\lambda)\), but this results in locality \(\ell\geqslant K\), so it is more efficient to use a Hamming code with comparable rate and error-rate. Any choice of \(\mu\) satisfying \(p>2/3\) ensures that \(\delta=1/2\); e.g., \(\mu(\lambda):=O(\log^{1+\epsilon}(\lambda))\) for constant \(\epsilon>0\) gives us polylog locality and success probability \(1-\mathsf{negl}(\lambda)\), where \(\mathsf{negl}\) denotes some unspecified negligible function. We can instantiate Theorem 1 with a constant rate and error-rate binary Hamming code \(C_{in}\) (e.g., [11]) and an appropriate signature scheme to achieve a constant rate and error-rate Hamming \(\mathsf{crLDC}\) with polylog locality. Our construction shines when \(r(\lambda)=\operatorname{polylog}(\lambda)\) and under standard idealized models there exist signature schemes with \(r(\lambda)\) as small as \(\Theta(\log^{1+\epsilon}(\lambda))\) for small constant \(\epsilon>0\)[1, 1], assuming these schemes satisfy the following notion of concrete security: for security parameter \(\lambda\), any adversary running in time \(2^{\lambda/2}\) can violate the security of the scheme with probability at most \(2^{-\lambda/2}\) for signatures of length \(r(\lambda)=\lambda\). Plugging in \(\lambda^{\prime}=\Theta(\log^{1+\epsilon}(\lambda))\), said schemes are secure against super-polynomial time adversaries with negligible security in \(\lambda\), which implies they satisfy our definition of security for signature schemes. Using such a scheme with a constant rate and error-rate Hamming code \(C_{in}\) and \(\mu(\lambda):=O(\log^{1+\epsilon}(\lambda))\), we obtain the following corollary. **Corollary 1**.: _Let \(\Pi\) be a \(r(\lambda)=\Theta(\log^{1+\epsilon}(\lambda))\) length signature scheme for constant \(\epsilon>0\). Then for all sufficiently large positive polynomials \(k(\cdot)\), there exists code family \(\{C_{\mathsf{H},\lambda}[K,k(\lambda),2]\}_{\lambda\in\mathbb{N}}\) that is a \((\ell,\rho,p,\delta)\)-Hamming \(\mathsf{crLDC}\) with_ * \(K=O(k)\)_,_ * \(\ell=O(\log^{2(1+\epsilon)}(\lambda))\)_,_ * \(\rho=\Theta(1)\)_,_ * \(p=1-\mathsf{negl}(\lambda)\)_, and_ * \(\delta=1/2\)_,_ _where \(k:=k(\lambda)\)._ The parameters of Corollary 1 are comparable to the Hamming \(\mathsf{crLDC}\) construction of [1], which achieves \(K=O(k)\), \(\ell=\operatorname{polylog}(n)\), \(\rho=\Theta(1)\), \(p=1-\mathsf{negl}(\lambda)\), and \(\delta=\Theta(1)\). Our construction is arguably conceptually simpler than that of [1], which utilizes local expander graphs and collision-resistant hash functions (with a trusted setup), whereas our construction simply partitions, signs, and encodes. 
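The signature-scheme interface \((\mathsf{Gen},\mathsf{Sign},\mathsf{Ver})\) assumed above can be illustrated with Ed25519 from the Python `cryptography` package. The following wrappers are our own sketch; note that Ed25519 has fixed 64-byte signatures, so it illustrates the interface only, not the \(r(\lambda)\)-length schemes with the concrete-security guarantees discussed above.

```
# Illustrative (Gen, Sign, Ver) wrappers around Ed25519 (our own sketch, not the paper's scheme).
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives import serialization
from cryptography.exceptions import InvalidSignature

def gen():
    """Gen(1^lambda) -> (pk, sk); pk is 32 raw bytes, sk stays private."""
    sk = Ed25519PrivateKey.generate()
    pk = sk.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    return pk, sk

def sign(sk, msg: bytes) -> bytes:
    """Sign_sk(m); Ed25519 signatures are 64 bytes (512 bits)."""
    return sk.sign(msg)

def ver(pk: bytes, msg: bytes, sig: bytes) -> bool:
    """Ver_pk(m, sigma) -> True iff sigma is a valid signature on m."""
    try:
        Ed25519PublicKey.from_public_bytes(pk).verify(sig, msg)
        return True
    except InvalidSignature:
        return False

pk, sk = gen()
assert ver(pk, b"block-1", sign(sk, b"block-1"))
```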
Moreover, our use of signatures does not require public key infrastructure as such schemes exist from one-way functions [1]. #### Iii-A1 Extension to InsDel Errors Our second contribution is extending the construction of Theorem 1 to handle InsDel errors. Prior constructions of InsDel LDCs utilized a so-called "Hamming-to-InsDel" compiler [1, 1, 1]. Key to this compiler is a _noisy binary search_ algorithm, which intuitively allows one to search an almost sorted list and find most entries with high probability. We use this algorithm to find blocks of codewords that are not "too corrupt", allowing us to handle more general InsDel errors. We use the noisy binary search tools of Block et al. [1] and the well-known Schulman-Zuckerman InsDel code [13] for \(C_{in}\) to extend Theorem 1 to the InsDel setting. Together with a secure digital signature scheme, we obtain our second main result. **Theorem 2**.: _Let \(\Pi\) be an \(r\!:=\!r(\lambda)\)-length signature scheme. There exists a constant \(c\in(0,1/2)\) such that for every positive polynomial \(k(\cdot)\) and constant \(\rho^{*}\in(0,1/3)\), there exists a code family \(\mathcal{C}_{\mathsf{Ins}}:=\{C_{\lambda}[K,k(\lambda),2]\}_{\lambda\in\mathbb{N}}\) and a function \(\mu:=\mu(\lambda)\) such that \(\mathcal{C}_{\mathsf{Ins}}\) is a \((\ell,\rho,p,\delta)\)-InsDel \(\mathsf{crLDC}\) with_ * \(K=O(\max\{k(1+\log(k)/r),r\})\)_,_ * \(\ell=O((\log^{3}(n)+\mu)\cdot(r+\log(k)))\)_,_ * \(\rho=\Theta(1)\)_,_ * \(p=1-\rho^{*}-\exp(-\mu(1/2-c)^{2}/2(1-c))>2/3\)_, and_ * \(\delta=1-\Theta(\rho)\)_,_ _where \(k:=k(\lambda)\)._ As with Theorem 1, our family \(\mathcal{C}_{\mathsf{Ins}}\) is constant rate whenever \(\Omega(\log(k(\lambda)))=r(\lambda)\leqslant k(\lambda)\), and additionally has the same downside whenever \(r(\lambda)>k(\lambda)\), in which case it is more efficient to directly encode with an (asymptotically) optimal InsDel code (e.g., [13]). We again choose \(\mu\) such that \(p=1-\mathsf{negl}(\lambda)>2/3\); moreover, under the same set of assumptions on the underlying signature scheme as with our Hamming \(\mathsf{crLDC}\) (e.g., [1, 1]), for \(\mu(\lambda)=\Theta(\log^{1+\epsilon}(\lambda))\) for small constant \(\epsilon>0\), we obtain the following corollary. **Corollary 2**.: _Let \(\Pi\) be a \(r(\lambda)=\Theta(\log^{1+\epsilon}(\lambda))\) length signature scheme for constant \(\epsilon>0\). Then for all sufficiently large positive polynomials \(k(\cdot)\), there exists a code family \(\{C_{\mathsf{I},\lambda}[K,k(\lambda),2]\}_{\lambda\in\mathbb{N}}\) that is a \((\ell,\rho,p,\delta)\)-InsDel \(\mathsf{crLDC}\) with \(K=O(k)\), \(\ell=O(\log^{3(1+\epsilon)}(\lambda))\), \(\rho=\Theta(1)\), \(p=1-\mathsf{negl}(\lambda)\), and \(\delta=1-\Theta(\rho)\), where \(k:=k(\lambda)\)._ To the best of our knowledge, our InsDel \(\mathsf{crLDCs}\) are the first of their kind; they compare favorably to the prior InsDel LDCs of Block et al. [1] and are comparable to the private and resource-bounded LDCs of Block and Blocki [1]. ### _Related Work_ Classical InsDel codes were initially studied in [11], inspiring a rich line of research into these codes; see surveys [1, 1, 1] for more information. Recently, \(k\)-deletion correcting codes with optimal rate were constructed, answering a long-standing open question [1]. Randomized codes with positive rate that can correct a large fraction of deletions are studied in [1, 1]. Another line of work extends list decoding to InsDel codes [1, 1, 1, 1]. 
Finally, [1] constructs explicit synchronization strings which can be "locally decoded" in the following sense: each index of the string is computable using symbols located at a small number of other locations in the string. These synchronization strings are used to construct near-linear time interactive coding schemes for InsDel errors. [11] initiated the study of codes resilient to errors introduced by computationally bounded channels. Several follow-up works adopt this channel model, yielding Hamming codes with better parameters than their classical counterparts [14, 15, 16]. It has been argued that any real-world communication channel can be reasonably modeled as a computationally bounded channel [11, 1], so one can reasonably expect error patterns encountered in nature to be modeled by some (possibly unknown) PPT algorithm. This channel model has also been extended to the LDC setting for both Hamming [17, 18, 19, 20] and, more recently, InsDel errors [1]. [1] introduced the notion of relaxed locally decodable codes. In a follow-up work, [2] introduced and constructed _relaxed locally correctable codes_ (rLCC) for Hamming errors: codes with local correction algorithms which can correct corrupt codeword symbols via querying a few locations into the received word. Their construction has significantly better parameters than classical Hamming LCCs, achieving constant locality, constant error-rate, and polynomial block length. Furthermore, their rLCC is also an rLDC since their code is systematic. Follow-up work continued to give improved rLCC constructions [1, 2, 2]. [2] studies Hamming rLDCs/rLCCs in the context of computationally bounded channels (crLDC/crLCC). Our work directly adapts this model but for InsDel errors. [16] initiated the study of InsDel LDCs. They give a compiler which transforms any Hamming LDC into an InsDel LDC, asymptotically preserving the rate and error-rate of the underlying Hamming LDC at the cost of a poly-logarithmic increase in the locality. [2] reproves this result with a conceptually simpler analysis using techniques borrowed from the study of a cryptographic object known as memory-hard functions [1, 1, 2]. [10] proposes the notion of Hamming/InsDel LDCs with randomized encodings in various settings, including when the encoder and decoder share randomness or when the channel adds error patterns non-adaptively. In the InsDel case, [10] invokes the compiler of [16] and obtains a code with block length \(O(k)\) or \(O(k\log(k))\) and \(\operatorname{polylog}(k)\) locality. Recently, [1] extends the compiler of [2] to the private-key setting of [17], where the encoder and decoder share a secret key unknown to the channel, and to the resource-bounded setting of [1], where the channel is assumed to be resource constrained in some way. While it is likely that applying the "Hamming-to-InsDel" compiler to the crLDC of [2] or our crLDCs would yield an InsDel crLDC, this has not been formally claimed or proven in prior work. Finally, there has been recent progress in obtaining lower bounds for InsDel LDCs. [2] proved that InsDel LDCs with constant locality, even in the private-key setting, require exponential block length, and also showed that linear \(2\)-query InsDel LDCs do not exist. This makes it all the more surprising that constant-rate InsDel crLDCs in the polylog locality regime exist. 
## II Technical Overview The main technical ingredients for both our Hamming and InsDel crLDC constructions are the use of a digital signature scheme \(\Pi\) with \(r\)-length signatures along with a suitable inner code \(C_{in}\). The encoding algorithms for both codes are nearly identical, with the main difference being the choice of \(C_{in}\). The decoding algorithms are also similar: the InsDel decoder is a (non-trivial) modification of the Hamming decoder to handle InsDel errors using noisy binary search techniques. #### Ii-1 Hamming crLDC Construction Let \(C_{in}\) be an appropriate Hamming code (i.e., non-local), and let \(\Pi=(\mathsf{Gen},\mathsf{Sign},\mathsf{Ver})\) be an \(r\)-length signature scheme. _The Hamming Encoder:_ We define a family of encoding algorithms \(\{\mathsf{Enc}_{\mathsf{H},\lambda}\}_{\lambda}\). Let \(\lambda\in\mathbb{N}\) be the security parameter. For any message \(x\in\{0,1\}^{k}\), encoder \(\mathsf{Enc}_{\mathsf{H},\lambda}\) partitions \(x\) into \(d=\lceil k/r(\lambda)\rceil\) blocks \(x=x^{(1)}\|\cdots\|x^{(d)}\), where \(x^{(i)}\in\{0,1\}^{r(\lambda)}\) for all \(i\) (padding with \(0\) as necessary). Each \(x^{(i)}\) is now signed using \(\Pi\): \(\mathsf{Enc}_{\mathsf{H},\lambda}\) generates key pair \((\mathsf{pk},\mathsf{sk})\leftarrow\mathsf{Gen}(1^{\lambda})\) and computes signature \(\sigma^{(i)}\leftarrow\mathsf{Sign}_{\mathsf{sk}}(x^{(i)}\|i)\). Next, the block \(x^{(i)}\|\sigma^{(i)}\|\mathsf{pk}\|i\) is encoded using \(C_{in}\) to obtain codeword \(c^{(i)}\), where \(\mathsf{pk}\) is the public key generated previously. Finally, \(\mathsf{Enc}_{\mathsf{H},\lambda}\) outputs \(C=c^{(1)}\|\cdots\|c^{(d)}\in\{0,1\}^{K}\). If \(r(\lambda)\geq k\), only a single block is signed and encoded at the cost of locality \(\geq K\), so it is more efficient to use a Hamming code with similar rate and error-rate rather than \(\mathsf{Enc}_{\mathsf{H},\lambda}\). We give the formal encoding algorithm in Algorithm 1.

```
Input     : A message \(x\in\{0,1\}^{k}\).
Output    : A codeword \(C\in\{0,1\}^{K}\).
Hardcoded : Hamming code \(C_{in}\); \(r(\cdot)\)-length signature scheme \(\Pi\); and \(\lambda\in\mathbb{N}\) in unary.
1  Sample \((\mathsf{pk},\mathsf{sk})\leftarrow\mathsf{Gen}(1^{\lambda})\)
2  Set \(d=\lceil k/r(\lambda)\rceil\)
3  Partition \(x=x^{(1)}\|\cdots\|x^{(d)}\) where \(x^{(j)}\in\{0,1\}^{r(\lambda)}\) for every \(j\in[d]\) (padding last block as necessary)
4  foreach \(j\in[d]\) do
5      \(\sigma^{(j)}\leftarrow\mathsf{Sign}_{\mathsf{sk}}(x^{(j)}\|j)\)
6      \(C^{(j)}=\mathsf{Enc}_{in}(x^{(j)}\|\sigma^{(j)}\|\mathsf{pk}\|j)\)
7  Define \(C:=C^{(1)}\|\cdots\|C^{(d)}\in\{0,1\}^{K}\)
8  return \(C\)
```
**Algorithm 1** Hamming Encoder \(\mathsf{Enc}_{\mathsf{H},\lambda}\)
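Before analyzing decoding, the following is a minimal executable Python sketch of Algorithm 1 (our own illustration, not the authors' code). The inner code \(C_{in}\) is replaced by a bitwise repetition code purely as a stand-in for a constant-rate Hamming code, the signer is the Ed25519 wrapper sketched earlier, and the block, signature, and key lengths are illustrative rather than the parameters of Theorem 1.

```
# Sketch of Algorithm 1: partition, sign each block with its index, inner-encode, concatenate.
# Our own illustration; the repetition code below is only a stand-in for the Hamming code C_in.
import math
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

REP = 3                                                    # stand-in inner code

def enc_in(bits: str) -> str:                              # "Enc_in": bitwise repetition
    return "".join(b * REP for b in bits)

def to_bits(data: bytes) -> str:
    return "".join(f"{byte:08b}" for byte in data)

def enc_H(x: str, r: int, sign, pk: bytes) -> str:
    d = math.ceil(len(x) / r)
    x = x.ljust(d * r, "0")                                # pad the last block with zeros
    blocks = []
    for j in range(1, d + 1):
        xj = x[(j - 1) * r : j * r]
        sigma = sign((xj + str(j)).encode())               # sigma^(j) = Sign_sk(x^(j) || j)
        payload = xj + to_bits(sigma) + to_bits(pk) + f"{j:016b}"
        blocks.append(enc_in(payload))                     # C^(j) = Enc_in(x^(j)||sigma^(j)||pk||j)
    return "".join(blocks)

sk = Ed25519PrivateKey.generate()
pk = sk.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
C = enc_H("1011001110001111", r=8, sign=sk.sign, pk=pk)
print(len(C), "codeword bits")
```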
_Strawman Decoder:_ Given \(\mathsf{Enc}_{\mathsf{H},\lambda}\), there is a natural decoding algorithm that does not satisfy Definition 1. The strawman decoder proceeds as follows. Let \(x\in\{0,1\}^{k}\), \(C=\mathsf{Enc}_{\mathsf{H},\lambda}(x)\in\{0,1\}^{n}\), and let \(\tilde{C}\in\{0,1\}^{n}\) such that \(\mathsf{HAM}(\tilde{C},C)\leq\rho\). Let \(i\in[k]\) be the input given to the strawman decoder and let \(\tilde{C}\) be its oracle. Since the goal is to recover bit \(x_{i}\) from string \(\tilde{C}\), the strawman decoder first calculates the index \(j\in[d]\) such that bit \(x_{i}\) resides in block \(x^{(j)}\). Since \(\tilde{C}\) only contains Hamming errors, the strawman decoder views its oracle \(\tilde{C}\) as blocks \(\tilde{C}^{(1)}\|\cdots\|\tilde{C}^{(d)}\) and recovers block \(\tilde{C}^{(j)}\). The strawman decoder then runs the decoder of \(C_{in}\) with input \(\tilde{C}^{(j)}\) to obtain some string \(\tilde{m}^{(j)}\), which can be viewed as some (potentially corrupt) string \(\tilde{x}^{(j)}\|\tilde{\sigma}^{(j)}\|\tilde{\mathsf{pk}}\|\tilde{j}\). The strawman decoder then proceeds to use the signature scheme to verify the contents of this decoded message by checking if \(\mathsf{Ver}_{\tilde{\mathsf{pk}}}(\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)}) \stackrel{{?}}{{=}}1\). If verification fails, then the decoder outputs \(\bot\); otherwise, the decoder outputs \(\tilde{x}^{(j)}_{i^{*}}\), where \(i^{*}\) is the index of \(x^{(j)}\) that corresponds to bit \(x_{i}\). Notice that if \(\tilde{C}=C\), then this strawman decoder outputs the correct bit \(x_{i}\) with probability \(1\), satisfying Item 2 of Definition 1. However, this strawman decoder can never satisfy Item 3 if we desire error-rate \(\rho=\Theta(1)\). Consider the following simple attack. Let \(\mathcal{A}\) be a PPT adversary that operates as follows: (1) Given codeword \(C\), the adversary \(\mathcal{A}\) decodes block \(C^{(1)}\) to obtain \(x^{(1)}\|\sigma^{(1)}\|\mathsf{pk}\|1\). (2) \(\mathcal{A}\) then generates its own key pair \((\mathsf{pk}^{\prime},\mathsf{sk}^{\prime})\), a message \(x^{\prime}=1-x^{(1)}\), and computes \(\sigma^{\prime}=\mathsf{Sign}_{\mathsf{sk}^{\prime}}(x^{\prime}\|1)\). (3) \(\mathcal{A}\) then computes \(C^{\prime}=\mathsf{Enc}_{in}(x^{\prime}\|\sigma^{\prime}\|\mathsf{pk}^{\prime} \|1)\) and outputs \(\tilde{C}=C^{\prime}\|C^{(2)}\|\cdots\|C^{(d)}\). Intuitively, this attack succeeds for two reasons. The first reason is that corruption of \(C^{(1)}\) to \(C^{\prime}\) is a small fraction of the total amount of corruptions allotted to transform \(C\) to \(\tilde{C}\). The second reason is that the strawman decoder relies on the public key \(\mathsf{pk}^{\prime}\) to perform verification. The key to preventing this attack is addressing the recovery of the public key. Notice that if the decoder recovered the true public key pk used by \(\mathsf{Enc}_{\mathsf{H},\lambda}\), then this attack fails since the verification procedure fails and the decoder outputs \(\bot\). Thus we modify the strawman decoder to recover the true public key to obtain our final Hamming decoder. _The Hamming Decoder:_ We define a family of decoding algorithms \(\{\mathsf{Dec}_{\mathsf{H},\lambda}\}_{\lambda}\). Let \(\lambda\in\mathbb{N}\), \(\mu\in\mathbb{N}\) be a parameter of our choice, \(x\in\{0,1\}^{k}\), \(C=\mathsf{Enc}_{\mathsf{H},\lambda}(x)\), and \(\tilde{C}\leftarrow\mathcal{A}(C)\) such that \(\mathsf{HAM}(C,\tilde{C})\leqslant\rho\), where \(\mathcal{A}\) is a PPT adversary. On input \(i\in[k]\) and given oracle access to \(\tilde{C}\), the decoder \(\mathsf{Dec}_{\mathsf{H},\lambda}\) tries to recover \(x_{i}\) via a two-step process. First, \(\mathsf{Dec}_{\mathsf{H},\lambda}\) tries to recover the true public key \(\mathsf{pk}\). It begins by sampling block indices \(j_{1},\ldots,j_{\mu}\in[d]\) uniformly at random, decoding each block \(\tilde{C}^{(j_{\kappa})}\) with the decoder of \(C_{in}\) to obtain a candidate public key \(\mathsf{pk}_{\kappa}\) (or \(\bot\) if decoding fails), and setting \(\mathsf{pk}^{*}\) to the majority candidate. Second, as in the strawman decoder, \(\mathsf{Dec}_{\mathsf{H},\lambda}\) decodes the block \(\tilde{C}^{(j)}\) that is supposed to contain \(x_{i}\) and verifies the recovered signature against \(\mathsf{pk}^{*}\), outputting the appropriate bit of \(\tilde{x}^{(j)}\) if verification succeeds and \(\bot\) otherwise. 
_Challenges to Decoding InsDel Errors:_ InsDel errors allow an adversary to insert symbols into and delete symbols from codewords, which introduces challenges that do not arise with Hamming errors. One may hope to simply use our family \(\{(\mathsf{Enc}_{\mathsf{H},\lambda},\mathsf{Dec}_{\mathsf{H},\lambda})\}_{\lambda}\) with \(C_{in}=\mathsf{SZ}\) to achieve Theorem 2; however this yields a trivial InsDel \(\mathsf{crLDC}\). Let \(\mathsf{Enc}^{\prime}_{\mathsf{H},\lambda}\) be identical to \(\mathsf{Enc}_{\mathsf{H},\lambda}\) except we use the \(\mathsf{SZ}\) code as the code \(C_{in}\). For any \(x\) and \(C=\mathsf{Enc}^{\prime}_{\mathsf{H},\lambda}(x)\), there is a simple attack to ensure that \(\mathsf{Dec}_{\mathsf{H},\lambda}\) _always_ outputs \(\bot\): the adversary simply transforms \(C=C^{(1)}\|\ldots\|C^{(d)}\) into \(\tilde{C}=C_{1}^{(d)}\|C^{(1)}\|\cdots\|C^{(d-1)}\|C_{0}^{(d)}\), where \(C_{0}^{(d)},C_{1}^{(d)}\) are the first and second halves of \(C^{(d)}\), respectively. This implies that \(\{(\mathsf{Enc}^{\prime}_{\mathsf{H},\lambda},\mathsf{Dec}_{\mathsf{H}, \lambda})\}_{\lambda}\) is an InsDel \(\mathsf{crLDC}\) with \(\delta=0\); i.e., it always outputs \(\bot\) given a corrupt codeword. However, we can handle this and more general attacks by leveraging the noisy binary search techniques of Block et al. [1]. _Noisy Binary Search Overview:_ To understand the noisy binary search algorithm \(\mathsf{NBS}\) and its guarantees, we require the notion of \(\gamma\)_-goodness_. For \(x,y\in\{0,1\}^{*}\), we say that \(y\) is \(\gamma\)_-good with respect to \(x\)_ if \(\mathsf{ED}(x,y)\leqslant\gamma\). The notion of \(\gamma\)-goodness (albeit under different formal definitions) has been useful in the design and analysis of depth-robust graphs, a combinatorial object used extensively in the study of memory-hard functions [1, 1, 2], and it is essential to the success of \(\mathsf{NBS}\). Intuitively, for a fixed "correct" ordered list of strings \(A=(a_{1},\ldots,a_{n})\), each of length \(\kappa\), and some other list of strings \(B=(b_{1},\ldots,b_{n^{\prime}})\), the algorithm \(\mathsf{NBS}\) finds any string \(b_{j}\) that is \(\gamma\)-good with respect to the string \(a_{j}\) for \(j\in[n]\), except with negligible probability. In our context, each \(b_{j}\) corresponds to a block in the (possibly corrupt) codeword. Given a tolerance parameter \(\rho^{*}\in(0,1/2)\), the \(\mathsf{NBS}\) algorithm on input \(j\in[n]\) outputs \(b_{j}\) for at least a \((1-\rho^{*})\)-fraction of the \(\gamma\)-good indices \(j\), except with negligible probability. 
Moreover, \(\mathsf{NBS}\) runs in time \(\kappa\cdot\mathrm{polylog}(n^{\prime})\), which is only possible by allowing \(\mathsf{NBS}\) to fail on a small fraction of \(\gamma\)-good indices; otherwise the algorithm requires \(\Omega(\kappa n^{\prime})\) time.

Suppose that \(\tilde{C}\in\{0,1\}^{n^{\prime}}\) for some \(n^{\prime}\) is a corrupt codeword (from an appropriate encoding algorithm) and let \(i\in[k]\). We use the \(\mathsf{NBS}\) algorithm to search \(\tilde{C}\) for some (possibly corrupt) block \(\tilde{m}^{(j)}\) which contains the desired symbol \(x_{i}\). So long as \(\tilde{C}\) and \(\tilde{m}^{(j)}\) are "not too corrupt", then \(\mathsf{NBS}\) outputs \(\tilde{m}^{(j)}\) with high probability. For searching, \(\mathsf{NBS}\) utilizes a block decoding algorithm \(\mathsf{BlockDec}\) to find \(\tilde{m}^{(j)}\) within \(\tilde{C}\) with the following guarantee: for input \(i\), if \(i\) is within a (small) ball around a \(\gamma\)-good block \(\tilde{m}^{(j)}\), then \(\mathsf{BlockDec}\) outputs \(\tilde{m}^{(j)}\) with probability at least \(1-\gamma\). Assuming \(\tilde{m}^{(j)}\) is not too corrupt, we can parse it as \(\tilde{x}^{(j)}\|\tilde{\sigma}^{(j)}\|\tilde{\mathsf{pk}}\|\tilde{j}\) and use \(\mathsf{Ver}\) to ensure that \(\tilde{x}^{(j)}\) is correct. Note that both \(\mathsf{NBS}\) and \(\mathsf{BlockDec}\) can fail and output \(\bot\), which we take into consideration for our decoder.

_The InsDel Encoder:_ We define a family of encoding algorithms \(\{\mathsf{Enc}_{\mathsf{I},\lambda}\}_{\lambda}\). Let \(\lambda\in\mathbb{N}\) be the security parameter and let \(\alpha\) be a constant specified by the \(\mathsf{NBS}\) algorithm [1]. For any message \(x\in\{0,1\}^{k}\), encoder \(\mathsf{Enc}_{\mathsf{I},\lambda}\) behaves identically to \(\mathsf{Enc}_{\mathsf{H},\lambda}\) by partitioning \(x\) into \(d=\lceil k/r(\lambda)\rceil\) blocks, generating \((\mathsf{pk},\mathsf{sk})\leftarrow\mathsf{Gen}(1^{\lambda})\), computing \(\sigma^{(j)}\leftarrow\mathsf{Sign}_{\mathsf{sk}}(x^{(j)}\|j)\), and computing \(c^{(j)}=\mathsf{SZ}.\mathsf{Enc}(x^{(j)}\|\sigma^{(j)}\|\mathsf{pk}\|j)\) for every \(j\). Next, the encoder computes buffered codewords \(C^{(j)}=0^{\alpha\cdot\mathsf{m}}\|c^{(j)}\|0^{\alpha\cdot\mathsf{m}}\) for every \(j\), where \(0^{\alpha\cdot\mathsf{m}}\) is an all-zero vector of length \(\alpha\cdot\mathsf{m}\) with \(\mathsf{m}=|x^{(j)}\|\sigma^{(j)}\|\mathsf{pk}\|j|\); these buffers ensure the success of the \(\mathsf{NBS}\) and \(\mathsf{BlockDec}\) algorithms. Finally, \(\mathsf{Enc}_{\mathsf{I},\lambda}\) outputs \(C=C^{(1)}\|\cdots\|C^{(d)}\). Again, if \(r(\lambda)\geqslant k\), it is more efficient to simply encode \(x\) using the \(\mathsf{SZ}\) code. We give the formal encoding algorithm in Algorithm 3; any differences between this encoder and the Hamming encoder (Algorithm 1) are highlighted in blue and with an inline comment.

```
Input     : A message \(x\in\{0,1\}^{k}\).
Output    : A codeword \(C\in\{0,1\}^{K}\).
Hardcoded : The \(\mathsf{SZ}\) InsDel code; \(r(\cdot)\)-length signature scheme \(\Pi\); \(\alpha\in\mathbb{N}\); and \(\lambda\in\mathbb{N}\) in unary.
1  Sample \((\mathsf{pk},\mathsf{sk})\leftarrow\mathsf{Gen}(1^{\lambda})\)
2  Set \(d=\lceil k/r(\lambda)\rceil\)
3  Partition \(x=x^{(1)}\|\cdots\|x^{(d)}\) where \(x^{(j)}\in\{0,1\}^{r(\lambda)}\) for every \(j\in[d]\) (padding the last block as necessary)
4  foreach \(j\in[d]\) do
5      \(\sigma^{(j)}\leftarrow\mathsf{Sign}_{\mathsf{sk}}(x^{(j)}\|j)\)
6      \(c^{(j)}=\mathsf{SZ}.\mathsf{Enc}(x^{(j)}\|\sigma^{(j)}\|\mathsf{pk}\|j)\)    // \(\mathsf{Enc}_{\mathsf{H},\lambda}\) diff
7      \(C^{(j)}=0^{\alpha\cdot\mathsf{m}}\|c^{(j)}\|0^{\alpha\cdot\mathsf{m}}\), where \(\mathsf{m}=|x^{(j)}\|\sigma^{(j)}\|\mathsf{pk}\|j|\)    // \(\mathsf{Enc}_{\mathsf{H},\lambda}\) diff
8  Define \(C:=C^{(1)}\|\cdots\|C^{(d)}\in\{0,1\}^{K}\)
9  return \(C\)
```
**Algorithm 3** InsDel Encoder \(\mathsf{Enc}_{\mathsf{I},\lambda}\)
_The InsDel Decoder:_ We define a family of decoding algorithms \(\{\mathsf{Dec}_{\mathsf{I},\lambda}\}_{\lambda}\). Let \(\lambda\in\mathbb{N}\), \(\mu\in\mathbb{N}\) be a parameter of our choice, \(x\in\{0,1\}^{k}\), \(C=\mathsf{Enc}_{\mathsf{I},\lambda}(x)\), and \(\tilde{C}\leftarrow\mathcal{A}(C)\) such that \(\mathsf{ED}(C,\tilde{C})\leqslant\rho\), where \(\mathcal{A}\) is a PPT adversary and \(\tilde{C}\in\{0,1\}^{K^{\prime}}\) for some \(K^{\prime}\). Then on input \(i\in[k]\) and given oracle access to \(\tilde{C}\), the decoder \(\mathsf{Dec}_{\mathsf{I},\lambda}\) tries to recover \(x_{i}\) via the same two-step process as \(\mathsf{Dec}_{\mathsf{H},\lambda}\): first, recover the public key \(\mathsf{pk}\); and second, find block \(j\) that is supposed to contain \(x_{i}\) and use the recovered \(\mathsf{pk}\) to verify its integrity. Recovery of \(\mathsf{pk}\) is done similarly to \(\mathsf{Dec}_{\mathsf{H},\lambda}\), except we leverage \(\mathsf{BlockDec}\) to find blocks with potential public keys by first sampling \(i_{1},\ldots,i_{\mu}\xleftarrow{\$}[n]\) uniformly at random, then obtaining \(\tilde{m}^{(j_{\kappa})}\leftarrow\mathsf{BlockDec}(i_{\kappa})\) for each \(\kappa\in[\mu]\), where \(j_{\kappa}\in[d]\). Intuitively, in the InsDel setting we need to search for each block \(j_{\kappa}\), whereas in the Hamming setting we knew exactly where each block was located. If \(\tilde{m}^{(j_{\kappa})}=\bot\), then we set \(\mathsf{pk}_{\kappa}=\bot\); else, we parse \(\tilde{m}^{(j_{\kappa})}\) as \(\tilde{x}^{(j_{\kappa})}\|\tilde{\sigma}^{(j_{\kappa})}\|\mathsf{pk}_{\kappa}\|\tilde{j}_{\kappa}\), and we set \(\mathsf{pk}^{*}\) to be the majority candidate among \(\mathsf{pk}_{1},\ldots,\mathsf{pk}_{\mu}\). For the second step, the decoder runs \(\mathsf{NBS}\) on the index \(j\) of the block that is supposed to contain \(x_{i}\) to obtain \(\tilde{m}^{(j)}\); if \(\tilde{m}^{(j)}=\bot\) the decoder outputs \(\bot\), and otherwise it parses \(\tilde{m}^{(j)}\) as \(\tilde{x}^{(j)}\|\tilde{\sigma}^{(j)}\|\tilde{\mathsf{pk}}\|\tilde{j}\), outputs the bit of \(\tilde{x}^{(j)}\) corresponding to \(x_{i}\) if \(\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})=1\), and outputs \(\bot\) otherwise. We give the formal decoding algorithm in Algorithm 4.

**Theorem 2**: _Proof Overview:_ We give a high-level overview of the proof of Theorem 2; full details can be found in Section V. The main technical challenge of proving Theorem 2 is showing that \(\{(\mathsf{Enc}_{\mathsf{I},\lambda},\mathsf{Dec}_{\mathsf{I},\lambda})\}_{\lambda}\) satisfies Items 3 and 4 of Definition 1. Towards Item 3, for any \(x\in\{0,1\}^{k}\), PPT adversary \(\mathcal{A}\), and \(i\in[k]\), we analyze the probability that \(\mathsf{Dec}_{\mathsf{I},\lambda}^{\tilde{C}}(i)\in\{x_{i},\bot\}\) for \(\tilde{C}\leftarrow\mathcal{A}(\mathsf{Enc}_{\mathsf{I},\lambda}(x))\) such that \(\mathsf{ED}(\tilde{C},\mathsf{Enc}_{\mathsf{I},\lambda}(x))\leqslant\rho\). The proof proceeds identically to the Hamming \(\mathsf{crLDC}\) with the following key changes. First, when recovering the public key, we must consider the success probability of \(\mathsf{BlockDec}\) in our Chernoff bound to ensure \(\mathsf{pk}^{*}=\mathsf{pk}\) with high probability. Second, we must consider the success probability of \(\mathsf{NBS}\) when recovering block \(j\) that contains \(x_{i}\). Careful selection of parameters and the guarantees of \(\mathsf{BlockDec}\) and \(\mathsf{NBS}\) ensures Item 3 holds. Towards Item 4, the proof again proceeds nearly identically to the Hamming \(\mathsf{crLDC}\) case, except again we must take into consideration the recovery of public key \(\mathsf{pk}^{*}=\mathsf{pk}\) via \(\mathsf{BlockDec}\) and the recovery of block \(j\) with \(\mathsf{NBS}\).
The noisy binary search algorithm recovers any block that is \(\gamma\)-good with probability greater than \(2/3\) (under suitable parameter choices), except with negligible probability. This directly translates to the fraction \(\delta=1-\Theta(\rho)\) of indices we are able to decode from for Item 4.

## III Preliminaries

We let \(\lambda\in\mathbb{N}\) denote the security parameter. For any \(n\in\mathbb{Z}^{+}\) we let \([n]:=\{1,\ldots,n\}\). A function \(\mu\colon\mathbb{N}\to\mathbb{R}_{\geqslant 0}\) is said to be negligible if \(\mu(n)=o(1/|p(n)|)\) for any fixed non-zero polynomial \(p\). We write PPT to denote probabilistic polynomial time. For any randomized algorithm \(A\), we let \(y\gets A(x)\) denote the process of obtaining output \(y\) from algorithm \(A\) on input \(x\). For a finite set \(S\), we let \(s\xleftarrow{\$}S\) denote the process of sampling elements of \(S\) uniformly at random. We use "\(\|\)" to denote the string concatenation operation. For bitstring \(x\in\{0,1\}^{*}\), we use subscripts to denote individual bits of \(x\); e.g., \(x_{i}\in\{0,1\}\) is the \(i\)-th bit of \(x\). Additionally, we often partition a bitstring \(x\in\{0,1\}^{k}\) into some number of blocks \(d\) of equal length; e.g., \(x=(x^{(1)}\|\cdots\|x^{(d)})\) where \(x^{(j)}\in\{0,1\}^{k/d}\) for all \(j\in[d]\). We also utilize array notation when convenient: e.g., for bitstring \(x\in\{0,1\}^{k}\) and indices \(a,b\in[k]\) such that \(a\leqslant b\), we let \(x[a,b]:=(x_{a}\|x_{a+1}\|\cdots\|x_{b})\). For two strings \(x\in\{0,1\}^{k}\) and \(y\in\{0,1\}^{*}\), we define \(\mathsf{ED}(x,y)\) as the minimum number of insertions and deletions required to transform \(x\) into \(y\) (or vice versa), normalized by \(2k\). In this work, we utilize _digital signatures_ and give the formal definition below.

**Definition 2** (Digital Signature Scheme).: _A digital signature scheme with signatures of length \(r(\cdot)\) is a tuple of PPT algorithms \(\Pi=(\mathsf{Gen},\mathsf{Sign},\mathsf{Ver})\) satisfying the following properties:_

1. \(\mathsf{Gen}\) _is the key generation algorithm and takes as input a security parameter_ \(1^{\lambda}\) _and outputs a pair of keys_ \((\mathsf{pk},\mathsf{sk})\in\{0,1\}^{*}\times\{0,1\}^{*}\)_, where_ \(\mathsf{pk}\) _is the public key and_ \(\mathsf{sk}\) _is the secret/private key. It is assumed that_ \(|\mathsf{pk}|,|\mathsf{sk}|\geqslant\lambda\) _are polynomial in_ \(\lambda\)_, and that_ \(\lambda\) _can be efficiently determined from_ \(\mathsf{pk}\) _or_ \(\mathsf{sk}\)_. Without loss of generality, we assume that_ \(|\mathsf{pk}|=r(\lambda)\)_._
2. \(\mathsf{Sign}\) _is the signing algorithm and takes as input secret key_ \(\mathsf{sk}\) _and message_ \(m\in\{0,1\}^{*}\) _of arbitrary length and outputs a signature_ \(\sigma\leftarrow\mathsf{Sign}_{\mathsf{sk}}(m)\in\{0,1\}^{r(\lambda)}\)_, where_ \(\mathsf{Sign}\) _runs in time_ \(\mathrm{poly}(|\mathsf{sk}|,|m|)\)_._
3. \(\mathsf{Ver}\) _is the deterministic verification algorithm that takes as input public key_ \(\mathsf{pk}\)_, message_ \(m\)_, and signature_ \(\sigma\)_, and outputs a bit_ \(b=\mathsf{Ver}_{\mathsf{pk}}(m,\sigma)\in\{0,1\}\)_. Moreover,_ \(\mathsf{Ver}\) _runs in time_ \(\mathrm{poly}(r(\lambda),|m|)\)_._

_Additionally, we require the following two properties:_
1. _Completeness:_ _For all messages_ \(m\in\{0,1\}^{*}\) _and all_ \((\mathsf{pk},\mathsf{sk})\in\mathrm{supp}(\mathsf{Gen}(1^{\lambda}))\)_, we have_ \(\mathsf{Ver}_{\mathsf{pk}}(m,\mathsf{Sign}_{\mathsf{sk}}(m))=1\)_._ 1 Footnote 1: Other definitions (e.g., [KL14]) require this condition to hold except with negligible probability over \((\mathsf{pk},\mathsf{sk})\leftarrow\mathsf{Gen}(1^{\lambda})\).
2. _Security:_ _For all PPT adversaries_ \(\mathcal{A}\)_, there exists a negligible function_ \(\varepsilon_{\Pi}(\cdot)\) _such that for all_ \(\lambda\in\mathbb{N}\) _we have_ \[\Pr[\mathsf{Sign}\text{-}\mathsf{forge}_{\Pi,\mathcal{A}}(\lambda)=1]\leqslant\varepsilon_{\Pi}(\lambda),\] _where the experiment_ \(\mathsf{Sign}\)_-_\(\mathsf{forge}\) _is defined in Fig._ 1_._

For completeness, we also include the classical definition of an error-correcting code.

**Definition 3**.: _A coding scheme \(C[K,k,q_{1},q_{2}]=(\mathsf{Enc},\mathsf{Dec})\) is a pair of encoding and decoding algorithms \(\mathsf{Enc}\colon\Sigma_{1}^{k}\to\Sigma_{2}^{K}\) and \(\mathsf{Dec}\colon\Sigma_{2}^{*}\to\Sigma_{1}^{k}\), where \(|\Sigma_{i}|=q_{i}\). A code \(C[K,k,q_{1},q_{2}]\) is a \((\rho,\mathrm{dist})\) error-correcting code for \(\rho\in[0,1]\) and fractional distance \(\mathrm{dist}\) if for all \(x\in\Sigma_{1}^{k}\) and \(y\in\Sigma_{2}^{*}\) such that \(\mathrm{dist}(\mathsf{Enc}(x),y)\leqslant\rho\), we have that \(\mathsf{Dec}(y)=x\). Here, \(\rho\) is the error rate of \(C\). If \(q_{1}=q_{2}\), we simply denote this by \(C[K,k,q_{1}]\). If \(\mathrm{dist}=\mathsf{HAM}\), then \(C\) is a Hamming code; if \(\mathrm{dist}=\mathsf{ED}\), then \(C\) is an insertion-deletion code (InsDel code)._

Key to our construction is the so-called "\(\mathsf{SZ}\)-code", which is an insertion-deletion error-correcting code with constant rate and constant error-tolerance.

**Lemma 1** (SZ-code [11]).: _There exist positive constants \(\beta_{\mathsf{sz}}\leqslant 1\) and \(\rho_{\mathsf{sz}}>0\) such that for large enough values of \(t\in\mathbb{Z}^{+}\), there exists a \((\rho_{\mathsf{sz}},\mathsf{ED})\) code \(\mathsf{SZ}(t)=(\mathsf{SZ}.\mathsf{Enc},\mathsf{SZ}.\mathsf{Dec})\) where \(\mathsf{SZ}.\mathsf{Enc}\colon\{0,1\}^{t}\to\{0,1\}^{(1/\beta_{\mathsf{sz}})\cdot t}\) and \(\mathsf{SZ}.\mathsf{Dec}\colon\{0,1\}^{*}\to\{0,1\}^{t}\cup\{\bot\}\) with the following properties:_

1. \(\mathsf{SZ}.\mathsf{Enc}\) _and_ \(\mathsf{SZ}.\mathsf{Dec}\) _run in time_ \(\mathrm{poly}(t)\)_; and_
2. _For all_ \(x\in\{0,1\}^{t}\)_, every interval of length_ \(2\log(t)\) _in_ \(\mathsf{SZ}.\mathsf{Enc}(x)\) _has fractional Hamming weight_ \(\geqslant 2/5\)_._

_We omit the parameter \(t\) when it is clear from context._

## IV Proof of Theorem 1

We dedicate this section to showing that \(\mathcal{C}_{\mathsf{H},\lambda}=\{(\mathsf{Enc}_{\mathsf{H},\lambda},\mathsf{Dec}_{\mathsf{H},\lambda})\}_{\lambda\in\mathbb{N}}\) satisfies Theorem 1, where \(\mathsf{Enc}_{\mathsf{H},\lambda}\) and \(\mathsf{Dec}_{\mathsf{H},\lambda}\) are defined in Algorithms 1 and 2, respectively. We recall the theorem below.

**Theorem 1**.: _Let \(\Pi\) be an \(r:=r(\lambda)\)-length signature scheme. Let \(C_{in}\) be a binary Hamming code with rate \(\beta_{in}\) and error-rate \(\rho_{in}\)._
Then for every positive polynomial \(k(\cdot)\) and constant \(c\in(0,1/2)\), there exists a code family \(\mathcal{C}_{\mathsf{H}}:=\{C_{\mathsf{H},\lambda}[K,k(\lambda),2]\}_{\lambda\in\mathbb{N}}\) and function \(\mu:=\mu(\lambda)\) such that \(\mathcal{C}_{\mathsf{H}}\) is a \((\ell,\rho,p,\delta)\)-Hamming \(\mathsf{crLDC}\) with

* \(K=O((1/\beta_{in})\max\{k(1+\log(k)/r),r\})\),
* \(\ell=O((\mu/\beta_{in})\cdot(r+\log(k)))\),
* \(\rho=c\cdot\rho_{in}\),
* \(p=1-\exp(-\mu(1/2-c)^{2}/(2(1-c)))>2/3\), and
* \(\delta=1/2\),

where \(k:=k(\lambda)\).

Proof.: First note that \(x^{(j)},\sigma^{(j)}\in\{0,1\}^{r(\lambda)}\), and \(j\in\{0,1\}^{\log(d)}\) by construction. Furthermore, without loss of generality we assume that \(\mathsf{pk}\in\{0,1\}^{\lambda}\) and \(r(\lambda)\geqslant\lambda\). Note we also have \(\log(d)=O(\log(k))\). Therefore, if the rate of \(C_{in}\) is \(\beta_{in}\), then the block length \(K\) of our code is exactly \[K=d\cdot(1/\beta_{in})\cdot(2r(\lambda)+\lambda+\log(d))\] \[=(\lceil k/r(\lambda)\rceil)\cdot(1/\beta_{in})\cdot(2r(\lambda)+\lambda+\log(\lceil k/r(\lambda)\rceil))\] \[=O((1/\beta_{in})\cdot k\cdot(3+\log(k)/r(\lambda)))\] \[=O((1/\beta_{in})\cdot k(1+\log(k)/r(\lambda)))\] whenever \(k\geqslant r(\lambda)\). When \(r(\lambda)>k\), we pad the input message \(x\) with \(r(\lambda)-k\) zeros at the end to get a string of length \(r(\lambda)\), which gives a single codeword block of length \[K=(1/\beta_{in})\cdot(2r(\lambda)+\lambda+1)=O((1/\beta_{in})\cdot r(\lambda)).\] Thus we have our specified block length \[K=O((1/\beta_{in})\cdot\max\{k\cdot(1+\log(k)/r(\lambda)),r(\lambda)\}).\] For the locality \(\ell\), by construction we know that any block \(j\) has length \[(1/\beta_{in})\cdot(2r(\lambda)+\lambda+\log(d))=(1/\beta_{in})\cdot(2r(\lambda)+\lambda+\log(k)-\log(r(\lambda)))=O((1/\beta_{in})\cdot(r(\lambda)+\log(k))).\] Since we decode \(\mu+1\) blocks, our overall locality is \[\ell=O((\mu/\beta_{in})\cdot(r(\lambda)+\log(k))).\] Moreover, \(\mathsf{Dec}_{\mathsf{H},\lambda}\) makes \(O((\mu/\beta_{in})\cdot(r(\lambda)+\log(k)))\) queries to its oracle on any input \(i\), satisfying Item 1 of Definition 1.

For Item 2, assume that \(\mathsf{Dec}_{\mathsf{H},\lambda}\) is given oracle access to \(\tilde{C}=C\) for some \(C=\mathsf{Enc}_{\mathsf{H},\lambda}(x)\) and \(x\in\{0,1\}^{k}\). We then analyze the probability that \(\mathsf{Dec}_{\mathsf{H},\lambda}^{C}(i)=x_{i}\) for any \(i\in[k]\). First, since \(C\) is a correct codeword, recovery of the public key succeeds with probability \(1\). That is, for every \(\kappa\in[\mu]\), the string \(\tilde{m}^{(j_{\kappa})}\leftarrow\mathsf{Dec}_{in}(C[(j_{\kappa}-1)\cdot\mathsf{bl}+1,j_{\kappa}\cdot\mathsf{bl}])\) recovered in Line 5 is equal to \(x^{(j_{\kappa})}\|\sigma^{(j_{\kappa})}\|\mathsf{pk}\|j_{\kappa}\) (i.e., everything is correct). Here, \(\mathsf{bl}=K/d\) is the length of each block \(C^{(j)}\) in \(C=C^{(1)}\|\cdots\|C^{(d)}\). Thus \(\mathsf{pk}^{*}=\mathsf{pk}\) with probability \(1\). Now fixing \(j\in[d]\) to be the block such that bit \(x_{i}\) resides in \(x^{(j)}\), by the above discussion we know that \(\tilde{m}^{(j)}\) recovered in Line 10 is correct and is parsed as \(x^{(j)}\|\sigma^{(j)}\|\mathsf{pk}\|j\). This, along with the fact that \(\mathsf{pk}^{*}=\mathsf{pk}\), implies that the condition in Line 12 is triggered with probability \(0\) (i.e., \(\mathsf{Ver}_{\mathsf{pk}^{*}}(x^{(j)}\|j,\sigma^{(j)})=1\) with probability \(1\)).
This implies that \(\mathsf{Dec}_{\mathsf{H},\lambda}^{C}(i)=x_{i}\) with probability \(1\).

For the error-rate, let \(\rho_{in}\in(0,1)\) be the error-rate of \(C_{in}\). Intuitively, we want to set our final error-rate \(\rho\) such that for any \(\rho\)-fraction of corruptions, less than half of the \(d\) blocks contain more than a \(\rho_{in}\)-fraction of errors each. Equivalently, more than half of the \(d\) blocks contain at most a \(\rho_{in}\)-fraction of errors.

Fig. 1: Description of the signature forgery experiment \(\mathsf{Sign}\text{-}\mathsf{forge}\).

Let \(\mathsf{bl}\) denote the length of a block in the codeword \(C\). Then \(\mathsf{Dec}_{in}\) cannot correctly decode any block \(j\in[d]\) if it has at least \(\rho_{in}\mathsf{bl}+1\) errors. Let \(\widetilde{J}\subset[d]\) be the set of indices such that block \(j\in\widetilde{J}\) has at least \(\rho_{in}\mathsf{bl}+1\) Hamming errors. Then we have \[|\widetilde{J}|\cdot(\rho_{in}\mathsf{bl}+1)\leqslant\rho\cdot d\cdot\mathsf{bl}\ (=\rho K),\] which implies that \[|\widetilde{J}|\leqslant(\rho\cdot d\cdot\mathsf{bl})/(\rho_{in}\mathsf{bl}+1)<(\rho\cdot d\cdot\mathsf{bl})/(\rho_{in}\mathsf{bl})=\rho d/\rho_{in}.\] We want \(|\widetilde{J}|<d/2\) to ensure that more than half of the blocks contain at most \(\rho_{in}\mathsf{bl}\) errors. Setting \((d\rho)/\rho_{in}<d/2\) implies \(\rho<\rho_{in}/2\). Thus we set \(\rho=c\cdot\rho_{in}=\Theta(\rho_{in})\) for any constant \(c\in(0,1/2)\).

Next we analyze the success probability \(p\) and Item 3. For predicate \(\mathsf{Fool}\), fix message \(x\) and let \(C=\mathsf{Enc}_{\mathsf{H},\lambda}(x)\). We want to show that for all PPT \(\mathcal{A}\) there exists a negligible function \(\varepsilon_{\mathsf{F}}\) such that for all \(\lambda\) and \(x\in\{0,1\}^{k}\) we have \[\Pr[\mathsf{Fool}(\tilde{C},\rho,p,x,C,\lambda)=1]\leqslant\varepsilon_{\mathsf{F}}(\lambda)\] for \(\tilde{C}\leftarrow\mathcal{A}(C)\). Equivalently, we want to show that \[\Pr[\exists i\in[k]\colon\Pr[\mathsf{Dec}^{\tilde{C}}_{\mathsf{H},\lambda}(i)\in\{x_{i},\bot\}]<p]\leqslant\varepsilon_{\mathsf{F}}(\lambda)\] for \(\tilde{C}\leftarrow\mathcal{A}(C)\) such that \(\mathsf{HAM}(C,\tilde{C})\leqslant\rho\). Restated again, we want to show \[\Pr[\forall i\in[k]\colon\Pr[\mathsf{Dec}^{\tilde{C}}_{\mathsf{H},\lambda}(i)\in\{x_{i},\bot\}]\geqslant p]\geqslant 1-\varepsilon_{\mathsf{F}}(\lambda).\] We analyze the probability that \(\mathsf{Dec}^{\tilde{C}}_{\mathsf{H},\lambda}(i)\in\{x_{i},\bot\}\), and let \(E_{i}\) denote this event. We also let \(\mathsf{Frg}_{i}\) denote the event that \(\mathcal{A}\) produces a forgery \((\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})\), where \(j\in[d]\) is such that \((j-1)\cdot r(\lambda)<i\leqslant j\cdot r(\lambda)\) and \(\tilde{x}^{(j)}\) and \(\tilde{\sigma}^{(j)}\) are recovered by \(\mathsf{Dec}_{\mathsf{H},\lambda}\). Then we have that \(\Pr[E_{i}]=\Pr[E_{i}|\overline{\mathsf{Frg}_{i}}]\). Note that the decoder can never output \(\bot\) and \(x_{i}\) simultaneously. Letting \(E_{i}(x)\) be the event that \(\mathsf{Dec}^{\tilde{C}}_{\mathsf{H},\lambda}(i)=x\) for symbol \(x\), we have that \[\Pr[E_{i}\ |\ \overline{\mathsf{Frg}_{i}}]=\Pr[E_{i}(x_{i})\ |\ \overline{\mathsf{Frg}_{i}}]+\Pr[E_{i}(\bot)\ |\ \overline{\mathsf{Frg}_{i}}].\] Analyzing \(\Pr[E_{i}(x_{i})|\overline{\mathsf{Frg}_{i}}]\), since we assume that \((\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})\) is not a forgery it must be the case that
1. \(\tilde{x}^{(j)}=x^{(j)}\) and \(\tilde{\sigma}^{(j)}=\sigma^{(j)}\); and
2. \(\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})=1\).

Now this verification only succeeds if \(\mathsf{pk}^{*}=\mathsf{pk}\), which implies \[\Pr[E_{i}(x_{i})\ |\ \overline{\mathsf{Frg}_{i}}]=\Pr[\mathsf{pk}^{*}=\mathsf{pk}].\] Next we analyze \(\Pr[E_{i}(\bot)|\overline{\mathsf{Frg}_{i}}]\). Note that \(\mathsf{Dec}_{\mathsf{H},\lambda}\) only outputs \(\bot\) if \(\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})=0\). Let \(\mathcal{E}_{1}\) denote the event \(\overline{\mathsf{Frg}_{i}}\), \(\mathcal{E}_{2}\) denote the event \(\tilde{x}^{(j)}\neq x^{(j)}\), \(\mathcal{E}_{3}\) denote the event \(\tilde{\sigma}^{(j)}\neq\sigma^{(j)}\), and \(\mathcal{E}_{4}\) denote the event \(\mathsf{pk}^{*}=\mathsf{pk}\). Then we lower bound the above probability as \[\Pr[E_{i}(\bot)\ |\ \overline{\mathsf{Frg}_{i}}]\geqslant\Pr\big[\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})=0\ \big|\ \mathcal{E}_{1}\wedge(\mathcal{E}_{2}\vee\mathcal{E}_{3})\wedge\mathcal{E}_{4}\big]\cdot\Pr[\mathcal{E}_{4}].\] Since we assume \(\overline{\mathsf{Frg}_{i}}\) is true (i.e., \(\mathcal{E}_{1}\) is true), we know that \[\Pr\big[\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})=0\ \big|\ \mathcal{E}_{1}\wedge(\mathcal{E}_{2}\vee\mathcal{E}_{3})\wedge\mathcal{E}_{4}\big]=1.\] This implies \[\Pr[E_{i}(\bot)\ |\ \mathcal{E}_{1}]\geqslant\Pr[\mathcal{E}_{4}]=\Pr[\mathsf{pk}^{*}=\mathsf{pk}],\] which implies \[\Pr[E_{i}\ |\ \overline{\mathsf{Frg}_{i}}]\geqslant 2\cdot\Pr[\mathsf{pk}^{*}=\mathsf{pk}]\geqslant\Pr[\mathsf{pk}^{*}=\mathsf{pk}].\]

We now analyze \(\Pr[\mathsf{pk}^{*}=\mathsf{pk}]\). Let \(\mathsf{pk}\) be the public key sampled by \(\mathsf{Enc}_{\mathsf{H},\lambda}(x)\) to generate \(C\). By our parameter choice \(c\in(0,1/2)\), at least \((1-c)d>d/2\) blocks of \(\tilde{C}\) contain at most a \(\rho_{in}\)-fraction of Hamming errors. Let \(\mathsf{bl}=K/d\) denote the length of any block of \(C\) and let \(\mathcal{J}\) denote the set of blocks with at most a \(\rho_{in}\)-fraction of Hamming errors. Then \[x^{(j)}\|\sigma^{(j)}\|\mathsf{pk}\|j=\mathsf{Dec}_{in}(\tilde{C}[(j-1)\cdot\mathsf{bl}+1,j\cdot\mathsf{bl}])\] for any \(j\in\mathcal{J}\); i.e., it is a correct decoding since the corrupt codeword block \(\tilde{C}^{(j)}=\tilde{C}[(j-1)\cdot\mathsf{bl}+1,j\cdot\mathsf{bl}]\) is within the unique decoding radius of \(C_{in}\). Define random variable \(X_{\kappa}\) for \(\kappa\in[\mu]\) as \[X_{\kappa}:=\begin{cases}1&x^{(j_{\kappa})}\|\sigma^{(j_{\kappa})}\|\mathsf{pk}\|j_{\kappa}=\mathsf{Dec}_{in}(\tilde{C}[(j_{\kappa}-1)\cdot\mathsf{bl}+1,j_{\kappa}\cdot\mathsf{bl}])\\ 0&\text{otherwise}\end{cases}.\] Then \[\Pr[X_{\kappa}=1]=\Pr\left[x^{(j_{\kappa})}\|\sigma^{(j_{\kappa})}\|\mathsf{pk}\|j_{\kappa}=\mathsf{Dec}_{in}(\tilde{C}[(j_{\kappa}-1)\cdot\mathsf{bl}+1,j_{\kappa}\cdot\mathsf{bl}])\right]=\Pr[j_{\kappa}\in\mathcal{J}]\geqslant(1-c)>1/2.\] Let \(q=(1-c)>1/2\). By a Chernoff bound, we have that \[\Pr\left[\sum_{\kappa\in[\mu]}X_{\kappa}>\frac{\mu}{2}\right]\geqslant 1-\exp(-\mu\cdot(q-1/2)^{2}/(2q)).\] This implies that with probability at least \[p:=1-\exp(-\mu\cdot(q-1/2)^{2}/(2q)),\] we have \(\mathsf{pk}^{*}=\mathsf{pk}\). Thus \[\Pr[E_{i}\ |\ \overline{\mathsf{Frg}_{i}}]\geqslant p.\] Throughout the above analysis, we only assumed that no forgery occurred.
For any arbitrary \(i\), the probability that \(\overline{\mathsf{Frg}_{i}}\) occurs is at least \(1-\varepsilon_{\Pi}(\lambda)\), where \(\varepsilon_{\Pi}(\cdot)\) is a negligible function that depends on the security of the digital signature scheme \(\Pi\). Thus by a union bound over all \(i\), we have \[\Pr[\exists i\in[k]\colon\Pr[\mathsf{Dec}_{\mathsf{H},\lambda}^{\tilde{C}}(i)\in\{x_{i},\bot\}]<p]\leqslant k\cdot\varepsilon_{\Pi}(\lambda).\] Setting \(\varepsilon_{\mathsf{F}}(\lambda):=k\cdot\varepsilon_{\Pi}(\lambda)\), we have that \(\varepsilon_{\mathsf{F}}(\lambda)\) is negligible in \(\lambda\) since \(k(\lambda)\) is a polynomial, showing Item 3.

Finally, we analyze \(\delta\) and Item 4. By our choice of \(\rho=c\cdot\rho_{in}\), we know that for _any_ \(\tilde{C}\in\{0,1\}^{n}\) such that \(\mathsf{HAM}(\tilde{C},C)\leqslant\rho\), at least \((1-c)\cdot d\) blocks of \(\tilde{C}\) contain at most a \(\rho_{in}\)-fraction of Hamming errors. Again letting \(\mathcal{J}\subset[d]\) denote the indices of these blocks, we have \(|\mathcal{J}|>d/2\). For \(\mathsf{bl}=K/d\), this again implies that \[x^{(j)}\|\sigma^{(j)}\|\mathsf{pk}\|j=\mathsf{Dec}_{in}(\tilde{C}[(j-1)\cdot\mathsf{bl}+1,j\cdot\mathsf{bl}])\] for any \(j\in\mathcal{J}\), which implies for any \(j\in\mathcal{J}\) we have that \[\Pr[\mathsf{Dec}_{\mathsf{H},\lambda}^{\tilde{C}}(i)=x_{i}|\mathsf{pk}^{*}=\mathsf{pk}]=1\] whenever \((j-1)\cdot r(\lambda)<i\leqslant j\cdot r(\lambda)\). Thus for any \(j\in\mathcal{J}\) and \(i\in[k]\) such that \((j-1)\cdot r(\lambda)<i\leqslant j\cdot r(\lambda)\), by Chernoff we have that \[\Pr[\mathsf{Dec}_{\mathsf{H},\lambda}^{\tilde{C}}(i)=x_{i}]=\Pr[\mathsf{pk}^{*}=\mathsf{pk}]\geqslant 1-\exp(-\mu\cdot(1/2-c)^{2}/(2(1-c))).\] By appropriate choice of \(\mu\), we can ensure that \[1-\exp(-\mu\cdot(1/2-c)^{2}/(2(1-c)))>2/3.\] Finally, we note that the set \[\mathcal{I}:=\{i\in[k]\colon j\in\mathcal{J}\ \wedge\ (j-1)\cdot r(\lambda)<i\leqslant j\cdot r(\lambda)\}\] has size \(|\mathcal{I}|>k/2\). Thus we can set \(\delta=1/2\). Here we have shown that for _any_ \(\tilde{C}\in\{0,1\}^{n}\) such that \(\mathsf{HAM}(\tilde{C},C)\leqslant\rho\), there exists a set \(\mathsf{Good}(\tilde{C}):=\mathcal{I}\) such that \(|\mathcal{I}|\geqslant\delta\cdot k\). This is for any \(\tilde{C}\), and in particular, any corrupt codeword that a PPT adversary could produce. Thus we have that for any PPT adversary \(\mathcal{A}\), any \(x\in\{0,1\}^{k}\), and \(C=\mathsf{Enc}_{\mathsf{H},\lambda}(x)\), we have \[\Pr[\mathsf{Limit}(\mathcal{A}(C),\rho,\delta,x,C,\lambda)=1]=0.\]

## V Proof of Theorem 2

We dedicate this section to showing that \(\mathcal{C}_{\mathsf{I},\lambda}=\{(\mathsf{Enc}_{\mathsf{I},\lambda},\mathsf{Dec}_{\mathsf{I},\lambda})\}_{\lambda\in\mathbb{N}}\) satisfies Theorem 2, where \(\mathsf{Enc}_{\mathsf{I},\lambda}\) and \(\mathsf{Dec}_{\mathsf{I},\lambda}\) are defined in Algorithms 3 and 4, respectively. We recall the theorem below.

**Theorem 2**.: _Let \(\Pi\) be an \(r:=r(\lambda)\)-length signature scheme._
There exists a constant \(c\in(0,1/2)\) such that for every positive polynomial \(k(\cdot)\) and constant \(\rho^{*}\in(0,1/3)\), there exists a code family \(\mathcal{C}_{\mathsf{Ins}}:=\{C_{\mathsf{I},\lambda}[K,k(\lambda),2]\}_{\lambda\in\mathbb{N}}\) and a function \(\mu:=\mu(\lambda)\) such that \(\mathcal{C}_{\mathsf{Ins}}\) is a \((\ell,\rho,p,\delta)\)-InsDel \(\mathsf{crLDC}\) with

* \(K=O(\max\{k(1+\log(k)/r),r\})\),
* \(\ell=O((\log^{3}(n)+\mu)\cdot(r+\log(k)))\),
* \(\rho=\Theta(1)\),
* \(p=1-\rho^{*}-\exp(-\mu(1/2-c)^{2}/(2(1-c)))>2/3\), and
* \(\delta=1-\Theta(\rho)\),

where \(k:=k(\lambda)\).

Proof.: Fix any \(x\in\{0,1\}^{k}\) and let \(C=\mathsf{Enc}_{\mathsf{I},\lambda}(x)\). In the definition of \(\mathsf{Enc}_{\mathsf{I},\lambda}\), we know that \(C=C^{(1)}\|\cdots\|C^{(d)}\) for \(d=\lceil k/r(\lambda)\rceil\). First assume that \(k\geqslant r(\lambda)\). Each block \(C^{(j)}\) is the \(\mathsf{SZ}\) encoding of \(x^{(j)}\|\sigma^{(j)}\|\mathsf{pk}\|j\), appended at the front and back with zero-buffers. Note that the bit-length of \(j\) is \(\log(d)=\log(k)-\log(r(\lambda))\); for simplicity, we assume \(j\in\{0,1\}^{\log(k)}\) (we can pad to length \(\log(k)\) otherwise). Then we have \(\tau:=|x^{(j)}\|\sigma^{(j)}\|\mathsf{pk}\|j|=3\cdot r(\lambda)+\log(k)\). This gives us that \(|c^{(j)}|=\tau/\beta_{\mathsf{sz}}\), and that \[|C^{(j)}|=2\alpha\tau+\tau/\beta_{\mathsf{sz}}=(2\alpha+(1/\beta_{\mathsf{sz}}))\tau.\] Finally, this gives \[|C|=K=d\cdot(2\alpha+(1/\beta_{\mathsf{sz}}))\cdot\tau=\left(\frac{k}{r(\lambda)}\right)\cdot(2\alpha+(1/\beta_{\mathsf{sz}}))\cdot(3\cdot r(\lambda)+\log(k))=O\bigg(k\cdot\Big(1+\frac{\log(k)}{r(\lambda)}\Big)\bigg),\] where the last equality holds since \(\alpha,\beta_{\mathsf{sz}}\) are constants. Note here that \(k\) is sufficiently large whenever \(\mathsf{SZ}(t)\) exists for all \(t\geqslant\log(k)\). Now whenever \(r(\lambda)>k\), we have that \(d=1\) and we simply sign and encode a single block of length \(\tau\), which yields a single block of length \[(2\alpha+(1/\beta_{\mathsf{sz}}))\tau=(2\alpha+(1/\beta_{\mathsf{sz}}))\cdot(3r(\lambda)+\log(k))=\Theta(r(\lambda))\] since \(r(\lambda)>k\) and \(\alpha,\beta_{\mathsf{sz}}\) are constants. Thus we have \[K=O(\max\{k(1+\log(k)/r(\lambda)),r(\lambda)\}).\]

For the remainder of the proof, let \(\beta:=(2\alpha+(1/\beta_{\mathsf{sz}}))\) and \(\tilde{C}\in\{0,1\}^{K^{\prime}}\) such that \(\mathsf{ED}(C,\tilde{C})\leqslant\rho\). We introduce some preliminary definitions (Definitions 4 and 5) and lemmas (Lemmas 2 to 5), due to Block et al. [2], needed for the proof.

**Definition 4**.: _A block decomposition of a (corrupt) codeword \(\tilde{C}\in\{0,1\}^{K^{\prime}}\) is a non-decreasing map \(\phi\colon[K^{\prime}]\to[d]\) for \(K^{\prime},d\in\mathbb{N}\)._

For any block decomposition \(\phi\), since \(\phi\) is a non-decreasing map we have that \(\phi^{-1}(j)\) for any \(j\in[d]\) is an interval. That is, \(\phi^{-1}(j)=\{l_{j},l_{j}+1,\ldots,r_{j}\}\) for integers \(l_{j},r_{j}\in[K^{\prime}]\) and \(l_{j}\leqslant r_{j}\). Thus \(\phi\) induces a partition of \([K^{\prime}]\) into \(d\) intervals of the form \(\{\phi^{-1}(j)\colon j\in[d]\}\). Recalling that \(C=\mathsf{Enc}_{\mathsf{I},\lambda}(x)\) is of the form \(C^{(1)}\|\cdots\|C^{(d)}\), we have the following.

**Lemma 2**.: _There exists a block decomposition \(\phi_{0}\colon[K^{\prime}]\to[d]\) such that_ \[\sum_{j\in[d]}\mathsf{ED}\Big(\tilde{C}[\phi_{0}^{-1}(j)],C^{(j)}\Big)\leqslant\rho\,. \tag{1}\]
Intuitively, Lemma 2 says there exists a block decomposition such that the total edit distance between \(\tilde{C}\) and \(C\) is exactly given by the sum of edit distances between the (possibly corrupt) blocks \(\tilde{C}[\phi_{0}^{-1}(j)]\) and blocks \(C^{(j)}\). Next we define the notion of a \(\gamma\)-good block.

**Definition 5**.: _For \(\gamma\in(0,1)\) and \(j\in[d]\), we say that block \(j\) is \(\gamma\)-good with respect to a block decomposition \(\phi\) if \(\mathsf{ED}(\tilde{C}[\phi^{-1}(j)],C^{(j)})\leqslant\gamma\). Otherwise, we say block \(j\) is \(\gamma\)-bad._

With respect to block decomposition \(\phi_{0}\), the number of \(\gamma\)-bad blocks is bounded, and the length of the intervals \(\phi_{0}^{-1}(j)\) is bounded for every \(\gamma\)-good block \(j\).

**Lemma 3**.: _Let \(\alpha\) be the constant given by Lemma 4, let \(\beta_{\mathsf{sz}}\) be the constant given by Lemma 1, and let \(\beta=(2\alpha+(1/\beta_{\mathsf{sz}}))\)._

1. _The total fraction of_ \(\gamma\)_-bad blocks in_ \(\tilde{C}\) _is at most_ \(2\cdot\beta\cdot\rho/(\gamma\cdot\alpha)\)_._
2. _For any_ \(\gamma\)_-good block_ \(j\)_, we have that_ \((\beta-\alpha\gamma)\cdot\tau\leqslant|\phi_{0}^{-1}(j)|\leqslant(\beta+\alpha\gamma)\cdot\tau\)_, where_ \(\tau=|x^{(j)}\|\sigma^{(j)}\|\mathsf{pk}\|j|\)_._

Given the notion of \(\gamma\)-good, we can now formally introduce the algorithms \(\mathsf{NBS}\) and \(\mathsf{BlockDec}\) along with their guarantees.

**Lemma 4**.: _Let \(\rho_{\mathsf{sz}}\) be the constant given by Lemma 1. There exists a constant \(\alpha=\Omega(\rho_{\mathsf{sz}})\) and a randomized oracle algorithm \(\mathsf{NBS}\) with the following property. Let \(\rho^{*}\in(0,1/2)\) be a fixed constant, let \(t\) be sufficiently large, \(d\) be a parameter, let \(b=(b^{(1)},\ldots,b^{(d)})\in\{0,1\}^{t}\) be any string where \(b^{(i)}\in\{0,1\}^{t/d}\) for all \(i\in[d]\), and let \(c=(c^{(1)},\ldots,c^{(d)})\) for_ \[c^{(i)}=0^{\alpha(t/d)}\|\mathsf{SZ}.\mathsf{Enc}(b^{(i)}\|i)\|0^{\alpha(t/d)}\] _for all \(i\in[d]\). Then there exists a negligible function \(\vartheta(\cdot)\) such that for any \(c^{\prime}\in\{0,1\}^{K^{\prime}}\) satisfying \(\mathsf{ED}(c,c^{\prime})\leqslant\rho=\Theta(\rho^{*}\cdot\rho_{\mathsf{sz}})\), we have that_ \[\Pr\Big[\Pr[\mathsf{NBS}^{c^{\prime}}(j)\neq b^{(j)}\ |\ j\text{ is $\gamma$-good}]\geqslant\rho^{*}\Big]\leqslant\vartheta(K^{\prime}), \tag{2}\] _where the probability is taken over the random coins of \(\mathsf{NBS}\) and \(j\xleftarrow{\$}[d]\). Furthermore, the algorithm \(\mathsf{NBS}\) makes \(O(\log^{3}(K^{\prime})\cdot(t/d+\log(d)))\) oracle queries for any input \(j\in[d]\), and if \(c=c^{\prime}\) then the above probability is \(0\)._

**Lemma 5**.: _Let \(\rho_{\mathsf{sz}}\) be the constant given by Lemma 1. There exists a constant \(\alpha=\Omega(\rho_{\mathsf{sz}})\) and a randomized oracle algorithm \(\mathsf{BlockDec}\) with the following properties. Let \(\rho^{*}\in(0,1/2)\) be a fixed constant, let \(t\) be sufficiently large, let \(d\) be a parameter, let \(b=(b^{(1)},\ldots,b^{(d)})\in\{0,1\}^{t}\) be any string where \(b^{(i)}\in\{0,1\}^{t/d}\) for all \(i\in[d]\), and let \(c=(c^{(1)},\ldots,c^{(d)})\) for_ \[c^{(i)}=0^{\alpha(t/d)}\|\mathsf{SZ}.\mathsf{Enc}(b^{(i)}\|i)\|0^{\alpha(t/d)}\] _for all \(i\in[d]\). Then for any \(c^{\prime}\in\{0,1\}^{n^{\prime}}\) satisfying \(\mathsf{ED}(c,c^{\prime})\leqslant\rho=\Theta(\rho^{*}\cdot\rho_{\mathsf{sz}})\), we have that_
1. _For any_ \(\gamma\)_-good block_ \(j\in[d]\)_:_ \[\Pr_{i\in\phi_{0}^{-1}(j)}\Big[\mathsf{BlockDec}^{c^{\prime}}(i)\neq b^{(j)}\Big]\leqslant\gamma\;. \tag{3}\] _Furthermore, this probability is equal to zero if \(c^{\prime}=c\)._
2. \(\mathsf{BlockDec}\) _has query complexity_ \(O(t/d+\log(d))\)_._

We now have the necessary components to finish proving Theorem 2. We begin by showing Item 2 of Definition 1. Let \(x\in\{0,1\}^{k}\), \(C=\mathsf{Enc}_{\mathsf{I},\lambda}(x)\), and let \((\mathsf{pk},\mathsf{sk})\) be the public and private key pair sampled by \(\mathsf{Enc}_{\mathsf{I},\lambda}\) during the encoding of \(x\) as \(C\). Suppose that \(\tilde{C}=C\) and let \(i\in[k]\) be any index. We want to argue that \(\Pr[\mathsf{Dec}^{\tilde{C}}_{\mathsf{I},\lambda}(i)=x_{i}]=1\). To see this, first observe that for \((j-1)\cdot r(\lambda)<i\leqslant j\cdot r(\lambda)\), we have \[\Pr[\mathsf{Dec}^{\tilde{C}}_{\mathsf{I},\lambda}(i)=x_{i}]=\Pr[\mathsf{pk}^{*}=\mathsf{pk}]\cdot\Pr[\tilde{x}^{(j)}=x^{(j)}]\cdot\Pr[\tilde{\sigma}^{(j)}=\sigma^{(j)}],\] where \(x^{(j)}\) is the \(j\)-th block of \(x\), \(\sigma^{(j)}\) is the signature for \(x^{(j)}\|j\), \(\mathsf{pk}^{*}\) is the public key recovered via majority, and \(\tilde{x}^{(j)},\tilde{\sigma}^{(j)}\) are parsed from the output of \(\mathsf{NBS}(j)\). First notice that since \(\tilde{C}=C\), every block \(j\in[d]\) of \(\tilde{C}\) is \(0\)-good with respect to \(C\). By Lemma 5, for every \(j\in[d]\) we have that \[\Pr_{i\in\phi_{0}^{-1}(j)}\Big[\mathsf{BlockDec}^{\tilde{C}}(i)=C^{(j)}\Big]=1.\] This implies that for every \(\kappa\in[\mu]\), we have \[\Pr[\mathsf{pk}^{(i_{\kappa})}=\mathsf{pk}]=1,\] which implies \[\Pr[\mathsf{pk}^{*}=\mathsf{pk}]=1.\] Next, since \(\tilde{C}=C\), by Lemma 4 we have that \[\Pr_{\tilde{m}^{(j)}\leftarrow\mathsf{NBS}^{\tilde{C}}(j)}\Big[\tilde{m}^{(j)}=x^{(j)}\|\sigma^{(j)}\|\mathsf{pk}\|j\Big]=1.\] This implies that \(\mathsf{Ver}_{\mathsf{pk}^{*}}(x^{(j)}\|j,\sigma^{(j)})=1\) with probability \(1\), yielding \(\Pr[\mathsf{Dec}^{\tilde{C}}_{\mathsf{I},\lambda}(i)=x_{i}]=1\).

We now work towards proving Items 3 and 4. Let \(\mathcal{A}\) be a PPT adversary and let \(\tilde{C}\leftarrow\mathcal{A}(C)\) such that \(\tilde{C}\in\{0,1\}^{K^{\prime}}\) for some \(K^{\prime}\) and \(\mathsf{ED}(\tilde{C},C)\leqslant\rho\). We begin with Item 3. Let \(D_{i}:=\mathsf{Dec}^{\tilde{C}}_{\mathsf{I},\lambda}(i)\) denote the random variable of running the decoder with input \(i\) and oracle \(\tilde{C}\). Our goal is to show that \[\Pr[\exists i\in[k]\colon\ \Pr[D_{i}\in\{x_{i},\bot\}]<p]\leqslant\varepsilon_{\mathsf{F}}(\lambda),\] where \(\varepsilon_{\mathsf{F}}(\lambda)\) is some negligible function. Equivalently stated, we want to show that \[\Pr[\forall i\in[k]\colon\ \Pr[D_{i}\in\{x_{i},\bot\}]\geqslant p]\geqslant 1-\varepsilon_{\mathsf{F}}(\lambda).\] We now directly analyze \(\Pr[D_{i}\in\{x_{i},\bot\}]\). The analysis here is almost identical to the analysis of Theorem 1, except now we must take into consideration the algorithms \(\mathsf{BlockDec}\) and \(\mathsf{NBS}\). Let \(\mathsf{Frg}_{i}\) denote the event that \(\mathcal{A}\) produces a forgery \((\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})\), where \(j\in[d]\) satisfies \((j-1)\cdot r(\lambda)<i\leqslant j\cdot r(\lambda)\) and \(\tilde{x}^{(j)},\tilde{\sigma}^{(j)}\) are recovered from the output of \(\mathsf{NBS}(j)\).
Then we have that \[\Pr[D_{i}\in\{x_{i},\bot\}]=\Pr[D_{i}\in\{x_{i},\bot\}\ |\ \overline{\mathsf{Frg}_{i}}].\] Since the decoder can never output \(\bot\) and \(x_{i}\) simultaneously, we have that \[\Pr[D_{i}\in\{x_{i},\bot\}\ |\ \overline{\mathsf{Frg}_{i}}]=\Pr[D_{i}=x_{i}\ |\ \overline{\mathsf{Frg}_{i}}]+\Pr[D_{i}=\bot\ |\ \overline{\mathsf{Frg}_{i}}].\]

We first analyze \(\Pr[D_{i}=x_{i}\mid\overline{\mathsf{Frg}_{i}}]\). First notice that in this case, we have

1. \(\mathsf{pk}^{*}\neq\bot\) and \(\tilde{m}^{(j)}\neq\bot\); and
2. \(\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})=1\).

Since we assume that \((\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})\) is not a forgery, given the above it must be the case that

1. \(\tilde{x}^{(j)}=x^{(j)}\) and \(\tilde{\sigma}^{(j)}=\sigma^{(j)}\); and
2. \(\mathsf{pk}^{*}=\mathsf{pk}\).

Independent runs of \(\mathsf{NBS}\) and \(\mathsf{BlockDec}\) imply \[\Pr[D_{i}=x_{i}\mid\overline{\mathsf{Frg}_{i}}]=\Pr[\mathsf{pk}^{*}=\mathsf{pk}\ \wedge\ \tilde{m}^{(j)}\neq\bot]=\Pr[\mathsf{pk}^{*}=\mathsf{pk}]\cdot\Pr[\tilde{m}^{(j)}\neq\bot].\]

Next we analyze \(\Pr[D_{i}=\bot|\overline{\mathsf{Frg}_{i}}]\). The decoder outputs \(\bot\) if \(\mathsf{pk}^{*}=\bot\) or \(\tilde{m}^{(j)}=\bot\) or \(\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})=0\), noting that the final verification is only checked conditioned on \(\mathsf{pk}^{*}\neq\bot\) and \(\tilde{m}^{(j)}\neq\bot\). Let \(\mathcal{E}_{1}\) denote the event that \(\mathsf{pk}^{*}=\bot\) and let \(\mathcal{E}_{2}\) denote the event that \(\tilde{m}^{(j)}=\bot\). Then we have \[\Pr[D_{i}=\bot\mid\overline{\mathsf{Frg}_{i}}]=\Pr[\mathcal{E}_{1}]+\Pr[\mathcal{E}_{2}]-\Pr[\mathcal{E}_{1}\ \wedge\ \mathcal{E}_{2}]+\Pr\big[\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})=0\ \big|\ \overline{\mathcal{E}}_{1}\wedge\overline{\mathcal{E}}_{2}\wedge\overline{\mathsf{Frg}_{i}}\big]\cdot\Pr[\overline{\mathcal{E}}_{1}\ \wedge\ \overline{\mathcal{E}}_{2}].\] Since we assume no forgery has occurred, the last summation term of the above probability can be lower bounded as \[\Pr\big[\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})=0\ \big|\ \overline{\mathcal{E}}_{1}\wedge\overline{\mathcal{E}}_{2}\wedge\overline{\mathsf{Frg}_{i}}\big]\cdot\Pr[\overline{\mathcal{E}}_{1}\ \wedge\ \overline{\mathcal{E}}_{2}]\geqslant\Pr\big[\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})=0\ \big|\ (\tilde{x}^{(j)}\neq x^{(j)}\ \vee\ \tilde{\sigma}^{(j)}\neq\sigma^{(j)})\ \wedge\ \mathsf{pk}^{*}=\mathsf{pk}\big]\cdot\Pr[\overline{\mathcal{E}}_{2}\ \wedge\ \mathsf{pk}^{*}=\mathsf{pk}]=\Pr[\overline{\mathcal{E}}_{2}\ \wedge\ \mathsf{pk}^{*}=\mathsf{pk}],\] where for the lower bound we ignore the event that \(\mathsf{pk}^{*}\neq\mathsf{pk}\ \wedge\ \mathsf{pk}^{*}\neq\bot\). Thus we have that \[\Pr[D_{i}\in\{x_{i},\bot\}\mid\overline{\mathsf{Frg}_{i}}]\geqslant\Pr[\mathcal{E}_{1}]+\Pr[\mathcal{E}_{2}]-\Pr[\mathcal{E}_{1}]\cdot\Pr[\mathcal{E}_{2}]+\Pr[\overline{\mathcal{E}}_{2}]\cdot\Pr[\mathsf{pk}^{*}=\mathsf{pk}],\] where the inequality holds since \(\mathsf{NBS}\) and \(\mathsf{BlockDec}\) are run independently of each other.
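As a quick numerical sanity check of the success probability \(p=1-\rho^{*}-\exp(-\mu(q-1/2)^{2}/(2q))\) targeted by Theorem 2, the following sketch (with illustrative values of \(q\) and \(\rho^{*}\) chosen for this example only, not parameters fixed by the paper) finds how large the number of samples \(\mu\) must be for \(p\) to exceed \(2/3\):

```python
import math

# Illustrative values only; q and rho_star are assumed sample values,
# not the parameters prescribed in the paper.
q, rho_star = 0.55, 0.10        # q = Pr[X_kappa = 1] > 1/2, NBS tolerance rho*

def p(mu):
    """Success probability of Theorem 2 for a given number of samples mu."""
    return 1 - rho_star - math.exp(-mu * (q - 0.5) ** 2 / (2 * q))

mu_min = next(m for m in range(1, 10**6) if p(m) > 2 / 3)
print(mu_min, p(mu_min))        # mu_min = 641 for these illustrative values
```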
With the above lower bound established, we turn to analyzing \(\Pr[\mathsf{pk}^{*}=\mathsf{pk}]\). Let \(\mathsf{pk}\) be the public key sampled by \(\mathsf{Enc}_{\mathsf{I},\lambda}(x)\) to generate codeword \(C\). Recovery of \(\mathsf{pk}\) is performed by uniformly sampling \(\mu\) indices of \(\tilde{C}\), running \(\mathsf{BlockDec}\) on these indices, and taking the majority of the public keys we recover. Intuitively, we want to ensure \(\mathsf{pk}^{*}=\mathsf{pk}\) with high probability via a Chernoff bound. Define random variable \(X_{\kappa}\) for \(\kappa\in[\mu]\) as \[X_{\kappa}:=\begin{cases}1&x^{(j)}\|\sigma^{(j)}\|\mathsf{pk}\|j=\mathsf{BlockDec}(i_{\kappa})\text{ for some }j\in[d]\\ 0&\text{otherwise}\end{cases}.\] Thus we need to ensure that \(\Pr[X_{\kappa}=1]>1/2\). By Lemma 5, we know that as long as the index \(i_{\kappa}\) lies within the bounds of some \(\gamma\)-good block \(j\), then \(X_{\kappa}=1\) with probability at least \(1-\gamma\). Let \(\mathcal{J}\subset[d]\) be the indices of the \(\gamma\)-good blocks in \(\tilde{C}\). Then we have \[\Pr[X_{\kappa}=1]\geqslant\Pr[i_{\kappa}\in\phi_{0}^{-1}(j)\mid j\in\mathcal{J}]\cdot(1-\gamma).\] By Lemma 3, we know that there is at least a \((1-2\beta\rho/(\gamma\alpha))\)-fraction of blocks which are \(\gamma\)-good, which implies \(|\mathcal{J}|\geqslant d\cdot(1-2\beta\rho/(\gamma\alpha))\). Since we want \(\Pr[X_{\kappa}=1]>1/2\), by Lemmas 2 and 3 we have \[(d/n)\cdot(1-(2\beta\rho)/(\gamma\alpha))\cdot(\beta-\alpha\gamma)\cdot\tau\cdot(1-\gamma)>1/2.\] Solving for \(\rho\) above yields \[\rho<\frac{\gamma\cdot\alpha}{2\cdot\beta}\cdot\bigg(1-\frac{\beta}{2\cdot(1-\gamma)\cdot(\beta-\alpha\cdot\gamma)}\bigg).\] Borrowing the parameters of Block et al. [2, Proposition 22], we set \(\gamma=1/12\) and \(\alpha=2\gamma\rho_{in}/(\gamma+6)\). Recall also that \(\beta=2\alpha+(1/\beta_{\mathsf{sz}})\). Then we have \[\alpha=(2/73)\cdot\rho_{in},\qquad\beta=(4/73)\cdot\rho_{in}+(1/\beta_{\mathsf{sz}}),\] \[\frac{\gamma\cdot\alpha}{2\cdot\beta}=\frac{(1/12)\cdot(2/73)\cdot\rho_{in}}{2\cdot(4/73)\cdot\rho_{in}+2/\beta_{\mathsf{sz}}}<1,\] \[\frac{\beta}{2(1-\gamma)(\beta-\alpha\cdot\gamma)}=\frac{6}{11}\cdot\frac{(4/73)\rho_{in}+(1/\beta_{\mathsf{sz}})}{(23/438)\rho_{in}+(1/\beta_{\mathsf{sz}})}<1.\] Given these parameters, since \(\rho_{in}<1\) we also have \[\frac{\gamma\cdot\alpha}{2\cdot\beta}\geqslant\frac{(2/876)\cdot\rho_{in}}{(8/73)+2/\beta_{\mathsf{sz}}}.\] Thus setting \[\rho:=\bigg(\frac{(2/876)\rho_{in}}{(8/73)+2/\beta_{\mathsf{sz}}}\bigg)\bigg(1-\frac{6}{11}\cdot\frac{(4/73)\rho_{in}+(1/\beta_{\mathsf{sz}})}{(23/438)\cdot\rho_{in}+(1/\beta_{\mathsf{sz}})}\bigg)=\Theta(1)\] ensures that \(\Pr[X_{\kappa}=1]>1/2\). Now let \(q:=\Pr[X_{\kappa}=1]>1/2\). By a Chernoff bound we have that \[\Pr\left[\sum_{\kappa\in[\mu]}X_{\kappa}>\frac{\mu}{2}\right]\geqslant 1-\exp(-\mu\cdot(q-1/2)^{2}/(2q)).\] This implies \(\mathsf{pk}^{*}=\mathsf{pk}\) with probability at least \(1-\exp(-\mu(q-1/2)^{2}/(2q))\), which implies \[\Pr[\mathsf{pk}^{*}=\bot]\leqslant\exp(-\mu(q-1/2)^{2}/(2q)).\] Given that \[\Pr[\mathsf{pk}^{*}=\mathsf{pk}]\geqslant 1-\exp(-\mu(q-1/2)^{2}/(2q)),\] we now analyze \(\Pr[\tilde{m}^{(j)}\neq\bot]\). Note that by our noisy binary search algorithm in Lemma 4, any \(\gamma\)-good block \(j\) is recovered by \(\mathsf{NBS}\) with probability at least \(1-\rho^{*}\), except with probability \(\vartheta(K^{\prime})\), where \(\rho^{*}\) is some constant and \(\vartheta\) is a negligible function.
Conditioning on the case where we recover any \(\gamma\)-good block with probability at least \((1-\rho^{*})\) we have \[\Pr[\tilde{m}^{(j)}\neq\bot\ |\ j\ \text{is}\ \gamma\text{-good}]\geqslant(1-\rho^{*}).\] Equivalently, we have \[\Pr[\tilde{m}^{(j)}=\bot\ |\ j\ \text{is}\ \gamma\text{-good}]\leqslant\rho^{*}.\] Putting it all together we have that \[\Pr[D_{i}\in\{x_{i},\bot\}\ |\ \overline{\mathsf{Frg}_{i}}]\] \[\geqslant(1-\rho^{*})\cdot\bigg{(}1-\exp\biggl{(}-\mu\cdot\frac{ (q-1/2)^{2}}{2q}\biggr{)}\biggr{)}\] \[-\rho^{*}\cdot\exp\biggl{(}-\mu\cdot\frac{(q-1/2)^{2}}{2q}\biggr{)}\] \[=1-\rho^{*}-\exp\biggl{(}-\mu\cdot\frac{(q-1/2)^{2}}{2q}\biggr{)},\] where \(\rho^{*}\in(0,1/2)\) is a fixed constant we are free to choose. In particular, we choose \(\rho^{*}<1/3\) to help ensure \(p>2/3\) for appropriate choice of \(\mu\). In the above analysis, we conditioned on \(\overline{\mathsf{Frg}_{i}}\) and on \(\mathsf{NBS}\) succeeding. By the security of the digital signature scheme, \(\overline{\mathsf{Frg}_{i}}\) occurs with probability at least \(1-\varepsilon_{\Pi}(\lambda)\), where \(\varepsilon_{\Pi}(\lambda)\) is a negligible function for the security of the digital signature scheme. By Lemma 4, with probability at least \(1-\vartheta(n^{\prime})\), \(\mathsf{NBS}\) outputs correctly. Finally, by union bound over \(i\in[k]\), we set \(\varepsilon_{\mathsf{F}}(\lambda,n):=k\cdot\varepsilon_{\Pi}(\lambda)\cdot \vartheta(n)\), which is negligible in \(\lambda\) since \(k(\lambda)=\operatorname{poly}(\lambda)\). Next we work towards proving Item 4. Again let \(\tilde{C}\leftarrow\mathcal{A}(C)\) for a PPT adversary \(\mathcal{A}\) and \(C=\mathsf{Enc}_{\mathsf{I},\lambda}(x)\) for \(x\in\{0,1\}^{k}\). Our goal is to show that there exists a negligible function \(\varepsilon_{\mathsf{L}}(\cdot)\) such that for all \(\lambda\in\mathbb{N}\) and all \(x\in\{0,1\}^{k}\), we have \[\Pr[\mathsf{Limit}(\mathcal{A}(C),\rho,\delta,x,y)=1]\leqslant\varepsilon_{ \mathsf{L}}(\lambda).\] We directly analyze the size of the set \(\mathsf{Good}(\tilde{C})\). As before, let \(D_{i}\) be the random variable denoting the output of \(\mathsf{Dec}^{\tilde{C}}_{\mathsf{I},\lambda}(i)\). Then we are interested in lower bounding \(\Pr[D_{i}=x_{i}]\) for a fixed \(i\in[k]\). By definition of \(\mathsf{Dec}_{\mathsf{I},\lambda}\), the decoder only outputs a bit if 1. \(\mathsf{pk}^{*}\neq\bot\); 2. \(\tilde{m}^{(j)}\neq\bot\); and 3. \(\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})=1\). As before, let \(\mathcal{E}_{1}\) denote the event that \(\mathsf{pk}^{*}=\bot\) and \(\mathcal{E}_{2}\) denote the event that \(\tilde{m}^{(j)}=\bot\). 
Then by definition we have \[\Pr[D_{i}=x_{i}]= \Pr[\overline{\mathcal{E}}_{1}]\cdot\Pr[\overline{\mathcal{E}}_{ 2}|\overline{\mathcal{E}}_{1}]\] \[\cdot\Pr[\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j, \tilde{\sigma}^{(j)})=1\ |\ \overline{\mathcal{E}}_{1}\ \wedge\ \overline{\mathcal{E}}_{2}].\] Since \(\mathsf{pk}^{*}\) and \(\tilde{m}^{(j)}\) are computed independently, we have the above equals \[\Pr[D_{i}=x_{i}]\] \[= \Pr[\overline{\mathcal{E}}_{1}]\cdot\Pr[\overline{\mathcal{E}}_{2 }]\cdot\Pr[\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j,\tilde{\sigma}^{ (j)})=1\ |\ \overline{\mathcal{E}}_{1}\ \wedge\ \overline{\mathcal{E}}_{2}]\] \[\geqslant \Pr[\mathsf{pk}^{*}=\mathsf{pk}]\cdot\Pr[\tilde{m}^{(j)}=m^{(j)}]\] \[\cdot \Pr[\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j,\tilde{ \sigma}^{(j)})=1\ |\ \mathsf{pk}^{*}=\mathsf{pk}\ \wedge\ \tilde{m}^{(j)}=m^{(j)}],\] where the inequality follows since it is a subset of all possible events in consideration. and \(m^{(j)}:=x^{(j)}\|\sigma^{(j)}\|\mathsf{pk}\|j\). Now we analyze each of the terms in the above lower bound. First note that the final term \[\Pr[\mathsf{Ver}_{\mathsf{pk}^{*}}(\tilde{x}^{(j)}\|j,\tilde{\sigma}^{(j)})=1 \ |\ \mathsf{pk}^{*}=\mathsf{pk}\ \wedge\ \tilde{m}^{(j)}=m^{(j)}]=1\] by definition. Thus we analyze the other two probability terms. By our prior work, we know that \[\Pr[\mathsf{pk}^{*}=\mathsf{pk}]\geqslant 1-\exp(-\mu(q-1/2)^{2}/(2q)).\] We now lower bound \(\Pr[\tilde{m}^{(j)}=m^{(j)}]\). Recall that \(\tilde{m}^{(j)}=\mathsf{NBS}^{\tilde{C}}(j)\). By Lemma 4 we have \[\Pr\biggl{[}\Pr\biggl{[}\tilde{m}^{(j)}\neq C^{(j)}\ |\ j\ \text{is}\ \gamma\text{-good}\biggr{]}\geqslant\rho^{*}\biggr{]} \leqslant\vartheta(K^{\prime}).\] Fix \(j\) to be a \(\gamma\)-good block. Then we have \[\Pr\Bigl{[}\Pr[\tilde{m}^{(j)}\neq C^{(j)}]\geqslant\rho^{*}\Bigr{]}\leqslant \vartheta(K^{\prime});\] or equivalently \[\Pr\Bigl{[}\Pr[\tilde{m}^{(j)}=C^{(j)}]\leqslant 1-\rho^{*}\Bigr{]} \leqslant\vartheta(K^{\prime}).\] This implies \[\Pr\Bigl{[}\Pr[\tilde{m}^{(j)}=C^{(j)}]\geqslant 1-\rho^{*}\Bigr{]} \geqslant 1-\vartheta(K^{\prime}).\] Thus with probability at least \(1-\vartheta(K^{\prime})\), we have \[\Pr[\tilde{m}^{(j)}=C^{(j)}]\geqslant 1-\rho^{*}.\] By Lemma 3, we know at least \(1-(2\beta\rho)/(\gamma\alpha)\)-fraction of blocks in \(\tilde{C}\) are \(\gamma\)-good. Since for every \(i\in[k]\) there is a unique block \(j\in[d]\) such that \((j-1)r(\lambda)<i\leqslant jr(\lambda)\), for the set \[\mathcal{G}:=\{i\in[k]\colon i\ \text{is in a}\ \gamma\text{-good block}\},\] we have \[|\mathcal{G}|\geqslant r(\lambda)\cdot\bigg{(}1-\frac{2\beta\rho}{\gamma\alpha} \bigg{)}\cdot d=\bigg{(}1-\frac{2\beta\rho}{\gamma\alpha}\bigg{)}\cdot k.\] Therefore at least \(1-(2\beta\rho)/(\gamma\alpha)\)-fraction of indices \(i\in[k]\) lie within a \(\gamma\)-good block. 
Putting it all together, for any \(\gamma\)-good block we have that with probability at least \(1-\vartheta(K^{\prime})\): \[\Pr[D_{i}=x_{i}]\geqslant(1-\exp(-\mu\cdot(q-1/2)^{2}/(2q)))\cdot(1-\rho^{*}).\] Choosing \(\mu\) and \(\rho^{*}\in(0,1/3)\) appropriately such that \[(1-\exp(-\mu(q-1/2)^{2}/(2q)))\cdot(1-\rho^{*})>2/3,\] along with the fact that at least \(1-(2\beta\rho)/(\gamma\alpha)\)-fraction of indices \(i\in[k]\) lie within a \(\gamma\)-good block, we have \[\Pr[|\mathsf{Good}(\tilde{C})|\geqslant(1-(2\beta\rho)/(\gamma\alpha))\cdot k] \geqslant 1-\vartheta(K^{\prime}),\] which implies \[\Pr[\mathsf{Limit}(\tilde{C},\rho,\delta,x,C)=1] \leqslant\vartheta(K^{\prime})\] \[=\vartheta(\Theta(k))=:\varepsilon_{\mathsf{L}}(\lambda)\] for \(\delta=1-(2\beta\rho)/(\gamma\alpha)=1-\Theta(\rho)\). ## VI Acknowledgments Alexander R. Block was supported in part by the National Science Foundation under NSF CCF #1910659 and in part by DARPA under agreement No. HR00112020022 and No. HR00112020025. Jeremiah Blocki was supported in part by the National Science Foundation under awards CNS #2047272, CNS #1931443 and CCF #1910659. The views, opinions, findings, conclusions and/or recommendations expressed in this material are those of the author and should not be interpreted as reflecting the position or policy of the Department of Defense or the U.S. Government, and no official endorsement should be inferred.
2310.13631
Fluctuating parametric drive of coupled classical oscillators can simulate dissipative qubits
We investigate a system composed of two coupled oscillators subject to stochastic fluctuations in its internal parameters. In particular, we answer the question whether the well-known classical analogy of the quantum dynamics of two-level systems (TLS), i.e. qubits, provided by two coupled oscillators can be extended to simulate the dynamics of dissipative quantum systems. In the context of nanomechanics, the analogy in the dissipation free case has already been tested in multiple experimental setups, e.g., doubly clamped or cantilever string resonators and optically levitated particles. A well-known result of this classical analogy is that the relaxation and decoherence times of the analog quantum system must be equal, i.e. $T_1=T_2$, in contrast to the general case of quantum TLS. We show that this fundamentally quantum feature, i.e. $T_1\neq T_2$, can be implemented as well in the aforementioned classical systems by adding stochastic fluctuations in their internal parameters. Moreover, we show that these stochastic contributions can be engineered in the control apparatus of those systems, discussing, in particular, the application of this theory to levitated nanoparticles and to nanostring resonators.
Lorenzo Bernazzani, Guido Burkard
2023-10-20T16:29:47Z
http://arxiv.org/abs/2310.13631v3
# Fluctuating parametric driving of coupled classical oscillators can simulate dissipative qubits ###### Abstract We investigate a system composed of two coupled oscillators subject to stochastic fluctuations in its internal parameters. In particular, we answer the question whether the well-known classical analogy of the quantum dynamics of two-level systems (TLS), i.e. qubits, provided by two coupled oscillators can be extended to simulate the dynamics of dissipative quantum systems. In the context of nanomechanics, the analogy in the dissipation free case has already been tested in multiple experimental setups, e.g., doubly clamped or cantilever string resonators and optically levitated particles. A well-known result of this classical analogy is that the relaxation and decoherence times of the analog quantum system must be equal, i.e. \(T_{1}=T_{2}\), in contrast to the general case of quantum TLS. We show that this fundamentally quantum feature, i.e. \(T_{1}\neq T_{2}\), can be implemented as well in the aforementioned classical systems by adding stochastic fluctuations in their internal parameters. Moreover, we show that these stochastic contributions can be engineered in the control apparatus of those systems. ## I Introduction Enquiries on quantum-classical analogies have attracted much interest [1; 2; 3; 4], and it is well established that the dynamics of a quantum N-level system described by Schrodinger's equation can be simulated by classical systems, e.g. coupled classical harmonic oscillators [5; 6; 7; 8; 9; 10; 11; 12]. To date, several papers have outlined that this similarity between the two dynamics leads to the observation of purely quantum mechanical effects in classical systems, e.g. Rabi oscillations [5; 13; 14], Landau-Zener transitions [15], and also Stuckelberg interferometry [16; 17; 18; 11]. Quite surprisingly, some of these effects have been reported also in macroscopic systems [12], pushing this analogy even out to the macroscopic world. As a consequence, classical coupled oscillators constitute the simplest platform to study how a classical system can mimic quantum dynamics. Mechanics was indeed the first playground of physics. Nowadays, it is, maybe surprisingly, still investigated in the modern quest for the nanoscale miniaturization of physical devices [19; 20]. An astounding degree of control has been achieved in mechanical systems over the past years [21; 22; 23]. This has fostered the hope [24] to reach mesoscopic quantum superposition of massive objects and to study quantum effects of gravity in the lab [25; 26; 22], with the aim of uncovering some of the elusive aspects of the quantum-classical border [27]. This work is meant to give some new insights about the quantum-classical analogy, drawing from the aforementioned mapping of quantum evolution by means of classical oscillators. In particular, we address the issue of the trivial form of the relaxation term [11; 13; 8; 14] in the Schrodinger equation for the simulated quantum TLS, i.e., the fact that all the components of the Bloch vector (BV) decay with the same characteristic time. It is well known instead, that in the quantum case there are two relaxation times, the longitudinal \(T_{1}\) and the transverse \(T_{2}\). These are linked to the relaxation of the populations and of the coherences of the TLS state. Furthermore, they are related by the equation \(T_{2}^{-1}=(2T_{1})^{-1}+T_{\phi}^{-1}\), which also defines the phase relaxation time \(T_{\phi}\)[28]. 
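For instance, with illustrative values \(T_{1}=10\,\mu\)s and \(T_{\phi}=5\,\mu\)s (numbers chosen only to make the relation concrete, not taken from any particular experiment), this relation gives \(T_{2}=\big[(2T_{1})^{-1}+T_{\phi}^{-1}\big]^{-1}=4\,\mu\)s, so in general \(T_{2}\neq T_{1}\).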
Our aim is to show how noise can induce dissipation mechanisms with such quantum features, that were not grasped by the previous classical model. The addition of stochastic fluctuations and the solution of the associated Langevin dynamics [29] adds quantumness to the system, meaning that it provides the phenomenology of this model with a pure phase relaxation time, dependent on the noise strength. The quantum phase is the defining concept of quantum mechanics, giving rise to quantum superpositions, interference phenomena, and many-body entanglement. The perturbation of microscopic quantum systems leads to the loss of phase coherence, and, on a macroscopic scale, the emergence of our classical reality. Decoherence is then a detrimental aspect for quantum information systems [28]. However, the interaction of quantum systems with the external degrees of freedom of the environment is unavoidable. The attempt to suppress the coupling between the system and environment has been complemented with the modelling of the effects of these interaction on the reduced system dynamics. While an accurate model can be achieved in simple systems, the computational complexity scales exponentially with the system size. To cope with this, simulation of dissipation is a valuable tool [30]. The common approach consists in adding classical noise to the analog system. In this way the open-quantum-system, that one is interested in simulating, is mapped onto another system, more controllable, the dynamics of which is proven to be analog. Our work also follows this direction by providing a very simple classical analog system, in which aspects of quantum dissipation can be simulated and even visualized [12]. The approach to analog simulation of quantum dissipation via classical noise has a precedent in the context of many-body quantum systems [31], and the relation between classical and quantum noise in open systems dynamics has been thoroughly investigated and described [32; 33; 34]. The novelty of this paper lies in the fact that the system is completely classical. Therefore it constitutes a furthest simplification of the aforementioned results that, moreover, can be readily tested in the lab. ## II The classical two-level atom Let us start from a system of coupled classical mechanical oscillators consisting of two bodies with equal mass \(m\) connected with springs to each other and to the adjacent fixed walls, as depicted in Fig. 1. This system is described by the following linear coupled classical equations of motion for the positions \(x_{1}\) and \(x_{2}\) of the two masses, \[m\ddot{x}_{1}+m\gamma\dot{x}_{1}+\bigg{[}k-\frac{\Delta k(t)}{2} \bigg{]}x_{1}+h\big{(}x_{1}-x_{2}\big{)}=f(t), \tag{1}\] \[m\ddot{x}_{2}+m\gamma\dot{x}_{2}+\bigg{[}k+\frac{\Delta k(t)}{2} \bigg{]}x_{2}+h\big{(}x_{2}-x_{1}\big{)}=0\,, \tag{2}\] where we assumed that the masses \(m\) and the damping coefficient \(\gamma\) of the two oscillators are equal for convenience. The time dependent spring constants \(k_{1,2}(t)=k\mp\Delta k(t)/2\) here account for the parametric driving mechanism that we are going to discuss extensively later. The (linear) coupling between the two oscillators is given by \(h\). The inhomogeneous term on the right hand side of the first equation describes the effect of an external time-dependent deterministic or fluctuating force. This term has the purpose of injecting energy into the system by displacing the oscillator \(x_{1}\) from its equilibrium position. 
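As a side illustration, the following is a minimal numerical sketch of Eqs. (1) and (2) with \(f(t)=0\); all parameter values are illustrative choices for this example only, not values taken from any experiment.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of Eqs. (1)-(2) with f(t) = 0; parameter values are illustrative.
m, k, h, gamma = 1.0, 1.0, 0.05, 1e-3      # mass, outer springs, coupling, damping

def delta_k(t):
    return 0.0                              # spring-constant modulation Delta k(t), switched off here

def eom(t, y):
    x1, x2, v1, v2 = y
    a1 = (-(k - delta_k(t) / 2) * x1 - h * (x1 - x2) - m * gamma * v1) / m
    a2 = (-(k + delta_k(t) / 2) * x2 - h * (x2 - x1) - m * gamma * v2) / m
    return [v1, v2, a1, a2]

# Oscillator 1 starts displaced, mimicking the energy injected by f(t) for t < 0.
sol = solve_ivp(eom, (0.0, 400.0), [1.0, 0.0, 0.0, 0.0], max_step=0.05)
x1, x2 = sol.y[0], sol.y[1]                 # the excitation slowly beats between the two masses
```

With a weak coupling \(h\), the excitation initially placed on the first oscillator slowly beats back and forth between the two masses, which is the classical counterpart of the Rabi oscillations mentioned in the introduction.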
We discuss later what happens when we let the system evolve once it has been initialized in a certain state. Therefore, in the following we consider the time evolution of the system after the initial time \(t=0\) at which the inhomogeneous force stops. In the rest of the paper, we then set the inhomogeneous term \(f(t)=0\). Nonetheless, since we have introduced friction through the \(\gamma\) coefficient, a fluctuating force of thermal origin \(\delta f(t)\) will generally also be present. For simplicity, we delegate the treatment of the more general case \(\delta f(t)\neq 0\) to Appendix A. In any case, the suppression of this thermal noise term can be accomplished with little effort in real systems by lowering the temperature of the thermal bath or by using feedback mechanisms [21; 35]. Following Ref. [8], let us now divide Eqs. (1) and (2) by \(m\) and introduce the quantities \(\omega_{0}^{2}\equiv\frac{k+h}{m}\,,\,\Omega_{c}^{2}\equiv\frac{h}{m}\,,\, \Omega_{d}^{2}(t)\equiv\frac{\Delta k(t)}{2m}\,.\) After this relabeling we can rewrite Eqs. (1) and (2) in the following matrix form, \[\Big{(}\frac{d^{2}}{dt^{2}}+\gamma\frac{d}{dt}+\omega_{0}^{2}\Big{)}\begin{bmatrix} x_{1}\\ x_{2}\end{bmatrix}+\begin{bmatrix}-\Omega_{d}^{2}&-\Omega_{c}^{2}\\ -\Omega_{c}^{2}&\Omega_{d}^{2}\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}. \tag{3}\] Now we make the ansatz \(x_{1,2}(t)=\text{Re}\big{[}\psi_{1,2}(t)\,e^{i\omega_{0}t}\big{]}\). This amounts to factorising the oscillation into an oscillating component at the carrier frequency \(\omega_{0}\) times an amplitude modulation \(\psi_{1,2}(t)\). If we make the further assumption that the envelope \(\psi_{1,2}\) is a slowly varying function of time, we can drop the second derivative in our equations of motion. This has been called the slowly varying envelope approximation (SVEA) [8]. Furthermore, we assumed \(\gamma\ll\omega_{0}\)[8; 11], which is typically the case in state-of-the-art micro/nanomechanics since \(\gamma=\omega_{0}/Q\), where \(Q\) is the quality factor, which usually varies between \(10^{3}\) and \(10^{6}\) for most of the systems of interest [21]. These modifications lead to a matrix equation for the complex-valued amplitudes alone, written in vectorial form, i.e., utilizing the vector \(\psi=\big{[}\psi_{1}\,,\,\psi_{2}\big{]}^{T}\), \[i\dot{\psi}=\text{H}(t)\psi-i\frac{\gamma}{2}\psi=\,\frac{1}{2}\big{[}\Delta \sigma_{x}+\varepsilon(t)\sigma_{z}\big{]}\psi-i\frac{\gamma}{2}\psi\,, \tag{4}\] where \(\Delta\equiv\Omega_{c}^{2}/\omega_{0}\) and \(\varepsilon\equiv\Omega_{d}^{2}/\omega_{0}\). In the absence of friction, \(\gamma=0\), the two-component amplitude equation (4) is analogous to the Schrodinger equation for a driven TLS (taken with \(\hbar=1\)) [37]. It is customary now to introduce a harmonic form for the driving term, and we will specifically use \(\varepsilon(t)=\varepsilon_{0}+D\cos\omega t\,\). It has already been shown that this parametric driving induces a coherent dynamics that quite faithfully resembles that of a quantum TLS. 
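To illustrate the reduced dynamics, here is a second minimal sketch (again our own illustration, with hypothetical parameter values) that integrates the SVEA amplitude equation (4) directly; with a resonant harmonic drive the population of the second mode undergoes slow Rabi-like oscillations, which can be compared against the envelope of the full simulation of Eqs. (1)-(2) above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters in units of Delta (hypothetical)
Delta, eps0, D, gamma = 1.0, 0.0, 0.2, 0.0
omega = np.sqrt(Delta**2 + eps0**2)      # drive on resonance with the level splitting

def rhs(t, y):
    psi = y[:2] + 1j * y[2:]
    eps = eps0 + D * np.cos(omega * t)
    H = 0.5 * np.array([[eps, Delta], [Delta, -eps]], dtype=complex)   # Eq. (4)
    dpsi = -1j * H @ psi - 0.5 * gamma * psi
    return np.concatenate([dpsi.real, dpsi.imag])

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0, 0.0, 0.0], max_step=0.01)
pop2 = sol.y[1]**2 + sol.y[3]**2          # |psi_2|^2: slow Rabi-like oscillation
```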
In fact, we could introduce either density matrix or Bloch vector (BV) equivalent representations [38; 39; 40], with the definitions \[\rho=\begin{bmatrix}|\psi_{1}|^{2}&\psi_{1}^{*}\psi_{2}\\ \psi_{2}^{*}\psi_{1}&|\psi_{2}|^{2}\end{bmatrix},\quad\mathbf{r}=\begin{bmatrix} \psi_{1}^{*}\psi_{2}+\psi_{2}^{*}\psi_{1}\\ i\big{(}\psi_{2}^{*}\psi_{1}-\psi_{1}^{*}\psi_{2}\big{)}\\ |\psi_{2}|^{2}-|\psi_{1}|^{2}\end{bmatrix}\,. \tag{5}\] It must be noted, however, that in the general case neither does \(\psi\) have unit modulus nor does \(\rho\) have unit trace, nor is the dynamical map of Eq. (4) trace preserving. The modulus or trace of the state would depend on the energy fed into the system by the inhomogeneous driving \(f(t)\) for \(t<0\). From \(t=0\) onward, the only source of damping of energy (probability, in the quantum analogy) is the trivial non-Hermitian part of the Hamiltonian. We are going to remove this part in the next section by a simple coordinate transformation. By contrast, the Hamiltonian with \(\gamma=0\) is conservative in the classical language and Hermitian in the quantum analogy, so that part is energy/trace conserving. These _caveats_ notwithstanding, we call \(\psi\) and \(\rho\) states nonetheless.

Figure 1: Schematics of the classical-mechanical model under consideration. Two masses \(m\) are connected through a spring with the time-independent spring constant \(h\), and each of them is connected to the neighbouring wall by springs with time-dependent spring constants \(k_{1}(t)\) and \(k_{2}(t)\), with \(k_{2}(t)-k_{1}(t)=\Delta k(t)\,\). We also include friction by means of a damping coefficient \(\gamma\) equal for both masses. Moreover, we include an external driving force \(f(t)\). This external driving has the purpose of initially feeding energy into the system that is then driven parametrically [23; 36], via the modulation of the spring constants.

Since the Hamiltonian in Eq. (4) can be written in the form \(\mathrm{H}(t)=\mathbf{B}(t)\cdot\mathbf{\sigma}\), this quantum-classical analogy leads to the so-called classical Bloch equations (BE) [8], i.e., \[\dot{\mathbf{r}}=\mathbf{B}(t)\times\mathbf{r}-\gamma\mathbf{r}\,. \tag{6}\] Here \(\mathbf{B}(t)=\left[\,\Delta\,,\,0\,,\,\varepsilon(t)\right]^{T}\) is the magnetic field vector and its modulus \(B\) is the angular velocity with which the Bloch vector precesses. In a magnetic resonance description this describes the Larmor precession of a spin with Bloch vector \(\mathbf{r}\) around a magnetic field along \(\mathbf{B}\). The resemblance to quantum dynamics shown by Eq. (6) is striking, considering that it has been derived by purely classical means. However, there is one evident issue in these equations, which makes the classical simulation less faithful, i.e., that the relaxation term does not reproduce the general relaxation present, e.g., in dissipative spin dynamics. More precisely, we know from the theory of either light-matter interaction or magnetic resonance that the components of the BV relax with two characteristic times [28; 39; 40]: \(T_{2}\) is the relaxation time of the \(x\) and \(y\) components of the BV, hence the ones containing the coherences, while \(T_{1}\) is the relaxation time for the \(z\) component, related to level populations in the quantum language. In the following, we will explain how this hallmark quantum feature can be implemented in the aforementioned analogy. 
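The limitation just described is easy to see numerically. The following short sketch (our own illustration, with hypothetical parameters) integrates the classical Bloch equation (6) for a static field and shows that \(|\mathbf{r}(t)|=e^{-\gamma t}|\mathbf{r}(0)|\), i.e. every component of the Bloch vector decays with the same rate, so the analog TLS has \(T_{1}=T_{2}=1/\gamma\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (hypothetical): static field, weak isotropic damping
Delta, eps0, gamma = 1.0, 0.5, 0.05
B = np.array([Delta, 0.0, eps0])

def bloch(t, r):
    return np.cross(B, r) - gamma * r      # Eq. (6) with a time-independent field

sol = solve_ivp(bloch, (0.0, 100.0), [0.0, 0.0, 1.0], max_step=0.01)
norm = np.linalg.norm(sol.y, axis=0)       # equals exp(-gamma*t): T1 = T2 = 1/gamma
```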
## III Fluctuations in the driving We make a change of coordinates to go into the _diabatic_ basis, i.e. \(\psi^{\prime}=\begin{bmatrix}\psi_{+}\\ \psi_{-}\end{bmatrix}\equiv\exp\Bigl{(}\frac{i\theta}{2}\sigma_{y}\Bigr{)} \begin{bmatrix}\psi_{1}\\ \psi_{2}\end{bmatrix}\). After this rotation by the angle \(\theta=\arctan\left[\Delta/\varepsilon_{0}\right]\), the Schrodinger equation takes the form \[i\dot{\psi}^{\prime}=\frac{1}{2}\biggl{[}\Omega\sigma_{z}+\frac{\varepsilon_{ 0}}{\Omega}\varepsilon^{\prime}(t)\sigma_{z}-\frac{\Delta}{\Omega}\varepsilon ^{\prime}(t)\sigma_{x}\biggr{]}\psi^{\prime}-i\frac{\gamma}{2}\psi^{\prime}\,, \tag{7}\] where \(\varepsilon^{\prime}(t)=D\cos(\omega t)\) and \(\Omega=\sqrt{\varepsilon_{0}^{2}+\Delta^{2}}\). It is also useful to renormalize the state vector in order to remove the trivial decay with rate \(\gamma/2\) and thus to ensure a normalized state vector, as customarily required by the quantum analogy. Therefore, we rename \(\psi_{\pm}\to\mathcal{N}\psi_{\pm}^{\prime}\,e^{-\gamma t/2}\), where \(\mathcal{N}[E(0)]\) depends on \(E(0)=\int_{-\infty}^{0}f(t)\dot{x}_{1}(t)\,dt\), i.e., the energy fed into the system before our simulation has started. Our aim is to add fluctuations to the dynamics of the coupled-oscillator system. We accomplish this by adding a stochastic term to the driving. Along this route, we write the driving term \(\varepsilon(t)\) with an additive Langevin force (which is actually a multiplicative noise term for the stochastic problem [41], since it multiplies the dynamical variable), \[\varepsilon^{\prime}(t)=D\cos\left(\omega t\right)+\Gamma_{d}(t)\,. \tag{8}\] Here, \(\Gamma_{d}\) has the dimension of a frequency. For the noise term we consider a stationary Gaussian Ornstein-Uhlenbeck process [29; 41; 42] such that: \[\langle\Gamma_{d}(t)\rangle=0\,,\qquad\langle\Gamma_{d}(0)\Gamma_{d}(t)\rangle =\frac{G}{\tau_{c}}\exp\left(-\frac{|t|}{\tau_{c}}\right),\] where \(G\) is the noise strength. Therefore, we have to solve the following coupled stochastic differential equations [43], \[i\dot{\psi} =\Bigl{[}\mathrm{H}_{0}+\mathrm{H}_{1}(t)\Bigr{]}\psi \tag{9}\] \[=\frac{1}{2}\biggl{[}\Omega\sigma_{z}+\frac{D\cos\left(\omega t \right)+\Gamma_{d}(t)}{\Omega}\bigl{(}\varepsilon_{0}\sigma_{z}-\Delta\sigma_ {x}\bigr{)}\biggr{]}\psi\,,\] where \(\mathrm{H}_{0}=\frac{1}{2}\Omega\sigma_{z}\) and \(\mathrm{H}_{1}(t)=\mathrm{H}_{d}(t)+\mathrm{H}_{s}(t)\) contains the time-dependent parts of the Hamiltonian, both deterministic, \(\mathrm{H}_{d}(t)\propto\cos(\omega t)\), and stochastic, \(\mathrm{H}_{s}(t)\propto\Gamma_{d}(t)\). In order to solve this system of stochastic equations we rely on the cumulant expansion method. This method was pioneered by Kubo [44] and then refined by others [45; 46; 47; 48; 49; 50; 51]. These tools have been applied to magnetic resonance in molecular samples [52]. Particularly useful will also be the extension to coloured noise with correlations between the longitudinal and transverse components [53; 54]. The aforementioned stochastic formalism allows one to replace the full stochastic differential equation with a differential equation for the averages of the stochastic variables, or of their higher moments. Since we are interested in the motion of the BV, we will need the second moments of \(\psi_{\pm}\). This leads us straightforwardly to the stochastic analog of the BE. 
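Before turning to the cumulant expansion, the stochastic problem (9) can also be attacked by brute force: generate Ornstein-Uhlenbeck realizations with the autocorrelation given above and average many trajectories of Eq. (9). The sketch below is our own illustration (the numerical parameters are hypothetical, chosen to mirror the regime \(\varepsilon_{0}=1.5\Delta\), \(G\tau_{c}=0.25\) used in Fig. 2(a)); the averaged polarization decays on the timescale predicted by the Redfield rates derived below.

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_trajectory(n_steps, dt, G, tau_c):
    """OU noise with <Gamma(0)Gamma(t)> = (G/tau_c) exp(-|t|/tau_c)."""
    sigma, decay = np.sqrt(G / tau_c), np.exp(-dt / tau_c)
    gam = np.empty(n_steps)
    gam[0] = sigma * rng.standard_normal()
    for n in range(1, n_steps):
        gam[n] = gam[n - 1] * decay + sigma * np.sqrt(1 - decay**2) * rng.standard_normal()
    return gam

Delta, eps0, D, G, tau_c = 1.0, 1.5, 0.0, 0.5, 0.5     # so that G*tau_c = 0.25
Omega = np.sqrt(eps0**2 + Delta**2)
dt, n_steps, n_traj = 0.005, 8000, 200
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

r_z = np.zeros(n_steps)
for _ in range(n_traj):
    psi = np.array([1.0, 0.0], dtype=complex)          # start in a diabatic eigenstate
    noise = ou_trajectory(n_steps, dt, G, tau_c)
    for n in range(n_steps):
        eps_t = D * np.cos(Omega * n * dt) + noise[n]
        H = 0.5 * (Omega * sz + eps_t / Omega * (eps0 * sz - Delta * sx))   # Eq. (9)
        psi = psi - 1j * dt * (H @ psi)                 # crude Euler step (sketch only)
        psi /= np.linalg.norm(psi)                      # single trajectories stay pure
        r_z[n] += (abs(psi[1])**2 - abs(psi[0])**2) / n_traj
# r_z relaxes towards 0 with the T1 given by Eqs. (24)-(31) below.
```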
Since we need to arrive at an equation of motion for the average of the BV, this will contain the second moments of the stochastic variables \(\psi_{\pm}\). The effective density matrix, with obvious changes from Eq. (5), is now \(\rho(t)=\sum_{i,j=\pm}\psi_{i}^{*}\psi_{j}\,|i\rangle\langle j|\). ### Magnetic resonance analogy Now we are going to exploit the formal analogy between Eq. (6) and the equations of magnetic resonance of a single spin in radio-frequency magnetic spectroscopy. This theory has been outlined in [52; 53; 54; 55; 40; 56]. It has lately been applied to studies of decoherence problems in qubits [57; 58; 59; 60]. The approach goes as follows. From Eq. (4) we can write a von Neumann equation for the density matrix defined above, i.e. \(i\dot{\rho}(t)=\bigl{[}\mathrm{H}(t),\rho(t)\bigr{]}=\mathcal{L}(t)\rho(t)\). Switching to the interaction picture we remove the deterministic part of the Hamiltonian for the time being. Thus we set \(\rho^{\prime}(t)=\mathrm{U}^{\dagger}(t)\rho(t)\mathrm{U}(t)\) and \(\mathrm{H}^{\prime}(t)=\mathrm{H}^{\prime}_{s}(t)\), where \(\mathrm{H}_{s}(t)\) is the stochastic-only part of the starting Hamiltonian (see Eq. (9)) and \(\mathrm{U}(t)\) is defined by \(\frac{\mathrm{d}\mathrm{U}(t)}{dt}=-i\langle\mathrm{H}(t)\rangle\mathrm{U}(t)\) with \(\mathrm{U}(0)=\openone\)[40]. The previous equation then becomes [61] \[i\dot{\rho}^{\prime}(t)=\mathcal{L}^{\prime}(t)\rho^{\prime}(t)=\big{[} \mathrm{H}^{\prime}_{s}(t),\rho^{\prime}(t)\big{]}\,. \tag{10}\] Now we seek a solution by iteration [43; 50]: we write \[\rho^{\prime}(t)=\rho(0)-i\int_{0}^{t}\mathcal{L}^{\prime}(t_{1})\rho(0)dt_{1} \tag{11}\] \[-\int_{0}^{t}\int_{0}^{t_{1}}\mathcal{L}^{\prime}(t_{1})\mathcal{L}^{\prime}(t_{2})\rho^{\prime}(t_{2})dt_{1}dt_{2}\,. \tag{12}\] Iterating in this way we arrive at [43; 50] \[\rho^{\prime}(t)=\mathbb{Y}(t|0)\rho(0)\implies\langle\rho^{\prime}(t)\rangle=\langle\mathbb{Y}(t|0)\rangle\rho(0), \tag{13}\] since \(\rho(0)=\rho^{\prime}(0)\) is not random, and where we have introduced the non-local kernel \(\mathbb{Y}(t|0)=\openone+\sum_{n=1}^{+\infty}(-i)^{n}\int\cdots\int\mathcal{L }^{\prime}(t_{1})\ldots\mathcal{L}^{\prime}(t_{n})dt_{1}\ldots dt_{n}\). Differentiating and assuming that \(\langle\mathbb{Y}(t|0)\rangle\) is invertible, we obtain \[\langle\dot{\rho}^{\prime}(t)\rangle=\langle\dot{\mathbb{Y}}(t|0)\rangle\rho( 0)=\langle\dot{\mathbb{Y}}(t|0)\rangle\langle\mathbb{Y}(t|0)\rangle^{-1} \langle\rho^{\prime}(t)\rangle, \tag{14}\] where \(\mathbb{K}^{\prime}(t)\equiv\langle\dot{\mathbb{Y}}(t|0)\rangle\langle\mathbb{Y}(t|0)\rangle^{-1}\) is a non-stochastic superoperator by construction, since it connects averaged quantities. We expand \(\mathbb{K}^{\prime}(t)\) in orders of \(G\) and truncate this series at the second order. Utilizing \(\langle\mathcal{L}_{s}(t)\rangle=0\) we see that the cumulants indeed simplify to the moments of \(\mathcal{L}_{s}(t)\), \[\langle\dot{\rho}^{\prime}(t)\rangle =\mathbb{K}^{\mathrm{II}}(t)\langle\rho^{\prime}(t)\rangle\,, \tag{15}\] \[\mathbb{K}^{\mathrm{II}}(t) =-\int_{0}^{t}dt^{\prime}\langle\mathcal{L}^{\prime}_{s}(t)\mathcal{L}^{\prime}_{s}(t-t^{\prime})\rangle.\] This is equivalent to the Born approximation in open quantum systems. For that to be a good approximation it is sufficient that \(|\mathcal{L}_{s}(t)|\tau_{c}\ll 1\), since this is the relative error between successive orders. 
The above requirement is equivalent to demanding that the time scales of the noise and of the deterministic evolution are well separated, i.e., that \(\langle\mathcal{L}(t)\rangle\) varies significantly on a time scale which is much slower than the noise memory time. Therefore, we will say that the truncated series is a coarse grained description of the full dynamics of \(\rho(t)\). Since the time scales are well separated and the noise spectrum exponentially decays we can extend the limit of integration to \(+\infty\), therefore we can see that our expansion is in fact an expansion in \(G\tau_{c}\ll 1\), since \(\tau_{c}\) is the width of the interval where the integrand gives an important contribution. The extension of the integration interval leads to: \[\langle\dot{\rho}^{\prime}(t)\rangle =\mathbb{K}^{\mathrm{II}}(+\infty)(\rho^{\prime}(t))\] \[=-\int_{0}^{+\infty}dt^{\prime}(\mathcal{L}^{\prime}_{s}(t) \mathcal{L}^{\prime}_{s}(t-t^{\prime}))\langle\rho^{\prime}(t)\rangle\,. \tag{16}\] This is the Markov approximation of our non-Markovian process. We arrive at: \[\langle\dot{\rho}(t)\rangle=-i\big{[}(\mathrm{H}(t)),\langle\rho(t )\rangle\big{]}-\int_{0}^{+\infty}\langle\big{[}\mathrm{H}_{s}(t), \tag{17}\] \[\big{[}\mathrm{U}^{\dagger}(t-t^{\prime},t)\mathrm{H}_{s}(t-t^{ \prime})\mathrm{U}(t-t^{\prime},t),\langle\rho(t)\rangle\big{]}\big{]}\big{]} dt^{\prime}\,,\] where \(\mathrm{U}(t-t^{\prime},t)\equiv\mathrm{U}(t-t^{\prime})\mathrm{U}^{\dagger}(t)\). We approximate \(\mathrm{U}(t-t^{\prime},t)\) with \(\exp\big{(}i\mathrm{H}_{0}\,t^{\prime}\big{)}\), which is justified if \(D\tau_{c}\ll 1\)[40]. That means basically that in the relaxation term of the second order cumulant equation the time evolution operator is \(\mathrm{U}(t)=\exp\!\big{(}-i\mathrm{H}_{0}t\big{)}\) (to show this it is sufficient to plug in \(t^{\prime}=t\) in the previous relation and invert). This procedure is customary in radio frequency magnetic resonance in liquids and it is usually called the nonviscous-liquid approximation [40]. To compute the relaxation times we will make use of [40]: \[\langle\dot{\rho}(t)\rangle+i\big{[}(\mathrm{H}(t)),\langle\rho(t )\rangle\big{]}\approx -\int_{0}^{+\infty}\langle\big{[}\mathrm{H}_{s}(t),\big{[}e^{-i \mathrm{H}_{0}t^{\prime}}\mathrm{H}_{s}(t-t^{\prime})e^{i\mathrm{H}_{0}t^{ \prime}},\langle\rho(t)\rangle\big{]}\big{]}\big{]}dt^{\prime} \tag{18}\] \[= -e^{-i\mathrm{H}_{0}t}\int_{0}^{+\infty}dt^{\prime}\langle\big{[} \mathrm{H}_{s}^{*}(t),\big{[}\mathrm{H}_{s}^{*}(t-t^{\prime}),\langle\rho^{*}(t )\rangle\big{]}\big{]}\rangle e^{i\mathrm{H}_{0}t}\approx\mathrm{U}(t) \langle\dot{\rho}^{\prime}(t)\rangle\mathrm{U}^{\dagger}(t)\,.\] This basically means that \(\langle\dot{\rho}^{\prime}(t)\rangle(t)\approx\langle\dot{\rho}^{*}(t)\rangle(t)\) if we remember to sum the correct first order term when we turn back to the Schrodinger picture. Therefore, our approximation is equivalent to assuming \(\mathbb{K}^{\mathrm{II}}(+\infty)\approx-\int_{0}^{+\infty}dt^{\prime}(\mathcal{L }^{*}_{s}(t)\mathcal{L}^{*}_{s}(t-t^{\prime}))\) (please note that this expression still depends on \(t\)) in Eq. (15), the superoperator \(\mathcal{L}^{*}_{s}=\big{[}e^{i\mathrm{H}_{0}t}\mathrm{H}_{s}(t)e^{-i\mathrm{ H}_{0}t},\,\circ\big{]}\) being \[\mathcal{L}^{*}_{s}(t)= \frac{\Gamma_{d}(t)}{2\Omega}\big{\{}\varepsilon_{0}\big{[} \sigma_{z},\,\circ\big{]}-\Delta e^{-i\Omega t}\big{[}\sigma_{z},\,\circ\big{]}\] \[-\Delta e^{i\Omega t}\big{[}\sigma_{-},\,\circ\big{]}\big{\}}\,. 
\tag{19}\] Now we split the density matrix in its spin components \(\{\sigma_{z},\sigma_{+},\sigma_{-}\}\), i.e. \(\mathbf{r}=\mathrm{Tr}\big{[}\rho^{\mathrm{S}}(t)\mathbf{\sigma}\big{]}=\mathrm{Tr}\big{[}\rho^{\mathrm{H}}(0)\mathbf{\sigma}(t)\big{]}\). The equations of motion for \(\mathrm{r}_{\alpha}(t)\), with \(\alpha=+,-,0\) (we renamed \(\mathrm{r}_{z}\) as \(\mathrm{r}_{0}\)), are then (averages of these components are intended, but we do not write them to ease the notation), \[2\Omega^{2}\,\dot{r}_{+}^{\prime}(t)= -\big{[}2\varepsilon_{0}^{2}k_{0}+\Delta^{2}k_{-}\big{]}\mathsf{r}_{+}^{\prime} \tag{20}\] \[+\Delta^{2}k_{+}e^{-i2\Omega t}\mathsf{r}_{-}^{\prime}-2\Delta\varepsilon_{0}k_{+}e^{-i\Omega t}\mathsf{r}_{0}^{\prime}\,,\] \[2\Omega^{2}\,\dot{r}_{-}^{\prime}(t)= -\big{[}2\varepsilon_{0}^{2}k_{0}+\Delta^{2}k_{+}\big{]}\mathsf{r}_{-}^{\prime} \tag{21}\] \[+\Delta^{2}k_{-}e^{i2\Omega t}\mathsf{r}_{+}^{\prime}-2\Delta\varepsilon_{0}k_{-}e^{i\Omega t}\mathsf{r}_{0}^{\prime}\,,\] \[2\Omega^{2}\,\dot{r}_{0}^{\prime}(t)= -\Delta^{2}\big{[}k_{-}+k_{+}\big{]}\mathsf{r}_{0}^{\prime} \tag{22}\] \[-\Delta\varepsilon_{0}k_{0}e^{i\Omega t}\mathsf{r}_{+}^{\prime}-\Delta\varepsilon_{0}k_{0}e^{-i\Omega t}\mathsf{r}_{-}^{\prime}\,.\] Here we wrote \(k_{\alpha}=\int_{0}^{+\infty}\langle\Gamma_{d}(t)\Gamma_{d}(t-t^{\prime})\rangle e^{i\alpha\Omega t^{\prime}}dt^{\prime}=G\,\frac{1+i\alpha\Omega\tau_{c}}{\alpha^{2}\Omega^{2}\tau_{c}^{2}+1}\), where the subscripts \(\pm\) stand for \(\alpha=\pm 1\), so that \(\alpha\in\{-1,\,0\,,\,1\}\). Furthermore, we dropped the subscript of \(\varepsilon_{0}\). We rename these terms using Redfield's notation [55], \[\langle\dot{\mathsf{r}}_{\alpha}^{\prime}(t)\rangle=-\sum_{\beta}\exp\!\big{[}i(\beta-\alpha)\Omega t\big{]}\mathcal{R}_{\alpha,\beta}\,\langle\mathsf{r}_{\beta}^{\prime}(t)\rangle. \tag{23}\] Now Eqs. (20)-(23) can be solved perturbatively after Laplace-transforming them. In this way we finally obtain the relaxation times and the Lamb shift \(\delta\Omega\) in closed form. These are found to be: \[T_{1}^{-1}=\mathcal{R}_{0,0}-2\,\mathrm{Re}\Bigg{(}\frac{\mathcal{R}_{0,+}\mathcal{R}_{+,0}}{i\Omega+\mathcal{R}_{+,+}}\Bigg{)}, \tag{24}\] \[T_{2}^{-1}=\mathrm{Re}\Bigg{(}\mathcal{R}_{+,+}+\frac{\mathcal{R}_{0,+}\mathcal{R}_{+,0}}{i\Omega-\mathcal{R}_{0,0}}\Bigg{)}, \tag{25}\] \[\delta\Omega=\mathrm{Im}\Bigg{(}\mathcal{R}_{+,+}+\frac{\mathcal{R}_{0,+}\mathcal{R}_{+,0}}{i\Omega-\mathcal{R}_{0,0}}\Bigg{)}-\frac{\big{|}\mathcal{R}_{+,-}\big{|}^{2}}{2\Omega}. \tag{26}\] These results agree with the ones reported by Refs. [53; 54]. Here, the \(\mathcal{R}_{i,j}\) constants are given by: \[\mathcal{R}_{0,0}=\frac{G\,\Delta^{2}}{\Omega^{2}(1+\Omega^{2}\tau_{c}^{2})}, \tag{27}\] \[\mathcal{R}_{0,+}=\frac{G\,\Delta\varepsilon_{0}}{2\Omega^{2}}, \tag{28}\] \[\mathcal{R}_{+,0}=\frac{G\,\Delta\varepsilon_{0}(1+i\Omega\tau_{c})}{\Omega^{2}(1+\Omega^{2}\tau_{c}^{2})}, \tag{29}\] \[\mathcal{R}_{+,+}=\frac{G}{2}\left[\frac{2\varepsilon_{0}^{2}}{\Omega^{2}}+\frac{\Delta^{2}(1-i\Omega\tau_{c})}{\Omega^{2}(1+\Omega^{2}\tau_{c}^{2})}\right], \tag{30}\] \[\mathcal{R}_{+,-}=-\frac{G\,\Delta^{2}(1+i\Omega\tau_{c})}{2\Omega^{2}(1+\Omega^{2}\tau_{c}^{2})}. \tag{31}\] Now substituting these results into Eqs. (24), (25), and (26), we see that correlations turn out to be fourth order corrections in those expressions for the relaxation times and the frequency shift. 
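For quick numerical evaluation, the closed-form rates (24)-(31) translate directly into a few lines of code. The sketch below is our own illustration (the parameter values are hypothetical, chosen to match the regime of Fig. 2(a)); it returns \(T_{1}\), \(T_{2}\) and the pure dephasing time \(T_{\phi}\) defined in Eq. (32) below.

```python
import numpy as np

def relaxation_times(G, tau_c, eps0, Delta):
    """Evaluate Eqs. (24)-(31); all arguments in units of Delta."""
    Om = np.sqrt(eps0**2 + Delta**2)
    den = Om**2 * (1 + (Om * tau_c)**2)
    R00 = G * Delta**2 / den                                        # Eq. (27)
    R0p = G * Delta * eps0 / (2 * Om**2)                            # Eq. (28)
    Rp0 = G * Delta * eps0 * (1 + 1j * Om * tau_c) / den            # Eq. (29)
    Rpp = G / 2 * (2 * eps0**2 / Om**2
                   + Delta**2 * (1 - 1j * Om * tau_c) / den)        # Eq. (30)
    T1 = 1.0 / (R00 - 2 * np.real(R0p * Rp0 / (1j * Om + Rpp)))     # Eq. (24)
    T2 = 1.0 / np.real(Rpp + R0p * Rp0 / (1j * Om - R00))           # Eq. (25)
    Tphi = 1.0 / (1.0 / T2 - 1.0 / (2 * T1))                        # Eq. (32)
    return T1, T2, Tphi

print(relaxation_times(G=0.5, tau_c=0.5, eps0=1.5, Delta=1.0))      # T2 < 2*T1
```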
In Appendix B we show that the fourth-order term of the cumulant expansion (15) vanishes due to the Gaussianity of the noise process and its zero average. In Fig. 2 we plot \(\mathrm{Re}[\mathsf{r}_{+}^{\prime}]\) (purple) and \(\mathsf{r}_{z}^{\prime}\) (orange) in the interaction picture as a function of time. The solid lines in the plot are the numerical solution of Eq. (20) while the dashed exponentials are the result of the perturbative solution given by Eqs. (24) and (25). The gray and black dotted lines are the solution obtained by discarding the correlations from the beginning, or by applying the secular approximation to Eq. (23). Equations (24), (25), and (23) imply that \(T_{2}^{-1}\neq(2T_{1})^{-1}\), the difference amounting to the pure dephasing rate \[T_{\phi}^{-1}=(T_{2})^{-1}-(2T_{1})^{-1}\approx\frac{G\varepsilon_{0}^{2}}{ \Omega^{2}}+\frac{G^{2}\Delta^{2}\varepsilon_{0}^{2}}{\Omega^{5}(1+\Omega^{2} \tau_{c}^{2})}\,, \tag{32}\] where we neglected \(\mathcal{R}_{0,0}\) and \(\mathcal{R}_{+,+}\) in the denominators of Eqs. (24) and (25). This approximate analytical result and the graphs of Fig. 2, showing exponential decay with different characteristic times, demonstrate that the addition of noise to the parametric driving of this coupled classical oscillator system induces a dephasing dynamics, resolving the problem of equal relaxation times pointed out in the previous models of Refs. [8; 11]. In Fig. 3 we show the BV dynamics on the unit Bloch sphere. These trajectories are plotted using Eq. (20) after transforming to the lab frame (x,y,z) and to the Schrodinger picture. Both panels represent the time evolution of the BV for the resonantly driven TLS. The three different initial conditions give rise to three trajectories on the sphere: \(\mathbf{r}(0)=\left[1/\sqrt{3}\,,\,1/\sqrt{3}\,,\,1/\sqrt{3}\right]^{T}\) (purple), \(\mathbf{r}(0)=\left[1/\sqrt{2}\,,\,1/\sqrt{2}\,,\,0\right]^{T}\) (yellow), \(\mathbf{r}(0)=\left[0\,,\,0\,,\,-1\right]^{T}\) (blue). Fig. 3(a) shows the system subject to noise with a memory time, \(G\tau_{c}=0.15\), and Fig. 3(b) the case of memoryless noise, i.e. \(G\tau_{c}=0\).

Figure 2: Decay of the polarization \(\mathsf{r}_{z}^{\prime}=\mathrm{Tr}(\sigma_{z}\rho)\) of the Bloch vector (orange), and of the coherences \(\mathrm{Re}(\mathsf{r}_{+}^{\prime})=\mathrm{Tr}(\sigma_{+}\rho)\) (purple), for \(\varepsilon_{0}=1.5\Delta,G=0.5\Delta\). Solid lines are the plot of the numerical solution to the Redfield equation (23). Dashed colored lines picture the exponential decay with times given by Eqs. (24),(25) (\(\mathsf{r}_{i}^{\prime}(0)=\mathsf{r}_{i}(0)\)). Dotted gray and black lines represent the corresponding plot neglecting the oscillating terms on the right hand side of Eq. (20) (secular approximation). (a) Correlated noise, \(G\tau_{c}=0.25\), and (b) white noise limit, \(G\tau_{c}=0\).

### Why infinite temperature? It has already been shown by Kubo [62] that the stationary states of systems governed by stochastic Hamiltonians are the infinite temperature states, i.e. \(\rho\left(t\rightarrow+\infty\right)\propto\openone\). As is evident from Fig. 
3, the BV of our system always collapses to the center of the sphere, i.e. the infinite-temperature state, so the previous result applies here as well. Hereafter we explain why this is the case for our system, following [33]. We recast the Redfield equation in a Kossakowski-Lindblad form [63]. Since we have seen in this section that the oscillating terms do not give an important contribution to the decay of the BV, we ignore those terms here, therefore applying a secular approximation. The resulting master equation, under the Born-Markov approximation, from Eqs. (15) and (19) is then \[\langle\dot{\rho}^{\prime}(t)\rangle=\mathbb{K}^{\mathrm{II}}(t)\langle\rho^{\prime}(t)\rangle\approx-\frac{G^{2}}{4\Omega^{2}}\Big{(}\varepsilon_{0}^{2}k_{0}[\sigma_{z},[\sigma_{z},\langle\rho^{\prime}(t)\rangle]]+\Delta^{2}k_{+}[\sigma_{+},[\sigma_{-},\langle\rho^{\prime}\rangle]]+\Delta^{2}k_{-}[\sigma_{-},[\sigma_{+},\langle\rho^{\prime}\rangle]]\Big{)}. \tag{33}\] Then, we transform back to the Schrodinger picture and apply the secular approximation, \[\langle\dot{\rho}\rangle\approx-i\Big{[}\langle\mathrm{H}\rangle,\langle\rho\rangle\Big{]}-\frac{G^{2}}{4\Omega^{2}}\Big{[}2\varepsilon_{0}^{2}k_{0}(\langle\rho\rangle-\sigma_{z}\langle\rho\rangle\sigma_{z})+\Delta^{2}\mathrm{Re}(k_{+})(\langle\rho\rangle-2\sigma_{+}\langle\rho\rangle\sigma_{-}-2\sigma_{-}\langle\rho\rangle\sigma_{+})+i\Delta^{2}\mathrm{Im}(k_{+})[\sigma_{z},\langle\rho\rangle]\Big{]}. \tag{34}\] Here, the first term in the square brackets describes dephasing, the second term leads to quantum jumps, and the third term is the Lamb shift. Now, since the rates for upward and downward transitions are the same, i.e. \(\gamma_{\pm}=G^{2}\Delta^{2}\mathrm{Re}(k_{+})/2\Omega^{2}\), we can ascribe an infinite effective temperature to the system, since we do not have detailed balance [64; 33]. Clearly this is not a true temperature, since we do not describe a thermal bath with which the system could thermalize. Nonetheless, it is a nontrivial consequence of our model and indeed an interesting limit of the discussed quantum-classical analogy. ## IV Proposals for experimental tests We now briefly analyze some experimental setups in which the discussed quantum analog could be implemented. Among the many possible experimental test-beds we chose levitated nanoparticles and nanomechanical string resonators, since these systems are currently attracting much interest thanks to the great degree of control and isolation from the environment that they guarantee [65; 66; 22]. Nonetheless, a realization with macroscopic setups could also be within reach [12], favouring the possibility of a macroscopic quantum analogy of unprecedented depth. ### Levitated nanoparticles In levitation experiments dielectric particles are trapped in the focus of a laser beam, using radiation pressure. The harmonic modes of oscillation are the center-of-mass coordinates of the particle, with the equilibrium position being the focus of the laser. The oscillation along the laser beam can be frozen so that the motion is effectively restricted to the two modes belonging to the focal plane. The two modes have different eigenfrequencies due to the polarization of the laser, which creates an elliptically shaped potential well. These modes can be coupled by harmonically varying the polarization direction of the trapping laser [14; 9]. Therefore, periodically varying the polarization angle according to \(\theta=\delta\cos(\omega t+\varphi)\) realizes a parametric driving of the coupled oscillator system. 
When the frequency of this harmonic driving is in resonance with the mode splitting, the strong coupling regime can be reached and Rabi oscillations can be implemented in which the energy can be exchanged coherently between the modes. Nonetheless, in order to obtain a dephasing dynamics in our system it is essential to have an off-set parameter such as \(\varepsilon_{0}\) in Eq. (9) that shifts the working point of the system slightly off the center of the avoided crossing. This makes the driving term appear as \(\theta=\delta\cos(\omega t+\varphi)+\delta_{0}\), with \(\delta_{0}\sim\delta\). Thus, one needs to be able to induce some asymmetry between the mode eigenfrequencies independently of the coupling/driving mechanism. Otherwise a shift of the polarization angle of the trapping laser in this system will only lead to a redefinition of the mode frequencies. However, in these system the modes eigenfrequency are defined by the polarization of the laser [67]. Therefore, we think that in this setup the effect described here cannot be observed. We cannot exclude that more complicated configurations exist, in which this mode frequency asymmetry can be adiabatically tuned, independently from the driving. In this way, taking the equation (8) from Ref. [14] we can map it to an equation similar to ours by the coordinate transformation \(\begin{bmatrix}a^{\prime}\\ b^{\prime}\end{bmatrix}=\exp(i\frac{\theta}{2}\sigma_{y})\begin{bmatrix}a\\ b\end{bmatrix}\), where \(\theta=\arctan(\delta_{0}/\omega_{\delta})\). ### Nanomechanical string resonators Considering nanoscale string resonators we propose exploiting the dielectric protocols for coupling and driving the in-plane and out-of-plane modes of a nanomechanical doubly clamped beam [13; 15; 16; 18; 68; 69; 70]. On the other hand, similar physics can be implemented in nanomechanical systems with a piezoelectrical driving setup [71; 72; 73]. Considering a doubly clamped string resonator, we have that the two modes (in-plane and out-of-plane) have different eigenfrequencies, that is \(k_{1}\neq k_{2}\), due to the rectangular cross-section. The oscillations of these two modes will be driven dielectrically. The parametric driving term can be written as \(\Delta k\approx-\frac{\epsilon_{0}(\varepsilon_{d}-1)}{2\pi}\alpha^{2}LF(x) \big{[}V_{dc}^{2}+2V_{dc}V_{ac}(t)\big{]}=C\big{[}V_{dc}^{2}+2V_{dc}A\cos( \omega t)\big{]}\)[69], where \(\epsilon_{0}\) and \(\epsilon_{d}\) are the dielectric permittivity of vacuum and of the dielectric composing the beam (e.g. SiN), \(L\) is the total length of the beam, \(\alpha\) is the attenuation parameter of the electric field inside the dielectric and \(F(x)\) depends on the geometry of the device. In this way the system is governed by equations of motion completely analogous to Eq. (2), where, in the language of Ref. [13], the force \(f(t)\) is the radiofrequency drive that initializes the system, \(h\) is the coupling provided by the cross derivatives of the electric field generated by the gold electrodes, and \(\omega_{1}=\sqrt{2k_{1}/m}\) and \(\omega_{2}=\sqrt{2k_{2}/m}\) are the frequencies of the in-plane and out-of-plane modes, where \(m\) is the total mass of the beam. In our language then \(\varepsilon_{0}\approx C(V_{dc}-V_{0})^{2}\) is proportional to the dc part of the voltage driving, where \(V_{0}\) is the voltage corresponding to the centre of the level splitting, and \(\varepsilon^{\prime}(t)\) is given by the ac part \(CA(V_{dc}-V_{0})V_{ac}\cos(\omega t)\). 
In this way the coupled modes of these nanomechanical string resonators are described by the same equations that we used, and we expect that they can simulate the dephasing dynamics described here. ## V Conclusions We showed how the analogy between classical coupled oscillators and quantum TLS can be extended to include a decoherent dynamics of the TLS. We computed the relaxation times for the BV components and demonstrated that a pure dephasing time \(T_{\phi}\) (Eq. (32)) appears in the TLS dynamics, as a consequence of the addition of noise to the parametric driving of the classical system. This analogy can be exploited to effectively simulate or probe quantum decoherence dynamics and dissipation via classical means. However, the simulation in its present form cannot capture decay processes such as spontaneous emission. Simulation of open-quantum-system dynamics by addition of classical noise to analog, more controllable, physical systems has spurred much interest lately [31; 33]. In particular, Ref. [33] studied how classical noise can mimic quantum dissipation and derived very clear analogies between the two frameworks, considering for instance the spin-boson problem. In a more complicated framework, Ref. [31] studied how a wide class of master equations for many-body systems can be simulated in easily controllable systems subject to classical noise. Our analysis constitutes a simplification of these earlier attempts, since it does not require any quantum system at all, therefore enabling a purely classical simulation of quantum dissipation problems that in principle can be implemented in systems even beyond the nanoscale, although some other dissipative phenomena, e.g. spontaneous emission, remain out of reach for these kinds of simulation due to fundamental issues [32; 33]. Also in the system discussed here, the equality between the upward and downward transition rates in Eq. (34) precludes the simulation of spontaneous decay processes. Notably, the system investigated here can also be valuable for frequency noise detection in the aforementioned mechanical systems. Usually, this kind of noise is hard to investigate because of the interplay with thermal fluctuations. However, we see that in the system treated here these fluctuations are fully decoupled from the frequency noise and do not affect the relaxation times that we found (see Appendix A). Therefore one could infer frequency noise features from the measured decay times. ## Acknowledgements We gratefully acknowledge funding from the Deutsche Forschungsgemeinschaft Project No. 425217212, SFB 1432. ## Appendix A Non-homogeneous forcing case We now analyze the full case, i.e. the one with inhomogeneous forcing. We assume that in this case Eq. (4) can be modified simply by adding a vector with two noisy complex components, i.e. \(\mathbf{f}(t)=\left[\begin{smallmatrix}f_{1}(t)&f_{2}(t)\end{smallmatrix}\right]^{T}\). This is not too far from the treatment of Ref. [8], even if we should also account for the non-stationarity of the time factors introduced by the SVEA, which we will nonetheless neglect for simplicity. Therefore we will consider \(f_{1,2}(t)\) as a noise realisation of a stationary Gaussian process. 
Let us then consider the larger vector space (let us call it a combined Hilbert-Liouville space) where the following vector lives: \[\mathbf{\Psi}=\begin{bmatrix}\psi_{1}\\ \psi_{2}\\ \psi_{1}^{*}\psi_{1}\\ \psi_{1}^{*}\psi_{2}\\ \psi_{2}^{*}\psi_{1}\\ \psi_{2}^{*}\psi_{2}\end{bmatrix},\] so that we can treat everything on the same footing. Assuming that both \(D\) and \(G\) are small parameters with respect to \(\Omega=\sqrt{\Delta^{2}+\epsilon_{0}^{2}}\), the dynamics of this large vector can be described in the following way [74]: \[\langle\dot{\mathbf{\Psi}}(t)\rangle= \,\mathcal{K}(t)\langle\mathbf{\Psi}(t)\rangle \tag{10}\] \[= \begin{bmatrix}\mathrm{K}(t)&\varnothing_{2\times 4}\\ \mathbb{F}_{4\times 2}(t)&\mathbb{K}(t)\end{bmatrix}\langle\mathbf{\Psi}(t)\rangle\] \[+ \begin{bmatrix}\langle\mathbf{f}(t)\rangle\\ \int dt^{\prime}\langle\tilde{\mathbf{f}}(t)e^{t^{\prime}\mathrm{H}_{0}}\mathbf{f}(t-t^{\prime})\rangle\end{bmatrix},\] where the minor \(\mathbb{K}(t)\) is an operator that governs the time evolution of the second moments in the Liouville space. This is the analog of Eq. (15). Let us now focus on the lower left block; this is a \(4\times 2\) matrix of the form \[\mathbb{F}(t)=\langle\tilde{\mathbf{f}}(t)\rangle=\langle\mathbf{f}^{*}(t)\otimes\mathrm{Id}_{2}+\mathrm{Id}_{2}\otimes\mathbf{f}(t)\rangle\,.\] Note that if \(\langle\mathbf{f}\rangle=0\), i.e. if the fluctuation has zero average, which is the case considered in the main text, then the super-superoperator is \[\mathcal{K}(t)=\begin{bmatrix}\mathrm{K}(t)&\varnothing_{2\times 4}\\ \varnothing_{2\times 4}&\mathbb{K}(t)\end{bmatrix} \tag{11}\] and, notably, the dynamics of the vector \(\mathbf{\Psi}\) in Eq. (10) factorizes into the dynamics restricted to two subspaces. In other words, writing \(\mathbf{\Psi}\in V\), the subspaces \(V_{1}^{\prime}\equiv\mathrm{span}\{\mathbf{e}_{1},\mathbf{e}_{2}\}\) and \(V_{2}^{\prime}\equiv\mathrm{span}\{\mathbf{e}_{3},\mathbf{e}_{4},\mathbf{e}_{5},\mathbf{e}_{6}\}\) of \(V=V_{1}^{\prime}\oplus V_{2}^{\prime}\) are separately preserved by the dynamics. It is useful then to decompose the space \(V\) as a direct sum \(V=V_{1}\oplus V_{2}\), where \(V_{1}\equiv\mathrm{span}\{\mathbf{e}_{1},\mathbf{e}_{2}\}\) and \(V_{2}\equiv\mathrm{span}\{\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3},\mathbf{e}_{4}\}\). The space \(V_{2}\) is the tetradic space. Now the equation restricted to the subspace \(V_{1}\) is just the equation for the mean values of the stochastic variables, i.e. our state vector, which we will write making use of the cumulant expansion, \[\langle\dot{\mathbf{\psi}}\rangle=\mathrm{K}(t)\langle\mathbf{\psi}\rangle, \tag{12}\] where \[\mathrm{K}(t)= -i\big{\langle}\mathrm{H}(t)\big{\rangle} \tag{13}\] \[- \int_{0}^{+\infty}\!\!\left\langle\left(\mathrm{H}_{1}(t)\,e^{it^{\prime}\mathrm{H}_{0}}\mathrm{H}_{1}(t-t^{\prime})\right)\right\rangle e^{-it^{\prime}\mathrm{H}_{0}}dt^{\prime},\] and where \(\langle(\diamond)\rangle\) is the cumulant. Here the cumulant sign is actually important, because the perturbation is \(\mathrm{H}_{1}(t)=\frac{D\cos{(\omega t)}+G\,\Gamma_{d}(t)}{2\Omega}\big{(}\varepsilon_{0}\sigma_{z}-\Delta\sigma_{x}\big{)}\) with \(\langle\mathrm{H}_{1}(t)\rangle\neq 0\). 
The dynamics in \(V_{2}\) space is described by the following dynamic operator \[\mathbb{K}(t)= -i\big{\langle}\mathcal{L}(t)\big{\rangle} \tag{14}\] \[- \int_{0}^{+\infty}\!\!\left\langle\left(\mathcal{L}_{1}(t)\,e^{ it^{\prime}\mathcal{L}_{0}}\mathcal{L}_{1}(t-t^{\prime})\right)\right\rangle e^{- it^{\prime}\mathcal{L}_{0}}dt^{\prime}\] where \(\mathcal{L}_{1}(t)=\mathcal{L}_{d}(t)+\mathcal{L}_{s}(t)\). This is the analog of Eq. (15) [43]. We can show that in the perturbative regime, this dynamics actually reduces to the previous result (18) by considering that the deterministic part of the driving cancels in the cumulants. To see this, let us expand the cumulant as \[\left\langle\left(\mathcal{L}_{1}(t)\,e^{it^{\prime}\mathcal{L}_ {0}}\mathcal{L}_{1}(t-t^{\prime})\right)\right\rangle= \tag{15}\] \[\left\langle\mathcal{L}_{1}(t)\,e^{it^{\prime}\mathcal{L}_{0}} \mathcal{L}_{1}(t-t^{\prime})\right\rangle-\!\left\langle\mathcal{L}_{1}(t) \right\rangle\left\langle e^{it^{\prime}\mathcal{L}_{0}}\mathcal{L}_{1}(t-t^{ \prime})\right\rangle\] that follows from the definition of the cumulant and the fact that \(e^{it^{\prime}\mathcal{L}_{0}}\) is non-stochastic. Now we write \(\mathcal{L}_{1}(t)=\mathcal{L}_{d}(t)+\mathcal{L}_{s}(t)\), where \(\left\langle\mathcal{L}_{s}(t)\right\rangle=0\), which yields (16) since the terms containing the deterministic driving part in the RHS cancel with each other. With this we have for the second order cumulant expansion, \[\mathbb{K}(t)=-i\big{(}\mathcal{L}(t)\big{)}-\frac{1}{4\Omega^{2}} \begin{bmatrix}\Delta^{2}(k_{+}+k_{-})&-2\Delta\varepsilon_{0}k_{0}&-2\Delta \varepsilon_{0}k_{0}&-2\Delta^{2}(k_{+}+k_{-})\\ -2\Delta\varepsilon_{0}k_{+}&2\Delta^{2}k_{-}+4\varepsilon_{0}^{2}k_{0}&2 \Delta^{2}k_{+}&2\Delta\varepsilon_{0}k_{+}\\ -2\Delta\varepsilon_{0}k_{-}&-2\Delta\varepsilon_{0}k_{-}&2\Delta^{2}k_{-}+4 \varepsilon_{0}^{2}k_{0}&2\Delta\varepsilon_{0}k_{-}\\ -\Delta^{2}(k_{+}+k_{-})&2\Delta\varepsilon_{0}k_{0}&2\Delta\varepsilon_{0}k_ {0}&\Delta^{2}(k_{+}+k_{-})\end{bmatrix}, \tag{10}\] with no dependence on the driving in the dissipator. Note that in this way we cannot be assured that the higher orders vanish because the perturbation has non-zero stochastic average. However, if we take their expressions from [46], we can see by inspection that third order and fourth order terms vanish nonetheless. Considering now that \(\mathcal{L}_{1}^{*}(t)=\frac{D\cos(\omega t)+\Gamma(t)}{\Omega}\tilde{ \mathcal{L}}_{1}^{*}(t)\), we see that both third order and fourth order vanish. ## Appendix B Fourth order calculation To obtain the correction due to the correlations we should then complement our analysis by a fourth order expansion of Eq. 14. Thus refining our coarse-grain description by another order. Since the noise is Gaussian and stationary, odd orders in the perturbation \(\mathrm{H}_{s}\) vanish. Using again the non-viscous liquid approximation we get, \[\mathbb{K}^{\mathrm{IV}}(+\infty)\approx\iiint_{0}^{+\infty} \!\!dt_{1}dt_{2}dt_{3}\big{[}(\mathcal{L}_{s}^{*}(t)\mathcal{L}_{s}^{*}(t_{1 })\mathcal{L}_{s}^{*}(t_{2})\mathcal{L}_{s}^{*}(t_{3}))-(\mathcal{L}_{s}^{*}( t)\mathcal{L}_{s}^{*}(t_{1}))(\mathcal{L}_{s}^{*}(t_{2})\mathcal{L}_{s}^{*}(t_{3}))\] \[-\langle\mathcal{L}_{s}^{*}(t)\mathcal{L}_{s}^{*}(t_{2})\rangle( \mathcal{L}_{s}^{*}(t_{1})\mathcal{L}_{s}^{*}(t_{3}))-\langle\mathcal{L}_{s}^ {*}(t)\mathcal{L}_{s}^{*}(t_{3})\rangle(\mathcal{L}_{s}^{*}(t_{1})\mathcal{L}_ {s}^{*}(t_{2}))\big{]}. 
\tag{11}\] However, the time dependence of \(\mathcal{L}_{s}(t)=\frac{G\Gamma(t)}{\Omega}\big{[}\epsilon_{0}\sigma_{z}-\Delta\sigma_{x},\circ\big{]}\) is restricted to the noise realization, which is the only part relevant for the integral. Since the process is Gaussian, \(\langle\Gamma(t)\Gamma(t_{1})\Gamma(t_{2})\Gamma(t_{3})\rangle=\langle\Gamma(t)\Gamma(t_{1})\rangle\langle\Gamma(t_{2})\Gamma(t_{3})\rangle+\langle\Gamma(t)\Gamma(t_{2})\rangle\langle\Gamma(t_{1})\Gamma(t_{3})\rangle+\langle\Gamma(t)\Gamma(t_{3})\rangle\langle\Gamma(t_{1})\Gamma(t_{2})\rangle\), and since the operator parts of \(\mathcal{L}_{s}\) at different times all commute (they are proportional to one and the same superoperator), we find that the fourth-order contribution vanishes altogether.
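The Gaussian factorization of the four-time correlator used above (Wick/Isserlis theorem) is easy to verify numerically for the Ornstein-Uhlenbeck process. The following short Monte Carlo sketch is our own illustration; the sampling times and parameter values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
G, tau_c, dt, n_steps, n_traj = 0.5, 0.5, 0.01, 400, 20000
sig, dec = np.sqrt(G / tau_c), np.exp(-dt / tau_c)

# vectorized OU trajectories: traj[k, m] = Gamma(k*dt) in realization m
traj = np.empty((n_steps, n_traj))
traj[0] = sig * rng.standard_normal(n_traj)
for k in range(1, n_steps):
    traj[k] = traj[k - 1] * dec + sig * np.sqrt(1 - dec**2) * rng.standard_normal(n_traj)

t = [0, 50, 120, 300]                              # four arbitrary sampling times
lhs = np.mean(traj[t[0]] * traj[t[1]] * traj[t[2]] * traj[t[3]])
pair = lambda a, b: np.mean(traj[a] * traj[b])
rhs = (pair(t[0], t[1]) * pair(t[2], t[3])
       + pair(t[0], t[2]) * pair(t[1], t[3])
       + pair(t[0], t[3]) * pair(t[1], t[2]))
print(lhs, rhs)    # agree within sampling error, as the Gaussian factorization requires
```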
2308.09423
Overconvergent prismatic cohomology
In this note I define an overconvergent version of prisms and prismatic cohomology as introduced by Bhatt and Scholze and show that overconvergent prismatic cohomology specialises to $p$-adic cohomologies, like Monsky-Washnitzer resp. rigid cohomology for smooth varieties over a perfect field, the de Rham cohomology of smooth weak formal schemes over a perfectoid ring and the \'{e}tale cohomology of its generic fibre. Besides, I give an overconvergent version of the complex $A\Omega$ of Bhatt-Morrow-Scholze and relate it to overconvergent prismatic cohomology.
Andreas Langer
2023-08-18T09:45:08Z
http://arxiv.org/abs/2308.09423v1
# Overconvergent prismatic cohomology ###### Abstract. In this note I define an overconvergent version of prisms and prismatic cohomology as introduced by Bhatt and Scholze and show that overconvergent prismatic cohomology specialises to \(p\)-adic cohomologies, like Monsky-Washnitzer resp. rigid cohomology for smooth varieties over a perfect field, the de Rham cohomology of smooth weak formal schemes over a perfectoid ring and the etale cohomology of its generic fibre. Besides, I give an overconvergent version of the complex \(A\Omega\) of Bhatt-Morrow-Scholze and relate it to overconvergent prismatic cohomology. This research was supported by EPSRC grant EP/T005351/1. ## 0. Introduction In their fundamental work [1] Bhatt and Scholze introduced a new \(p\)-adic cohomology theory, called prismatic cohomology, which has a universal character and is based on their notion of prisms and the prismatic site. Fix a prime number \(p\). A prism is a pair \((A,I)\) that consists of a \(\delta\)-ring \(A\) and an ideal \(I\subset A\) satisfying some properties ([1, Definition 1.1]) such that \(A\) is \((p,I)\)-adically complete. The \(\delta\)-structure on \(A\) induces a Frobenius lift \(\varphi:A\to A\). In all cases that we consider \(I\) is a principal ideal generated by a non-zero divisor \(d\). The main examples in [1] are the perfect prisms (1) and (2) and the non-perfect prism (3): 1. \((W(k),(p))\) for a perfect field \(k\) of characteristic \(p\). Then \(\varphi\) is the Witt vector Frobenius and \(d=p\). (It is called the crystalline prism). 2. Let \(\mathbb{C}_{p}\) be the completion of an algebraic closure of \(\mathbb{Q}_{p}\) and \(\mathcal{O}=\mathcal{O}_{\mathbb{C}_{p}}\) its ring of integers with tilt \(\mathcal{O}^{\flat}=\varprojlim_{x\mapsto x^{p}}\mathcal{O}/p\). There is a natural surjection \[\theta:A_{\inf}(\mathcal{O}):=W(\mathcal{O}^{\flat})\to\mathcal{O}\] with \(\ker\theta\) a principal ideal, and let \(\varphi\) be the Witt vector Frobenius. Then \((A_{\inf}(\mathcal{O}),\ker\theta)\) is a perfect prism. An important example of a non-perfect prism is the Breuil-Kisin prism: 1. Let \(K\) be a finite totally ramified extension of \(W(k)[1/p]\). Fix a uniformiser \(\pi\) and let \(E(u)\in W(k)[u]\) denote the Eisenstein polynomial such that \(E(\pi)=0\). Then \((W(k)[\![u]\!],(E(u)))\) is a prism with \(\varphi\) sending \(u\) to \(u^{p}\). Now let \((A,I)\) be a prism which is bounded, i.e. \((A/I)[p^{\infty}]\cong(A/I)[p^{n}]\) for some \(n\). For a smooth \(p\)-adic formal scheme \(X\) over \(\operatorname{Spf}(A/I)\) Bhatt and Scholze introduced the prismatic site \((X/A)_{\mathbb{A}}\) with structure sheaf \(\mathcal{O}_{\mathbb{A}}\). An object of the prismatic site is a bounded prism \((B,I)\) over \((A,I)\) with a map \(\operatorname{Spf}(B/IB)\to X\) satisfying a compatibility condition. Then the prismatic cohomology is defined as \[R\Gamma_{\mathbb{A}}(X/A):=R\Gamma((X/A)_{\mathbb{A}},\mathcal{O}_{\mathbb{A}})\,.\] The main comparison results, demonstrating the universal character of prismatic cohomology, are stated in [1, Theorem 1.8]. 
In particular, Bhatt and Scholze show that if \(X\) is smooth and proper, in the above examples (1)-(3) \(R\Gamma_{\mathbb{A}}(X/A)\) is a perfect \((p,d)\)-complete complex in \(D(A)\), equipped with a \(\varphi_{A}\)-linear operator \(\varphi\), and * a Frobenius descent of crystalline cohomology \(R\Gamma_{\mathrm{cris}}(X/W(k))\) in case (1) * a Frobenius descent of the \(A_{\mathrm{inf}}\)-cohomology \(R\Gamma_{\mathrm{inf}}(X)\) defined in [1] in case (2) * is isomorphic to Breuil-Kisin cohomology \(R\Gamma_{\mathrm{BK}}(X)\) as defined in [1] in case (3). Then Koshikawa and Yao [14], [15] introduced a logarithmic version of prismatic cohomology, by extending the notion of \(\delta\)-ring to the log context, leading to the definition of logarithmic prisms and the logarithmic prismatic site. Then they establish, in the log-smooth case, analogous comparison results for logarithmic prismatic cohomology (log-crystalline, log-de Rham, etale, Hodge-Tate) see [15, Theorem 2]. They also recover the \(A_{\mathrm{inf}}\)-cohomology in the semistable case constructed by Cesnavicius and Koshikawa [13] and construct Breuil-Kisin cohomology in the semistable case by Breuil-Kisin descent along the canonical map of prisms \[(W(k)[\![u]\!],(E(u)))\to(A_{\mathrm{inf}}(\mathcal{O}),\ker\theta);\ \ u\mapsto[\![\pi^{\flat}]\!]\] for a compatible system \(\pi,\pi^{1/p},\ldots\) of \(p\)-power roots of \(\pi\) in \(\mathcal{O}\). The original motivation in our work was to impose an overconvergence condition on the prismatic site such that the hypercohomology of the corresponding overconvergent structure sheaf rationally recovers - when the base prism is the crystalline prism - the rigid cohomology of smooth varieties over a perfect field \(k\). This naturally led to the notion of dagger prisms defined below. Although we only treat the smooth case it is probably straightforward to extend the comparison with rigid cohomology to the semistable case as well, by introducing dagger versions of the logarithmic prisms and logarithmic prismatic site of Koshikawa-Yao, which will lead to a more general definition of overconvergent logarithmic prismatic cohomology. If one replaces \(p\)-adically formal \(\mathrm{Spf}\,\mathcal{O}\)-schemes by weak formal schemes over \(\mathrm{Spf}\,\mathcal{O}\), then our construction of the overconvergent prismatic site leads to dagger versions of the de Rham and etale comparison results as stated in [1, Theorem 1.8], see Theorem 0.11 and Theorem 0.12 below. Using the pro-etale site on affinoid dagger varieties we will construct a dagger version of \(A_{\mathrm{inf}}\)-cohomology and compare it with overconvergent prismatic cohomology. The proof relies on an almost purity property for overconvergent Witt vectors on perfectoid dagger algebras. I consider this work as an another endeavour to introduce the notion of overconvergence into the constructions of relative \(p\)-adic Hodge theory. It was also inspired by the recent progress on the study of prismatic \(F\)-crystals and their close connection to \(p\)-adic Galois representations. To explain this, let \(K\) be a \(p\)-adic field, \(K_{\infty}\) the completion of the infinite cyclotomic extension \(K(\mu_{p^{\infty}})\) with \(K_{\infty}^{\flat}\) its tilt. Let \(\Gamma_{K}=\mathrm{Gal}(K_{\infty}/K)\). 
Fontaine established an equivalence between the category of etale \((\varphi,\Gamma_{K})\)-modules \(\mathrm{Mod}_{W(K_{\infty}^{\flat})}^{\varphi,\Gamma_{K},\dot{\epsilon}t}\) and the category \(\mathrm{Rep}_{\mathbb{Z}_{p}}(G_{K})\) of finite free \(\mathbb{Z}_{p}\)-representations of the absolute Galois group. Wu [21] gave a prismatic approach to this equivalence by comparing both categories with the category of prismatic \(F\)-crystals in \(\mathcal{O}_{\mathbb{A}}[1/I]_{p}^{\wedge}\)-modules over the absolute prismatic site \((\mathcal{O}_{K})_{\mathbb{A}}\) Then using results of Bhatt-Scholze [1], Marks [14] generalised the result of Wu and gave a geometric relativisation by replacing \(\operatorname{Rep}_{\mathbb{Z}_{p}}(G_{K})\) by \(\mathcal{O}_{K}\)-local systems on the generic fibre of a formal scheme over \(\operatorname{Spf}\mathcal{O}_{L}\) (\(L\) a finite extension of \(\mathbb{Q}_{p}\)) and relating this to the category of Laurent \(F\)-crystals on the absolute prismatic site, i.e. vector bundles over a certain structure sheaf \(\mathcal{O}_{\mathbb{A}}[1/I]_{\pi}^{\wedge}\), equipped with an isomorphism \(\varphi^{*}\mathcal{M}\cong\mathcal{M}\), for a fixed uniformiser \(\pi\in\mathcal{O}_{L}\). To \(\pi\) one can associate a Lubin-Tate formal group \(\mathcal{G}\) over \(\mathcal{O}_{L}\). Then, for a \(p\)-adic field \(K\supset L\), let \(K_{\infty}\) be the \(p\)-adic completion of \(K(\mathcal{G}[\pi^{\infty}])\), \(K_{\infty}^{\flat}\) its tilt and \(\Gamma_{K}=\operatorname{Gal}(K_{\infty}/K)\). Then Kisin and Ren [13] construct a period ring \(A_{K}\subset W(K_{\infty}^{\flat})\otimes_{W(\mathbb{F}_{q})}\mathcal{O}_{L}\) and extend Fontaine's theory to Lubin-Tate \((\varphi_{q},\Gamma_{K})\)-modules to obtain an equivalence between the category of etale \((\varphi_{q},\Gamma_{K})\)-modules \(\operatorname{Mod}_{A_{K}}^{\varphi_{q},\Gamma_{K},\operatorname{\acute{e}t}}\) and the category \(\operatorname{Rep}_{\mathcal{O}_{K}}(G_{L})\) of finite free \(\mathcal{O}_{K}\)-representations of \(G_{L}\). Marks shows that the result of Kisin-Ren is a special case of his result on Laurent \(F\)-crystals on the absolute prismatic site ([14, Theorem 1.6]). Following ideas of Cherbonnier and Colmez [15], Forquaux and Xie [16] introduced the notion of overconvergence to \(K\)-linear representations of \(G_{L}\) and established an equivalence between the category of overconvergent \(K\)-representations of \(G_{L}\) and etale \((\varphi_{q},\Gamma_{K})\)-modules over the Robba ring ([16, Proposition 1.5]). It is therefore natural to search for a category of overconvergent Laurent \(F\)-crystals on the (overconvergent) prismatic site of weak formal schemes and to relate it to etale local systems on the generic fibre in a way that recovers the result of Forquaux-Xie. This will be a future project. In the following, we give the basic definitions of dagger prisms and overconvergent prismatic cohomology and will state the main results of the paper. We fix a perfect base prism \((A,I)\)[1, Definition 1.1 and Example 1.3] and assume that \(I=(d)\) is principal. 
Our main examples are \((A,I)=(W(k),(p))\), \(k\) a perfect field of characteristic \(p>0\), and \((A,I)=(A_{\inf}(\mathcal{O})=W(\mathcal{O}^{\flat}),I=(d))\) where \(\mathcal{O}=\mathcal{O}_{\mathbb{C}_{p}}\), \(\mathcal{O}^{\flat}\) its tilt and \(d\) a generator of the kernel of the ghost map \(\theta:W(\mathcal{O}^{\flat})\to\mathcal{O}\) (for example, let \(\epsilon=(1,\zeta_{p},\zeta_{p^{2}},\ldots)\in\mathcal{O}^{\flat}\) and \(d=1+[\epsilon^{1/p}]+\cdots+[\epsilon^{1/p}]^{p-1}\)). **Definition 0.1**.: A dagger prism of finite type over \(A\) is a pair \((S,\varphi)\) where \(S\) is an \(A\)-algebra of finite type and \(\varphi:S\to S\) is a Frobenius lift such that the following properties hold: 1. \(S\) is \(J=(p,d)\)-adically separated. 2. The \(J=(p,d)\)-adic completion \(\widehat{S}\) of \(S\), together with the induced lifting \(\widehat{\varphi}\) of the Frobenius, is a bounded prism and \(A\to\widehat{S}\) is a map of prisms. 3. There exists finitely many elements \(x_{1},\ldots,x_{r}\in S\) such that for any \(s\in\widehat{S}\) we have: - \[s=\sum_{\kappa\in\mathbb{N}^{r}}a_{\kappa}\underline{x}^{\kappa}\] and for \(m\in\mathbb{N}\), \(a_{\kappa}\in J^{m}\) up to finitely many \(\kappa\). - \(s\in S\) if and only if there exists \(\epsilon>0\) and \(C\in\mathbb{R}\) such that \[\gamma_{\epsilon}(s):=\inf_{\kappa\in\mathbb{N}^{r}}(v_{J}(a_{\kappa})- \epsilon|\kappa|)>C\,.\] Here \(v_{J}(a_{\kappa}):=n\) if \(a_{\kappa}\in J^{n}\) and \(a_{\kappa}\notin J^{n+1}\). _Remark 0.2_.: 1. \(S\) is weakly complete with respect to the ideal \(J=(p,d)\) in the sense of [11, Definition 1.2] 2. If \(\overline{S}=S/J\) is smooth over \(\overline{A}=A/J\) and \(S\) is a flat \(A\)-algebra, then \(S\) is a weak formalisation of \(\overline{S}\) in the sense of [13, Definition 3.2]. 3. Assume \(\overline{S}\) is smooth over \(\overline{A}\) or a complete transversal intersection and \(S\) is a weak formalisation. Then \(S\) is a very smooth lifting of \(\overline{S}\) to \((A,I)\) in the sense of [13, Definition 2.5]. In particular, given a diagram where \(\overline{\varphi}\) is the Frobenius, there exists a lifting of Frobenius \(\varphi\) on \(S\) and \((\widehat{S},\varphi)\) is a prism [13, Theorem 3.6 and Definition 2.4]. 4. \(S/dS\) is a weakly complete \(A/d\)-algebra with respect to the \(p\)-adic topology. **Definition 0.3**.: An \(A\)-algebra is called a dagger prism if it is a direct limit \(\varinjlim S_{I}\) of dagger prisms of finite type \(S_{I}\) with generators \(I=(x_{1},\ldots,x_{r})\). _Example 0.4_.: 1. Let \((A,(d))=(W(k),(p))\) and \(\overline{S}\) a smooth \(k\)-algebra. Then a Monsky-Washnitzer lift \(S^{\dagger}\) of \(\overline{S}\) defines a dagger prism over \((W(k),(p))\). 2. Let \(S\) be a dagger prism of finite type. The perfection \(\varinjlim_{\varphi}S=\varinjlim(S\xrightarrow{\varphi}S\xrightarrow{\varphi}S \xrightarrow{\varphi}\cdots)\) is a dagger prism. A variant of these perfect prisms which we will call dagger perfections will be very important. For a dagger prism \(S\) of finite type, we can, by Lemma 1.1 below, write \(S=\varinjlim_{\epsilon}S_{\epsilon}\) where \(S_{\epsilon}\) is a \((p,d)\)-adically complete subring of \(S\). Let \(\widehat{S}\) be the associated prism. Let \((\widehat{S_{\epsilon}/p})^{\mathrm{perf}}\) be the completion of the colimit perfection of \(S_{\epsilon}/p\). 
Define \[W^{\dagger}((\widehat{S_{\epsilon}/p})^{\mathrm{perf}})=\left\{(z_{0},z_{1}, \ldots)\in W((\widehat{S_{\epsilon}/p})^{\mathrm{perf}})\,:\,\varinjlim_{i}( i+\gamma_{\epsilon}(z_{i})/p^{i})=\infty\right\}.\] **Definition 0.5**.: With the above notation, \(S^{\dagger,\mathrm{perf}}:=\varinjlim_{\epsilon}W^{\dagger}((\widehat{S_{ \epsilon}/p})^{\mathrm{perf}})\) is called the dagger perfection of \(S\). _Remark 0.6_.: Let \((\widehat{S^{\mathrm{perf}}})\) be the \((p,d)\)-completed perfection of the prism \(\widehat{S}\). By [1, Lemma 3.9 and Theorem 3.10], \((\widehat{S^{\mathrm{perf}}})=W\left((\widehat{S/p})^{\mathrm{perf}}\right)\) and we have a commutative diagram Since \(S^{\dagger,\mathrm{perf}}\) is weakly complete with respect to the \((p,d)\)-adic topology and perfect, the map \(S\to\widehat{S}\to W\left((\widehat{S/p})^{\mathrm{perf}}\right)\) factors through \(S^{\dagger,\mathrm{perf}}\) and \(S^{\dagger,\mathrm{perf}}\) contains the perfection \(\varinjlim_{\varphi}S\) of \(S\). In analogy to the prismatic site [1, SS4], we now define the dagger prismatic site. **Definition 0.7**.: Let \((A,(d))\) be a perfect prism and \(X\) a \(p\)-adic weak formal \(\operatorname{Spf}(A/d)\)-scheme. The dagger prismatic site \((X/A)^{\dagger}_{\mathbb{A}}\) is the opposite category of the category of dagger prisms \((B,(d))\) over \((A,(d))\) with a map \(\operatorname{Spf}^{\dagger}(B/d)\to X\), where \(\operatorname{Spf}^{\dagger}(B/d)\) denotes the affine weak formal \(p\)-adic scheme associated to the weakly complete \(A/d\)-algebra \(B/d\). We endow an object \((B,(d))\) with faithfully flat covers \((B,(d))\to(C,(d))\) and write an object in \((X/A)^{\dagger}_{\mathbb{A}}\) as \[\operatorname{Spf}^{\dagger}B\leftarrow\operatorname{Spf}^{\dagger}(B/d)\to X\,.\] Let \((X/A)^{\dagger,\operatorname{perf}}_{\mathbb{A}}\subset(X/A)^{\dagger}_{ \mathbb{A}}\) be the full subcategory of objects \(\operatorname{Spf}^{\dagger}B\leftarrow\operatorname{Spf}^{\dagger}(B/d)\to X\) for which \(B\) is a dagger perfection, as in Definition 0.5. Note that the weak formal scheme \(\operatorname{Spf}^{\dagger}B\) is well-defined: The dagger prism \(B\) defines a presheaf on the underlying formal scheme \(\operatorname{Spf}\widehat{B}\) in the obvious way and since the ideal of definition is finitely generated (generated by \(p\) and \(d\)) one can follow Meredith's proof [10] to show that the presheaf is in fact a sheaf, defining \(\operatorname{Spf}^{\dagger}B\). Our main example is as follows: _Example 0.8_.: Let \(A=W(\mathcal{O}^{\flat})\), \(S=A^{\dagger}\langle U^{\pm 1}_{1},\dots,U^{\pm 1}_{d}\rangle\). Then define \(A^{\dagger}\langle U^{\pm 1/p^{\infty}}_{1},\dots,U^{\pm 1/p^{\infty}}_{d}\rangle\) as \(S^{\dagger\operatorname{perf}}\). Note that there is a canonical isomorphism of the \((p,d)\)-completed \(A\)-algebra \(A(\underline{U}^{\pm 1/p^{\infty}})\xrightarrow{\sim}W(\mathcal{O} \langle\underline{T}^{\pm 1/p^{\infty}}\rangle^{\flat})\) given by \(U^{1/p^{r}}_{i}\mapsto[(T^{1/p^{r}}_{i},T^{1/p^{r+1}}_{i},\dots,)]\)[1, p. 70]. We define \(W^{\dagger}(\mathcal{O}^{\dagger}\langle T^{\pm 1/p^{\infty}}_{1},\dots,T^{\pm 1/p^{ \infty}}_{d}\rangle^{\flat})\) as the image of \(A^{\dagger}\langle U^{\pm 1/p^{\infty}}_{1},\dots,U^{\pm 1/p^{\infty}}_{d}\rangle\) under this isomorphism. It is a perfect \((p,d)\)-weakly complete subalgebra of \(W(\mathcal{O}\langle\underline{T}^{\pm 1/p^{\infty}}\rangle^{\flat})\). In the following we define structure sheaves on the dagger prismatic site. 
We first consider the affine case, so let \(X=\operatorname{Spf}^{\dagger}R\) be a weak formal \(\mathcal{O}\)-scheme. We define two presheaves on \((R/A)^{\dagger}_{\mathbb{A}}\) by \[\mathcal{O}^{\dagger}_{\mathbb{A}}(R\to B/dB\gets B)=B\] and \[\overline{\mathcal{O}}^{\dagger}_{\mathbb{A}}(R\to B/dB\gets B)=B/d\,.\] Both presheaves are equipped with an action of \(\varphi\). We will show that \(\mathcal{O}^{\dagger}_{\mathbb{A}}\) and \(\overline{\mathcal{O}}^{\dagger}_{\mathbb{A}}\) are sheaves, and we point out that in the affine case presheaf cohomology computes the correct cohomology (see for example [1, Lecture V]). **Definition 0.9**.: 1. We define overconvergent prismatic cohomology by \[\mathbb{A}^{\dagger}_{R/A}:=R\Gamma((R/A)^{\dagger}_{\mathbb{A}},\mathcal{O}^{ \dagger}_{\mathbb{A}})\,.\] It is represented by a complex of dagger prisms over \(A\) (this follows from the discussion after Lemma 1.5 below). 2. Dagger-Hodge-Tate cohomology is defined as \[\overline{\mathbb{A}}^{\dagger}_{R/A}:=R\Gamma((R/A)^{\dagger}_{\mathbb{A}}, \overline{\mathcal{O}}^{\dagger}_{\mathbb{A}})\,.\] This is represented by a complex of \(p\)-adically weakly complete \(\mathcal{O}\)-algebras. Due to the sheaf property the definition extends to weak formal \(\mathcal{O}\)-schemes \(X\) to get complexes \(R\Gamma((X/A)^{\dagger}_{\mathbb{A}},\mathcal{O}^{\dagger}_{\mathbb{A}})\) and likewise \(R\Gamma((X/A)^{\dagger}_{\mathbb{A}},\overline{\mathcal{O}}^{\dagger}_{\mathbb{ A}})\). Then we have the following main results. **Theorem 0.10**.: _(= Theorem 2.2, Corollary 2.3)_ 1. _Let_ \(\overline{R}\) _be a smooth_ \(k\)_-algebra with Monsky-Washnitzer lift_ \(R^{\dagger}\)_. Then we have a canonical isomorphism_ \[\mathbb{A}_{\overline{R}/W(k)}^{\dagger}\otimes\mathbb{Q}\simeq\Omega_{R^{ \dagger}/W(k)}^{\bullet}\otimes\mathbb{Q}\,.\] 2. _Let_ \(X\) _be a smooth_ \(k\)_-scheme. Then we have an isomorphism_ \[R\Gamma((X/A)_{\mathbb{A}}^{\dagger},\mathcal{O}_{\mathbb{A}}^{\dagger}) \otimes\mathbb{Q}\simeq R\Gamma_{\mathrm{rig}}(X/W(k)\otimes\mathbb{Q})\,.\] **Theorem 0.11**.: _(= Theorem 3.1) Let \((A,d)\) be a perfect prism, \(A/d=\mathcal{O}\) and \(X/\mathcal{O}\) a smooth weak formal scheme with generic fibre \(X_{\eta}=X\times_{\mathrm{Spec}\,\mathcal{O}}\mathrm{Spec}\,\mathcal{O}[1/p]\). Let \(\mu:(X_{\eta})_{\mathrm{et}}\to X_{\mathrm{et}}\). Then we have a canonical isomorphism_ \[R\mu_{*}\mathbb{Z}/p^{n}\simeq(\mathbb{A}_{X/A}^{\dagger}[1/d]/p^{n})^{\varphi =1}\,.\] _If \(X=\mathrm{Spf}^{\dagger}S\) is an affine weak formal scheme with generic fibre \(\mathrm{Spec}\,S[1/p]\), then we have_ \[R\Gamma(\mathrm{Spec}\,S[1/p],\mathbb{Z}/p^{n})\simeq(\mathbb{A}_{S/A}^{ \dagger}[1/d]/p^{n})^{\varphi=1}\,.\] **Theorem 0.12**.: _(= Theorem 5.14) Let \(A=A_{\mathrm{inf}}(\mathcal{O})\), \(X/\mathcal{O}\) a smooth weak formal scheme. Then we have an isomorphism_ \[\varphi^{*}R\Gamma((X/A)_{\mathbb{A}}^{\dagger},\mathcal{O}_{\mathbb{A}}^{ \dagger})\otimes^{L}A/[p]_{q}A\cong\Omega_{X/\mathcal{O}}^{\dagger\bullet}\,.\] _Remark 0.13_.: Thus theorems 0.11 and 0.12 are the dagger analogues of [1, Theorem 9.1] and [1, Theorem 1.8(3)]. Finally we define an overconvergent version \(A^{\dagger}\Omega_{\mathcal{X}/\mathcal{O}}\) of the complex \(A\Omega_{\mathcal{X}/\mathcal{O}}\) of Bhatt-Morrow-Scholze, for a smooth weak formal \(\mathcal{O}\)-scheme \(\mathcal{X}\) (for the definition see SS5) and prove the following comparison result. 
**Theorem 0.14**.: _(= Theorem 5.3) Let \(\mathcal{X}\) be a smooth weak formal scheme over \(\mathrm{Spf}\,\mathcal{O}\). Then we have a \(\varphi\)-equivariant quasi-isomorphism_ \[R\Gamma(\mathcal{X},A^{\dagger}\Omega_{\mathcal{X}/\mathcal{O}})\otimes_{A_{\mathrm{inf}}}^{L\dagger}B_{\mathrm{cris}}^{+}\cong\varphi^{*}\left(R\Gamma((\mathcal{X}/A_{\mathrm{inf}})_{\mathbb{A}},\mathcal{O}_{\mathbb{A}}^{\dagger})\right)\otimes_{A_{\mathrm{inf}}}^{L\dagger}B_{\mathrm{cris}}^{+}\,.\] In the last section we define an overconvergent version \(W^{\dagger}\Omega_{\mathcal{X}/\mathcal{O}}^{\bullet}\) of the relative de Rham-Witt complex \(W\Omega_{\mathcal{X}/\mathcal{O}}^{\bullet}\), generalising the construction in [1], and compare it with \(A\Omega_{\mathcal{X}/\mathcal{O}}^{\dagger}\). **Theorem 0.15**.: _(= Theorem 6.5) Let \(\mathcal{X}\) be a weak formal smooth \(\mathcal{O}\)-scheme. Then_ \[A\Omega_{\mathcal{X}/\mathcal{O}}^{\dagger}\otimes_{A_{\mathrm{inf}}}^{L}W(\mathcal{O})\cong W^{\dagger}\Omega_{\mathcal{X}/\mathcal{O}}^{\bullet}\,.\]
## 1. Perfect dagger prisms, perfectoid dagger algebras and the structure sheaf of the overconvergent prismatic site
In this section we prove some properties of dagger prisms and in particular give a dagger version of an equivalence of categories due to Bhatt-Scholze between perfect prisms and perfectoid algebras. Finally we prove the sheaf property of the structure (pre-)sheaves \(\mathcal{O}_{\mathbb{A}}^{\dagger}\) and \(\overline{\mathcal{O}}_{\mathbb{A}}^{\dagger}\). **Lemma 1.1**.: _Let \(S\) be a dagger prism of finite type with generators \(x_{1},\dots,x_{r}\). Then \(S=\varinjlim_{\epsilon}S_{\epsilon}\) where \(S_{\epsilon}\) is a \((p,d)\)-adically complete subring of \(S\)._ Proof.: Define \(A^{\dagger}\langle T_{1},\ldots,T_{r}\rangle\) to be the weak completion of \(A[T_{1},\ldots,T_{r}]\) with respect to the \((p,d)\)-adic topology. Then \[\lambda\,:\,A^{\dagger}\langle T_{1},\ldots,T_{r}\rangle\to S\,;\,T_{i}\mapsto x_{i}\] is surjective. Let \[A_{\epsilon}\langle T_{1},\ldots,T_{r}\rangle=\{f\in A\langle T_{1},\ldots,T_{r}\rangle\,|\,f\text{ has radius of convergence }p^{\epsilon},\epsilon>0,\text{ in }A[1/p]\langle T_{1},\ldots,T_{r}\rangle\}\,.\] This is a \((p,d)\)-adically complete subring of \(A\langle T_{1},\ldots,T_{r}\rangle\), and for \(f=\sum_{I}a_{I}\underline{T}^{I}\) with \(a_{I}\in A\) we have \(f\in A_{\epsilon}\langle T_{1},\ldots,T_{r}\rangle\) if and only if \(\varinjlim_{I}(v_{J}(a_{I})-\epsilon|I|)=\infty\). The ring \(A_{\epsilon}\langle T_{1},\ldots,T_{r}\rangle\) is equipped with the Gauss norm \(\gamma_{\epsilon}(f)=\min_{I}\{v_{J}(a_{I})-\epsilon|I|\}>-\infty\), and \(A^{\dagger}\langle T_{1},\ldots,T_{r}\rangle=\varinjlim_{\epsilon}A_{\epsilon}\langle T_{1},\ldots,T_{r}\rangle\). Then \(S_{\epsilon}=\lambda(A_{\epsilon}\langle T_{1},\ldots,T_{r}\rangle)\) is a \((p,d)\)-adically complete subring of \(S\) and \(S=\varinjlim_{\epsilon}S_{\epsilon}\). _Remark_.: Note that \(S_{\epsilon}\) is in general not a prism, because it does not admit a lifting of Frobenius. Note that the perfection of a dagger prism is already weakly complete with respect to the \((p,d)\)-adic topology. Indeed, let \(S=\varinjlim_{\epsilon}S_{\epsilon}\) be as in Lemma 1.1. Let \(\varphi(x_{i})=x_{i}^{p}+p\delta(x_{i})\) for \(i=1,\ldots,r\). Choose \(\epsilon\) such that \(\gamma_{\epsilon}(p\delta(x_{i}))>0\) for all \(i\). This is possible since \(\gamma_{\epsilon}(p)=1\).
Then for any \(s\in S_{\epsilon}\) we have \(\varphi(s)\in S_{\epsilon/p}\) and hence \[\varinjlim_{\varphi}S=\varinjlim_{\epsilon}\varinjlim(S_{\epsilon}\xrightarrow{\varphi}S_{\epsilon/p}\xrightarrow{\varphi}S_{\epsilon/p^{2}}\xrightarrow{\varphi}\cdots)\] is weakly complete. **Definition 1.2**.: An \(\mathcal{O}\)-algebra \(R\) is a perfectoid dagger algebra if it is weakly complete with respect to the \(\pi\)-adic topology for some element \(\pi\in R\) with \(p\in\pi^{p}R\), the Frobenius \(\varphi:R/p\to R/p\) is surjective and the \(\pi\)-adic completion \(\widehat{R}\) is perfectoid. Moreover, we require that \(\widehat{R}\) is equipped with a family of Gauss norms \(\gamma_{\epsilon}\) such that \(R_{\epsilon}:=\{r\in\widehat{R}\,:\,\gamma_{\epsilon}(r)\text{ is finite}\}\) is a perfectoid \(\mathcal{O}\)-algebra and such that \(R=\varinjlim_{\epsilon}R_{\epsilon}\). Then we have a dagger version of [1, Theorem 3.10]: **Proposition 1.3**.: _There is an equivalence of categories between dagger perfect prisms \(B\) and perfectoid dagger \(\mathcal{O}\)-algebras \(R\). The two functors are given as_ \[G\,:\,B\mapsto B/d\] _and_ \[H\,:\,R\mapsto A^{\dagger}_{\inf}(R)=W^{\dagger}(R^{\flat})\] _where the definition of \(W^{\dagger}(R^{\flat})\) will be given below. It is a generalisation of \(W^{\dagger}(\mathcal{O}^{\dagger}\langle\underline{T}^{\pm 1/p^{\infty}}\rangle^{\flat})=H(R)\) for \(R=\mathcal{O}^{\dagger}\langle\underline{T}^{\pm 1/p^{\infty}}\rangle\)._ Proof.: We assume that a perfectoid dagger \(\mathcal{O}\)-algebra \(R\) can be written as a direct limit \(R=\varinjlim_{\epsilon}R_{\epsilon}\) where \(R_{\epsilon}\) is a perfectoid \(\mathcal{O}\)-algebra. We omit here the exact description of \(R_{\epsilon}\), which will become clear when we define an equivalence of categories between perfectoid dagger \(K\)-algebras and perfectoid dagger \(K^{\flat}\)-algebras for a perfectoid field \(K\) and its tilt \(K^{\flat}\). Let \(R^{\flat}_{\epsilon}\) be the tilt of \(R_{\epsilon}\), so \(R^{\flat}_{\epsilon}=\varprojlim_{\varphi}R_{\epsilon}/p\), and define \[B=H(R)=\varinjlim W^{\dagger}(R^{\flat}_{\epsilon})=:A^{\dagger}_{\inf}(R)\,.\] It is the smallest \((p,d)\)-weakly complete subring of \(W(\widehat{R}^{\flat})\) containing \([x]\) for \(x\in R^{\flat}:=\varinjlim_{\epsilon}R^{\flat}_{\epsilon}\). Conversely, let \(B\) be a dagger perfect prism obtained as \(B=S^{\dagger\mathrm{perf}}\) for a dagger prism \(S\). Then let \((S/d)^{\mathrm{perf}}=\varinjlim_{\epsilon}(S_{\epsilon}/d)^{\mathrm{perf}}\) be the dagger perfectoidisation of \(S/d\) and \(((S/d)^{\mathrm{perf}})^{\flat}\) its tilt. Then we have \[(S/p)^{\mathrm{perf}}=\varinjlim_{\epsilon}(\widehat{S_{\epsilon}/p})^{\mathrm{perf}}=((S/d)^{\mathrm{perf}})^{\flat}\] and \(\varinjlim_{\epsilon}\widehat{S_{\epsilon}^{\mathrm{perf}}}\) is the smallest \((p,d)\)-weakly complete perfect \(A\)-subalgebra of \(W(((\widehat{S/d})^{\mathrm{perf}})^{\flat})\) containing \(S\). By definition, this consists of elements \(\sum p^{i}[x_{i}]^{1/p^{i}}\), \(x_{i}\in(S/p)^{\mathrm{perf}}\), such that there exists \(\epsilon>0\) with \(\inf_{i}\{i+\frac{1}{p^{i}}\gamma_{\epsilon}(x_{i})\}>-\infty\), where \(\gamma_{\epsilon}\) is a Gauss norm with respect to the \(d\)-adic topology. We define \[W^{\dagger}(((S/d)^{\mathrm{perf}})^{\flat}):=W^{\dagger}((S/p)^{\mathrm{perf}})=\varinjlim_{\epsilon}\widehat{S_{\epsilon}^{\mathrm{perf}}}=B\] and \(G(B)=B/d=R\). This is \(p\)-adically weakly complete.
Let \(\pi\in\widehat{R}\) be an element satisfying \(\pi^{p}\mid p\) in the perfectoid ring \(\widehat{R}\). It can be chosen to lie in \(R\): Let \(d=a_{0}\mod p\) for \(a_{0}\in R^{\flat}\). Define \(\pi\) to be the image of \([a_{0}^{1/p}]\) in \(R\). Then \(\pi^{p}\) divides \(p\) in \(R\). Evidently \(\varphi:R/p\to R/p\) is surjective and hence \(R\) is dagger perfectoid. After taking \((p,d)\)-, resp. \(\pi\)-, adic completion, the functors \(G\circ H\) and \(H\circ G\) are identities. Since \(G\) maps perfect dagger prisms to perfectoid dagger \(\mathcal{O}\)-algebras and \(H\) maps perfectoid dagger \(\mathcal{O}\)-algebras to perfect dagger prisms, \(G\circ H\) and \(H\circ G\) are again identities as functors. The proposition follows. We will compute the (pre-)sheaf cohomology by Cech-Alexander complexes, explained in the next section. Now we show that \(\mathcal{O}_{\mathbb{A}}^{\dagger}\) and \(\overline{\mathcal{O}}_{\mathbb{A}}^{\dagger}\) are in fact sheaves on \((R/A)_{\mathbb{A}}^{\dagger}\) with respect to the Grothendieck topology generated by faithfully flat covers. Let \((A,d)\) be a prism, and \(S\) an overconvergent prism over \((A,d)\). As in Lemma 1.1, write \(S=\varinjlim_{\epsilon}S_{\epsilon}\) where \(S_{\epsilon}\) is a \((p,d)\)-complete \(A\)-algebra. We consider the site of all \((p,d)\)-complete \(A\)-algebras with the topology where covers are faithfully flat \((p,d)\)-complete coverings. Now let \(S\to B\) be a faithfully flat cover of dagger prisms. Writing \(B=\varinjlim_{\delta\to 0}B_{\delta}\) as in Lemma 1.1, we can assume that there exists \(\delta=\phi(\epsilon)\) for a continuous function \(\phi:]0,\epsilon_{0}[\rightarrow]0,\phi(\epsilon_{0})[\) with \(\varinjlim_{\epsilon\to 0}\phi(\epsilon)=0\), such that \(B_{\phi(\epsilon)}\) is faithfully flat over \(S_{\epsilon}\). After renaming indices, \(B_{\epsilon}\) is faithfully flat over \(S_{\epsilon}\). By [1, Corollary 3.12] the functor that sends \((S,d)\mapsto S\) (resp. \(S/d\)) forms a sheaf with vanishing higher cohomology. Indeed, consider the \((p,d)\)-complete Cech-nerve \(B_{\epsilon}^{\star}\) of \(S_{\epsilon}\to B_{\epsilon}\). By the proof of [1, Corollary 3.12] the corresponding total complex is exact and the sheaf property holds. Since \(B^{\star}=\varinjlim_{\epsilon\to 0}B_{\epsilon}^{\star}\) is the weakly completed (with respect to the \((p,d)\)-adic topology) Cech-nerve of \(S\to B\), it satisfies faithfully flat descent and vanishing higher cohomology as well, by exactness of the direct limit. The proof for \(S/d\) is similar. Then we have **Proposition 1.4**.: \(\mathcal{O}_{\mathbb{A}}^{\dagger}\) _and \(\overline{\mathcal{O}}_{\mathbb{A}}^{\dagger}\) define sheaves on \((R/A)_{\mathbb{A}}^{\dagger}\)._ The proof is straightforward from the previous arguments, using weakly completed Cech-nerves of faithfully flat covers of dagger prisms. We have a natural map \[v:\operatorname{Shv}(R/A)^{\dagger}_{\mathbb{A}}\to\operatorname{Shv}(\operatorname{Spf}^{\dagger}R)_{\operatorname{\acute{e}t}}\] which gives a complex of etale sheaves \[\mathbb{A}^{\dagger}:=Rv_{*}\mathcal{O}^{\dagger}_{\mathbb{A}}\,.\] Again we can define dagger prismatic cohomology as \(R\Gamma(\operatorname{Spf}^{\dagger}R,\mathbb{A}^{\dagger})\). By the obvious gluing process we obtain sheaves \(\mathcal{O}^{\dagger}_{\mathbb{A}}\) and \(\overline{\mathcal{O}}^{\dagger}_{\mathbb{A}}\) for any weak formal scheme \(X\) and can define overconvergent prismatic cohomology as \(R\Gamma(X,\mathbb{A}^{\dagger})\).
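The Cech-Alexander complexes used in the next paragraphs are built from weakly completed free \(\delta\)-algebras. As a reading aid we recall the standard \(\delta\)-ring identities (general facts about \(\delta\)-structures and Frobenius lifts, not specific to the overconvergent setting): on a \(p\)-torsion-free ring, a \(\delta\)-structure is the same datum as a Frobenius lift \(\varphi\), via \[\varphi(x)=x^{p}+p\,\delta(x),\qquad\delta(xy)=x^{p}\delta(y)+y^{p}\delta(x)+p\,\delta(x)\delta(y),\] \[\delta(x+y)=\delta(x)+\delta(y)-\sum_{i=1}^{p-1}\frac{1}{p}\binom{p}{i}x^{i}y^{p-i}\,.\] In particular, the free \(\delta\)-\(A\)-algebra on variables \(T_{1},\ldots,T_{s}\) is the polynomial algebra \(A[T_{i},\delta(T_{i}),\delta^{2}(T_{i}),\ldots]\) with \(\varphi\) determined by these rules; this is the shape of the algebras \(F_{\delta}\) appearing below.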
In analogy to prismatic cohomology we describe now the computation of overconvergent prismatic cohomology for smooth weak formal affine schemes \(X=\operatorname{Spf}^{\dagger}R\) over \(\operatorname{Spf}A/d\) using the Cech-Alexander complex. We recall the following lemma from [Stacks, Tag 07JM] (see also [Bha18a, Lecture V, Lemma 4.3]). **Lemma 1.5**.: _Let \(\mathscr{C}\) be a small category admitting finite non-empty products. Let \(\mathcal{F}\) be an abelian presheaf. Assume there exists a weakly final object \(X\in\mathscr{C}\), i.e. \(\operatorname{Hom}(X,Y)\neq 0\) for all \(Y\in\mathscr{C}\). Then \(R\Gamma(\mathscr{C},\mathcal{F})\) is computed by the chain complex (the totalisation of) attached to_ \[\mathcal{F}(X)\to\mathcal{F}(X\times X)\rightrightarrows\mathcal{F}(X\times X \times X)\stackrel{{\rightarrow}}{{\rightarrow}}\cdots\] _by applying \(\mathcal{F}\) to the Cech-nerve of \(X\)._ As in [Bha18a, Lecture V, Corollary 5.2] it is proved that \((R/A)^{\dagger}_{\mathbb{A}}\) admits finite non-empty coproducts. There exists a free \(A\)-algebra (a polynomial algebra) \(F_{0}\) with a surjection \(F_{0}\to R\), let \(J_{0}=\ker(F_{0}\to R)\) and let \(F_{\delta}\) be the weakly completed free \(\delta\)-\(A\)-algebra on \(F_{0}\), following the construction in [BS22, 4.17]. So if \(F_{0}=A[T_{1},\ldots,T_{s}]\) then \(F_{\delta}=A^{\dagger}\langle T_{i},\delta(T_{i}),\delta^{2}(T_{i}),\ldots \rangle_{i=1}^{s}\). Let \(J_{0}=J_{0}F_{\delta}\) be the ideal in \(F_{\delta}\) generated by \(J_{0}\). Then we construct the dagger prism \((F,dF)\) by taking \(F_{\delta}^{\dagger}\{\frac{J_{0}}{d}\}\) where \(\dagger\) means weak completion with respect to the \((p,d)\)-adic topology. It has the obvious universal property. We get the following commutative diagram where the maps on the top are \(\delta\)-maps. We obtain an object \(X\) in \((R/A)^{\dagger}_{\mathbb{A}}\) \[X:=\left(R\to F/dF\gets F\right).\] Then \(X\) is a weakly initial object. Indeed, for any \((R\to B/dB\gets B)\in(R/A)^{\dagger}_{\mathbb{A}}\) there exists a map \(F_{0}\to B\) of \(A\)-algebras compatible with \(R\to B/dB\). It extends to a map of \(\delta\)-\(A\)-algebras \(F_{\delta}\to B\) and finally to a map of dagger prisms \(F\to B\) by the universal property of \(F\). By Lemma 1.5\(\mathbb{A}^{\dagger}_{R/A}\) is computed by the cosimplicial \(\delta\)-\(A\)-algebra \[F^{0}\to F^{1}\rightrightarrows F^{2}\stackrel{{\rightarrow}}{{ \rightarrow}}\cdots\] where \(F^{n}=\mathcal{O}^{\dagger}_{\mathbb{A}}(X^{\times(n+1)})\), so \(F=F^{0}\) and \(F^{n}\) is a \(d\)-torsion-free \((p,d)\)-weakly complete \(\delta\)-ring. It can be constructed as follows: Let \(F^{\star}_{0}\) be the \((p,d)\)-weakly completed Cech-nerve of \(A\to(F_{0})^{\dagger}=F_{0}^{0}\) and let \(J^{\star}\) be the kernel of the augmentation \(F^{\star}_{0}\to F_{0}^{0}\to R\). To each \(F^{\star}_{0}\) we apply the above construction of a weakly completed \(\delta\)-\(A\)-algebra \(F^{\star}_{\delta}\) on \(F^{\star}_{0}\) with ideal \(J^{\star}F^{\star}_{\delta}\) and take the associated prism \(F^{\star}=F^{\star\dagger}_{\delta}\{\frac{J^{\star}}{d}\}\) to obtain a cosimplicial object \((F^{\star}\to F^{\star}/dF^{\star}\gets R)\) in \((R/A)^{\dagger}_{\mathbb{A}}\) which is the Cech-nerve of \((F^{0}\to F^{0}/dF^{0}\gets R)\), the latter being the weakly final object of the topos \(\operatorname{Shv}(R/A)^{\dagger}_{\mathbb{A}}\). Hence \(\Delta^{\dagger}_{R/A}\cong\) totalisation of \(F^{\star}\). 
In the next section we derive our first comparison theorem, namely for the base prism \((A=W(k),I=(p))\) we can compare (rational) overconvergent prismatic cohomology of \((X/A)^{\dagger}_{\mathbb{A}}\), for a smooth \(k\)-scheme \(X\), with rigid cohomology.
## 2. The comparison with Monsky-Washnitzer, respectively rigid, cohomology
Let \(\overline{A}=k[T_{1},\dots,T_{d}]/(\overline{f}_{1},\dots,\overline{f}_{r})\) be a smooth \(k\)-algebra with Monsky-Washnitzer lift \(A^{\dagger}=W(k)[T_{1},\dots,T_{d}]^{\dagger}/(f_{1},\dots,f_{r})\). Consider the following frame over \(W(k)\): \[B=W(k)[T_{1},\dots,T_{d}][Y_{1},\dots,Y_{r}]/(f_{1}-pY_{1},\dots,f_{r}-pY_{r})\,.\] Then \(B\) is an Elkik lift of \(\overline{B}=B/p=\overline{A}[Y_{1},\dots,Y_{r}]\). Let \(B^{\dagger}\) be the Monsky-Washnitzer (= weak) completion of \(B\), which is a Monsky-Washnitzer lift of \(\overline{B}\). We have \[\Omega^{\bullet}_{B^{\dagger}/W(k)}\otimes\mathbb{Q}\cong\Omega^{\bullet}_{A^{\dagger}/W(k)}\otimes\mathbb{Q}\] by [13, Theorem 5.4]. Now we use the ideas in [10, Construction 4.17, 4.18] to compute overconvergent prismatic cohomology using a Cech-Alexander complex. Let \(M^{\star}\) be the weakly completed Cech-nerve of \(W(k)\to M^{0}=W(k)[T_{1},\dots,T_{d}]\), so \(M^{n}=((W(k)[T_{1},\dots,T_{d}])^{\otimes(n+1)})^{\dagger}\). Let \(J^{\star}\) be the kernel of \(M^{\star}\to M^{0}\to\overline{A}\). Take for each \(M^{\star}\) the associated weakly completed free \(\delta\)-\(W(k)\)-algebra \(F^{\star}_{\delta}\) on \(M^{\star}\) with ideal \(J^{\star}F^{\star}_{\delta}\) and let \(B^{\dagger\star}=F^{\star\dagger}_{\delta}\{\frac{J^{\star}}{p}\}\), which is the Cech-nerve of \(B^{\dagger 0}=F^{0\dagger}_{\delta}\{\frac{J^{0}}{p}\}\). Then \((B^{\dagger\star}\to B^{\dagger\star}/p\leftarrow\overline{A})\) is the Cech-nerve as cosimplicial object of \((B^{\dagger 0}\to B^{\dagger 0}/p\leftarrow\overline{A})\) in \((\overline{A}/W(k))^{\dagger}_{\mathbb{A}}\). Then it follows from Lemma 1.5 and the subsequent construction that any object in \((\overline{A}/W(k))^{\dagger}_{\mathbb{A}}\) receives a map from \((B^{\dagger 0}\to B^{\dagger 0}/p\leftarrow\overline{A})\), hence the latter object is a weakly initial object in \((\overline{A}/W(k))^{\dagger}_{\mathbb{A}}\). This implies that \(\mathbb{A}^{\dagger}_{\overline{A}/W(k)}\) is computed by \(B^{\dagger\star}\). Let \(f_{1},\dots,f_{r(n)}\) be generators of the ideal \(J^{n}\). Then \[B^{\dagger n}=F^{n\dagger}_{\delta}\langle Y_{1},\dots,Y_{r(n)}\rangle/(f_{1}-pY_{1},\dots,f_{r(n)}-pY_{r(n)})\,.\] Then \(B^{\dagger n}\otimes_{W(k)}k=\overline{A}[Z_{1},Z_{2},\dots]\) is a polynomial algebra over \(\overline{A}\) in infinitely many variables. It is then clear that \(B^{\dagger n}\) is a direct limit of Monsky-Washnitzer lifts of polynomial algebras over \(\overline{A}\) in finitely many variables. Applying the argument of [13, Theorem 5.4] as at the beginning of this section we conclude that \[\Omega^{\bullet}_{B^{\dagger n}/W(k)}\otimes\mathbb{Q}\cong\Omega^{\bullet}_{A^{\dagger}/W(k)}\otimes\mathbb{Q}\] for all \(n\). We will see that an argument analogous to [11] implies that \[\mathbb{A}^{\dagger}_{\overline{A}/W(k)}\otimes\mathbb{Q}\cong\Omega^{\bullet}_{A^{\dagger}/W(k)}\otimes\mathbb{Q}\,.\] Indeed, let \(M^{r,s}=\Omega^{r}_{B^{\dagger s}/W(k)}\otimes\mathbb{Q}\), and consider \(M^{\bullet,\bullet}\) as a first quadrant double complex.
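As a reading aid for the argument that follows, we recall a standard fact about first quadrant double complexes (recorded here with our indexing \(M^{r,s}=\Omega^{r}_{B^{\dagger s}/W(k)}\otimes\mathbb{Q}\); it is not specific to this paper): filtering \(\operatorname{Tot}(M^{\bullet,\bullet})\) by the index \(r\), respectively by the index \(s\), gives two convergent spectral sequences, both abutting to \(H^{*}(\operatorname{Tot}(M^{\bullet,\bullet}))\). In particular, if the complexes \(M^{i,\bullet}\) are acyclic for all \(i>0\), then the projection \[\operatorname{Tot}(M^{\bullet,\bullet})\longrightarrow M^{0,\bullet}\] is a quasi-isomorphism; this is the way Lemma 2.1 below will be used.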
**Lemma 2.1**.: _For \(i>0\) the cosimplicial module \(\Omega^{i}_{B^{\dagger\star}/W(k)}\otimes\mathbb{Q}\) is homotopy equivalent to zero._ Proof.: This holds for the cosimplicial module \(\Omega^{i}_{M^{\bullet}/W(k)}\) by taking the weak completion of \(\Omega^{i}_{P^{\star}/W(k)}\), where \(P^{\star}\) is the Cech-nerve of \(W(k)\to W(k)[T_{1},\dots,T_{d}]\), and using [1, Lemma 2.15, Lemma 2.17]. The same argument implies the statement for \(\Omega^{i}_{P^{\star}_{\delta}/W(k)}\), and then one has to tensor the cosimplicial module \(\Omega^{i}_{F^{\star}_{\delta}/W(k)}\) with the \(F^{\star}_{\delta}\)-module \(B^{\dagger\star}\) to prove the lemma. Each column complex \(M^{\bullet,s}\) is quasi-isomorphic to \(M^{\bullet,0}=\Omega^{\bullet}_{A^{\dagger}/W(k)}\otimes\mathbb{Q}\). The complex \(M^{0,\bullet}\) computes \(\mathbb{A}^{\dagger}_{\overline{A}/W(k)}\otimes\mathbb{Q}\). The total complex \(\operatorname{Tot}(M^{\bullet,\bullet})\) computes the cohomology of the de Rham complex \(\Omega^{\bullet}_{A^{\dagger}/W(k)}\otimes\mathbb{Q}\) by the first spectral sequence associated to the double complex \(M^{\bullet,\bullet}\). On the other hand, by Lemma 2.1, \(\operatorname{Tot}(M^{\bullet,\bullet})\) also computes the cohomology of \(M^{0,\bullet}\) by the second spectral sequence. Hence we have proven: **Theorem 2.2**.: _Let \(\overline{R}\) be a smooth \(k\)-algebra with Monsky-Washnitzer lift \(R^{\dagger}\). Then we have a canonical isomorphism_ \[\mathbb{A}^{\dagger}_{\overline{R}/W(k)}\otimes\mathbb{Q}\cong\Omega^{\bullet}_{R^{\dagger}/W(k)}\otimes\mathbb{Q}\,.\] Now we globalise the above comparison: Let \(\{U_{i}\}\) be an affine covering, \(U_{i}=\operatorname{Spec}A_{i}\subset X\), of \(X\), a smooth \(k\)-scheme. \(R\Gamma(X,\mathbb{A}^{\dagger}_{X/W(k)})\) is the total complex of \[\prod_{i}\mathbb{A}^{\dagger}_{U_{i}/W(k)}\rightrightarrows\prod_{i,j}\mathbb{A}^{\dagger}_{U_{i}\cap U_{j}/W(k)}\stackrel{{\rightarrow}}{{\rightarrow}}\cdots\,.\] On the other hand, we have a commutative diagram where the isomorphisms come from [1, Corollary 3.25]: we have isomorphisms \[\mathbb{A}^{\dagger}_{\operatorname{Spec}\overline{B}/W(k)}\otimes\mathbb{Q}\simeq\Omega^{\bullet}_{B^{\dagger}/W(k)}\otimes\mathbb{Q}\simeq W^{\dagger}\Omega^{\bullet}_{\overline{B}/k}\otimes\mathbb{Q}\] for any smooth \(k\)-algebra \(\overline{B}\) with Monsky-Washnitzer lift \(B^{\dagger}\). This implies the existence of a map \[R\Gamma(X,\mathbb{A}^{\dagger}_{X/W(k)}\otimes\mathbb{Q})=\operatorname{Tot}\biggl{(}\prod_{i}\mathbb{A}^{\dagger}_{U_{i}/W(k)}\otimes\mathbb{Q}\rightrightarrows\prod_{i,j}\mathbb{A}^{\dagger}_{U_{i}\cap U_{j}/W(k)}\otimes\mathbb{Q}\rightrightarrows\cdots\biggr{)}\] \[\longrightarrow\;R\Gamma(X,W^{\dagger}\Omega^{\bullet}_{X/k}\otimes\mathbb{Q})=\operatorname{Tot}\biggl{(}\prod_{i}W^{\dagger}\Omega^{\bullet}_{U_{i}/k}\otimes\mathbb{Q}\rightrightarrows\prod_{i,j}W^{\dagger}\Omega^{\bullet}_{U_{i}\cap U_{j}/k}\otimes\mathbb{Q}\rightrightarrows\cdots\biggr{)}\] due to the sheaf properties of \(\mathbb{A}^{\dagger}_{X/W(k)}\) and \(W^{\dagger}\Omega^{\bullet}_{X/k}\); by construction this map is an isomorphism. **Corollary 2.3**.: _Let \(X\) be a smooth \(k\)-scheme. Then we have an isomorphism_ \[R\Gamma(X,\mathbb{A}^{\dagger}_{X/W(k)}\otimes\mathbb{Q})\cong R\Gamma_{\operatorname{rig}}(X/W(k)\otimes\mathbb{Q})\] _between rational overconvergent prismatic and rigid cohomology._ Proof.: See [11, Theorem 4.40] for the case that \(X\) is quasiprojective, and [10] for the general case.
## 3. Etale comparison of overconvergent prismatic cohomology
Let \((A,d)\) be a perfect prism, \(A/d=\mathcal{O}\), and \(X/\mathcal{O}\) a smooth weak formal scheme with generic fibre \(X_{\eta}=X\times_{\operatorname{Spec}\mathcal{O}}\operatorname{Spec}\mathcal{O}[1/p]\). Then we have **Theorem 3.1**.: _Let \(\mu:(X_{\eta})_{\operatorname{\acute{e}t}}\to X_{\operatorname{\acute{e}t}}\). Then we have a canonical isomorphism_ \[R\mu_{*}\mathbb{Z}/p^{n}\cong\left(\mathbb{A}^{\dagger}_{X/A}[1/d]/p^{n}\right)^{\varphi=1}\,.\] _If \(X=\operatorname{Spf}^{\dagger}S\) is affine with generic fibre \(\operatorname{Spec}S[1/p]\) an affinoid dagger variety, then we have_ \[R\Gamma(\operatorname{Spec}S[1/p],\mathbb{Z}/p^{n})\cong\left(\mathbb{A}^{\dagger}_{S/A}[1/d]/p^{n}\right)^{\varphi=1}\,.\] _Remark 3.2_.: * Let \(\widehat{X}=\operatorname{Spf}\widehat{S}\) be the formal smooth scheme associated to \(\operatorname{Spf}^{\dagger}S\), with generic fibre \(\operatorname{Spec}\widehat{S}[1/p]\), a rigid variety. The etale cohomology of an affinoid dagger variety coincides with the etale cohomology of its associated affinoid variety [14, Proposition 3.5]. Hence the theorem implies that \(\left(\mathbb{A}^{\dagger}_{S/A}[1/d]/p^{n}\right)^{\varphi=1}\) is isomorphic to \(\left(\mathbb{A}_{\widehat{S}/A}[1/d]/p^{n}\right)^{\varphi=1}\). Proof.: Let \(C^{\bullet}\) be the Cech-Alexander complex that computes \(\mathbb{A}^{\dagger}_{S/A}\). We have a cosimplicial object \((C^{\bullet}\to C^{\bullet}/dC^{\bullet}\gets S)\) in \((S/A)^{\dagger}_{\mathbb{A}}\). Each \(C^{i}\) is a dagger prism and is weakly complete with respect to the \((p,d)\)-adic topology: \(C^{i}=(B^{i}_{\delta})^{\dagger}\{\frac{J^{i}}{d}\}\), where \(B^{i}_{\delta}\) is the \((p,d)\)-weakly completed \(\delta\)-\(B^{i}\)-algebra associated to \(B^{i}\) and where \(B^{\bullet}\) is the Cech-nerve of \(A\to B^{0}\), a \((p,d)\)-weakly complete free \(A\)-algebra with kernel \(J^{0}=\ker(B^{0}\to S)\) and \(J^{\star}=\ker(B^{\star}\to S)\). Then \(C^{i}=\varinjlim_{\epsilon\to 0}C^{i}_{\epsilon}\) (as in Lemma 1.1). The family of Gauss norms \(\gamma_{\epsilon}\) defining \(C^{i}_{\epsilon}\) can be chosen in a compatible way on \(C^{\star}\) to get cosimplicial complexes \(C^{\star}_{\epsilon}\) of \((p,d)\)-complete \(A\)-algebras with \(\varinjlim_{\epsilon}C^{\star}_{\epsilon}=C^{\star}\). Then \(C^{\star}_{\epsilon}/p\) is \(d\)-adically complete. Using [10, Lemma 9.2], we see that \(C^{\star}_{\epsilon}/p\in D_{\operatorname{comp}}(\mathcal{O}^{\flat}[F])\) and taking \(\varinjlim_{\epsilon}C^{\star}_{\epsilon}/p\), \(\epsilon>0\), commutes with the functors \(M\mapsto M^{\varphi=1}\) and \(M\mapsto M[1/d]^{\varphi=1}\), hence \[(\mathbb{A}^{\dagger}_{S/A}[1/d]/p)^{\varphi=1}=\varinjlim_{\epsilon\to 0}(C^{\star}_{\epsilon}/p)[1/d]^{\varphi=1}\,.\] Define \(\mathbb{A}^{\dagger}_{S/A,\operatorname{perf}}:=C^{\bullet\dagger,\operatorname{perf}}\) using Definition 0.5. We have \[(\mathbb{A}^{\dagger}_{S/A}[1/d]/p)^{\varphi=1}\xrightarrow[\varphi]{\sim}(\mathbb{A}^{\dagger}_{S/A}/p[1/d])^{\varphi=1}\] hence we get \[(\mathbb{A}^{\dagger}_{S/A}[1/d]/p^{n})^{\varphi=1}\xrightarrow[\varphi]{\sim}(\mathbb{A}^{\dagger}_{S/A,\operatorname{perf}}[1/d]/p^{n})^{\varphi=1}\,.\] Indeed, without loss of generality \(n=1\).
Then \[(\mathbb{A}^{\dagger}_{S/A}[1/d]/p)^{\varphi=1}=\varinjlim_{\epsilon\to 0}(C _{\epsilon}/p[1/d])^{\varphi=1}\] and \[\mathbb{A}^{\dagger}_{S/A,\operatorname{perf}}[1/d]/p)^{\varphi=1}=\varinjlim _{\epsilon\to 0}((C^{\bullet}_{\epsilon}/p)^{\operatorname{perf}}[1/d])^{ \varphi=1}\] where \((C^{\bullet}_{\epsilon}/p)^{\operatorname{perf}}\) is the \(d\)-completed filtered colimit of \(C^{\bullet}_{\epsilon}/p\xrightarrow[\varphi]{\sim}C^{\bullet}_{\epsilon}/p \xrightarrow[\varphi]{\sim}\cdots\). As each map acts trivially on applying \(((-)[1/d])^{\varphi=1}\), [10, Lemma 9.2] implies that \[(\mathbb{A}^{\dagger}_{S/A,\operatorname{perf}}[1/d]/p)^{\varphi=1} =\varinjlim_{\epsilon\to 0}(C^{\bullet}_{\epsilon}/p[1/d])^{ \varphi=1}\] \[=(\mathbb{A}^{\dagger}_{S/A}[1/d]/p)^{\varphi=1}\] as claimed. Let \(S\) be a dagger perfectoid with \(p\)-adic completion \(\widehat{S}\). Consider the composite map \[\alpha\,:\,\mathbb{A}^{\dagger}_{S/A}\to\mathbb{A}_{\widehat{S}/A}\cong W( \widehat{S}^{\flat})\] where the isomorphism is derived from [10, Lemma 4.7]. Since \((W^{\dagger}(S^{\flat})\to W^{\dagger}(S^{\flat})/d=S)\) is an object in \((\operatorname{Spf}^{\dagger}S/A)_{\mathbb{A}}\), there exists a unique map \(\mathbb{A}^{\dagger}_{S/A}\to W^{\dagger}(S^{\flat})\) compatible with \(\mathbb{A}_{S/A}\xrightarrow[\sim]{\sim}W(S^{\flat})\). Then the analogue of [10, Lemma 4.7] also holds: Let \(S\to B/d\) be a map, for a dagger prism \(B\) with completion \(\widehat{B}\) we have a commutative diagram where the lower horizontal arrow is induced by [10, Lemma 4.7] and the upper horizontal arrow is induced by the canonical map \(\mathbb{A}^{\dagger}_{S/A}\to B\) which exists by the universal property of \(\mathbb{A}^{\dagger}_{S/A}\). Hence \(\mathbb{A}^{\dagger}_{S/A}\cong W^{\dagger}(S^{\flat})\). In the following we reduce the proof of Theorem 3.1 to the case that \(S\) is dagger perfectoid. For an affine weak formal scheme \(\operatorname{Spf}S\) with formal completion \(\operatorname{Spf}\widehat{S}\) we have an isomorphism (Remark 3.2) \[F(S):=R\Gamma(\operatorname{Spec}S[1/p],\mathbb{Z}/p^{n})\cong R\Gamma( \operatorname{Spec}\widehat{S}[1/p],\mathbb{Z}/p^{n})=:F(\widehat{S})\] and we write \(S=\varinjlim_{\epsilon}S_{\epsilon}\) where \(S_{\epsilon}\) is \(p\)-adically complete and \(\{S_{\epsilon}\}_{\epsilon}\) defines the dagger structure on \(\operatorname{Spf}\widehat{S}=\operatorname{Spf}S\). Then \[F(S)=\varinjlim_{\epsilon}R\Gamma(\operatorname{Spec}S_{\epsilon}[1/p], \mathbb{Z}/p^{n})=\varinjlim_{\epsilon}F(S_{\epsilon})\,.\] In the proof of [1, Theorem 9.1] a comparison map \[F(-)\to G(-):=(\mathbb{A}_{-/A}[1/d]/p^{n})^{\varphi=1}\] of \(\operatorname{arc}_{p}\)-sheaves is constructed on \(\operatorname{fSch}_{/\operatorname{Spf}\mathcal{O}}\) and shown to be an isomorphism. **Definition 3.3**.: A homomorphism of weakly complete \(\mathcal{O}\)-algebras \(S\to T\) is called an \(\operatorname{arc}_{p}\)-cover if \(T=\varinjlim_{\epsilon}T_{\epsilon}\) has a dagger presentation with \(p\)-complete \(\mathcal{O}\)-algebras \(T_{\epsilon}\) such that \(S_{\epsilon}\to T_{\epsilon}\) is an \(\operatorname{arc}_{p}\)-cover in the sense of [1, Definition 6.14]. Note that \(F\) is an \(\operatorname{arc}_{p}\)-sheaf by [1, Corollary 6.17] on \(\operatorname{Spf}S_{\epsilon}\), hence \(F(S)=\varinjlim_{\epsilon}F(S_{\epsilon})\) is an \(\operatorname{arc}_{p}\)-sheaf on \(\operatorname{Spf}S\). 
Now let again \(C^{\bullet}\) be the Cech-Alexander complex computing \(\mathbb{A}_{S/A}^{\dagger}\) and \(\mathbb{A}_{S/A,\operatorname{perf}}^{\dagger}=C^{\bullet\dagger,\operatorname {perf}}\) as above. Then we have \[\mathbb{A}_{S/A,\operatorname{perf}}^{\dagger}/p^{n} =C^{\bullet\dagger,\operatorname{perf}}/p^{n}\] \[=\varinjlim_{\epsilon\to 0}W^{\dagger}((C_{\epsilon}^{\bullet}/p)^{ \operatorname{perf}})/p^{n}\] \[=\varinjlim_{\epsilon\to 0}W_{n}((C_{\epsilon}^{\bullet}/p)^{ \operatorname{perf}})\] \[=\varinjlim_{\epsilon\to 0}\mathbb{A}_{S_{\epsilon}/A, \operatorname{perf}}/p^{n}\,.\] This implies that \[(\mathbb{A}_{S/A,\operatorname{perf}}^{\dagger}[1/d]/p^{n})^{\varphi=1}= \varinjlim_{\epsilon\to 0}(\mathbb{A}_{S_{\epsilon}/A,\operatorname{perf}}[1/d]/p^{n})^{ \varphi=1}\,.\] Since \((\mathbb{A}_{S_{\epsilon}/A,\operatorname{perf}}[1/d]/p^{n})^{\varphi=1}\) is an \(\operatorname{arc}_{p}\)-sheaf on \(\operatorname{Spf}S_{\epsilon}\), \((\mathbb{A}_{S/A,\operatorname{perf}}^{\dagger}[1/d]/p^{n})^{\varphi=1}\) is an \(\operatorname{arc}_{p}\)-sheaf on \(\operatorname{Spf}S\). One defines the arc-topology for weak formal \(p\)-adic schemes by using \(p\)-adic weakly complete valuation rings of rank \(1\) instead, and shows that there exists an arc-cover \(S\to T\) with \(T\) dagger perfectoid. The proof is entirely similar to [1, SS8] for \(p\)-adic formal schemes. The proof of [1, Proposition 8.10] transfers to affine perfectoid dagger schemes \(\operatorname{Spf}T\); that is, \(H^{i}_{\operatorname{arc}}(\operatorname{Spf}T,\mathcal{O})=0\) and the structure presheaf on \(\operatorname{Spf}T\) is a sheaf. In order to show that the map \[F(S)=\varinjlim_{\epsilon}F(S_{\epsilon})\to(\mathbb{A}_{S/A, \operatorname{perf}}^{\dagger}[1/d]/p^{n})^{\varphi=1}\simeq\varinjlim_{\epsilon \to 0}(\mathbb{A}_{S_{\epsilon}/A,\operatorname{perf}}[1/d]/p^{n})^{\varphi=1}\] is an isomorphism, we can work arc-locally and hence from now on we may assume that \(S\) is dagger perfectoid. We claim that we have an isomorphism \[R\Gamma(\operatorname{Spec}S[1/p],\mathbb{Z}/p^{n})\cong R\Gamma(\operatorname{ Spec}S^{\flat}[1/d],\mathbb{Z}/p^{n})\,. \tag{3.3.1}\] Indeed, we have \[R\Gamma(\operatorname{Spec}S^{\flat}[1/d],\mathbb{Z}/p^{n}) =R\Gamma(\operatorname{Spa}S^{\flat}[1/d],\mathbb{Z}/p^{n})\] \[=R\Gamma(\operatorname{Spa}\widehat{S}^{\flat}[1/d],\mathbb{Z}/p^ {n})\] \[=R\Gamma(\operatorname{Spec}\widehat{S}^{\flat}[1/d],\mathbb{Z}/p^ {n})\] and likewise for \(S\). This follows from the comparison theorem of Huber/Scholze for etale cohomology of rigid/adic spaces. The isomorphism \[R\Gamma(\operatorname{Spa}\widehat{S}[1/p],\mathbb{Z}/p^{n})\cong R\Gamma( \operatorname{Spa}\widehat{S}^{\flat}[1/d],\mathbb{Z}/p^{n})\] follows from [13, Theorem 1.11]. This shows that we have (3.3.1). It remains to check \[R\Gamma(\operatorname{Spec}S^{\flat}[1/d],\mathbb{Z}/p^{n}) =(\mathbb{A}_{S/A}^{\dagger}[1/d]/p^{n})^{\varphi=1}\] \[=(W_{n}(S^{\flat})[1/d])^{\varphi=1}\,.\] By Artin-Schreier-Witt we have \[\mathbb{Z}/p^{n}\cong(W_{n}(S^{\flat})[1/d])^{\varphi=1}\] and since by the dagger analogue of [13, Theorem 4.9, Lemma 4.10] we have \[W_{n}(S^{\flat})[1/d]\xrightarrow{\varphi-1}W_{n}(S^{\flat})[1/d]\] is surjective, we conclude that \(H^{i}(\operatorname{Spa}S^{\flat}[1/d],\mathbb{Z}/p^{n})\) vanishes for \(i>0\). This concludes the proof of Theorem 3.1. ## 4. 
The tilting correspondence for perfectoid dagger algebras Let \(K\) be a perfectoid field, with absolute value \(|\ |\), ring of integers \(\mathcal{O}\) in \(K\), and \(w\) an element with \(|w|=1/p\). Consider the ring \(R=K\langle T_{1}^{1/p^{\infty}},\dots,T_{d}^{1/p^{\infty}}\rangle\) of \(w\)-adically converging power series, so \(z=\sum_{I\in\mathbb{N}[1/p]^{d}}a_{I}\underline{T}^{I}\), \(a_{I}\in K\) is in \(K\langle T_{1}^{1/p^{\infty}},\dots,T_{d}^{1/p^{\infty}}\rangle\) if and only if \(|a_{I}|\to 0\). Then \(K\langle T_{1}^{1/p^{\infty}},\dots,T_{d}^{1/p^{\infty}}\rangle\) is the perfectoidisation of the Tate algebra \(K\langle T_{1},\dots,T_{d}\rangle\), equipped with the Gauss norm \(\|z\|=\max_{I}\{|a_{I}|\}\). Let \(R^{\circ}\subset R\) be the subring with \(a_{I}\in\mathcal{O}\) and let \(\lambda\in\mathcal{O}\), \(|\lambda|<1\), we write \(\lambda=w^{\epsilon}\), \(\epsilon>0\). Then \[\left\{z=\sum_{I\in\mathbb{N}[1/p]^{d}}a_{I}\underline{T}^{I}\,:\,\underset{I }{\lim}\,|a_{I}||\lambda|^{-|I|}=0\right\}\] is for every \(\lambda=w^{\epsilon}\) a Banach algebra \(R_{\epsilon}\) with Gauss norm \(\max\{|a_{I}|w^{\epsilon}|^{-|I|}\}\). We write \[R_{\epsilon}=K\langle(\lambda T_{1})^{1/p^{\infty}},\dots,(\lambda T_{d})^{1/ p^{\infty}}\rangle\,,\] so we have convergence for \(|\lambda T_{i}|\leq 1\). We have an isomorphism \[R_{\epsilon} \to K\langle S_{1}^{1/p^{\infty}},\dots,S_{d}^{1/p^{\infty}}\rangle\] \[T_{i} \mapsto\lambda^{-1}S_{i},\ (\text{so }S_{i}=\lambda T_{i})\,.\] Let \(|\lambda|<|\lambda^{\prime}|<1\), \(\lambda=w^{\epsilon}\), \(\lambda^{\prime}=w^{\epsilon^{\prime}}\). Then we have an inclusion and we obtain a map of power bounded elements \[\mathcal{O}\langle S_{1}^{1/p^{\infty}},\ldots,S_{d}^{1/p^{\infty}}\rangle \to\mathcal{O}\langle{S^{\prime}_{1}}^{1/p^{\infty}},\ldots,{S^{\prime}_{d}}^ {1/p^{\infty}}\rangle\] inducing \(R_{\epsilon}^{\circ}\to R_{\epsilon^{\prime}}^{\circ}\). Define \(S=\varinjlim_{\epsilon}R_{\epsilon}=\bigcup_{\epsilon}R_{\epsilon}\). We define \[S^{\circ}=\left\{f\in S\,:\,\forall\alpha\in\mathcal{O},|\alpha|<1,\exists \lambda=w^{\epsilon}\text{ such that }f\in R_{\epsilon}\text{ and }\alpha f\in R_{\epsilon}^{\circ} \right\}.\] Now let \(K\) be a perfectoid field of characteristic \(0\), \(K^{\flat}\) its tilt (of characteristic \(p>0\)). Let \(\mathcal{O}\), resp. \(\mathcal{O}^{\flat}\), be the ring of integers in \(K\), resp. in \(K^{\flat}\). We construct perfectoid dagger algebras over \(K\), resp. over \(K^{\flat}\), and derive a tilting correspondence. Let \(A/K^{\flat}\) be a reduced affinoid dagger algebra with integral elements \(A^{+}/\mathcal{O}^{\flat}\). We fix a presentation \((K^{\flat})^{\dagger}\langle T_{1},\ldots,T_{d}\rangle\to A\) (and \((\mathcal{O}^{\flat})^{+}\langle T_{1},\ldots,T_{d}\rangle\to A^{+}\)). Define Gauss norms \(\gamma_{\epsilon}\) on \(\widehat{A}\) via the presentation and let \(A_{\epsilon}=\left\{a\in A\,:\,\gamma_{\epsilon}(a)>-\infty\right\}\). \(A_{\epsilon}\) is a \(\varpi\)-adically complete Tate algebra over \(K^{\flat}\), with integral elements \(A_{\epsilon}^{+}\) over \(\mathcal{O}^{\flat}\). Let \(\widehat{A}_{\epsilon}^{\mathrm{perf}}\) be the \(\varpi\)-adically completed perfection, with integral elements \(\widehat{A}_{\epsilon}^{+\mathrm{perf}}\). 
This is a perfectoid \(K^{\flat}\)-algebra with \[\mathrm{Spa}\left(\widehat{A}_{\epsilon}^{\mathrm{perf}},\widehat{A}_{\epsilon}^{+\mathrm{perf}}\right)=\mathrm{Spa}\left(\widehat{A}_{\epsilon},\widehat{A}_{\epsilon}^{+}\right)\] (see [1, Proposition 6.11]). Define \[(A^{\mathrm{perf}},A^{+\mathrm{perf}})=\varinjlim_{\epsilon}(\widehat{A}_{\epsilon}^{\mathrm{perf}},\widehat{A}_{\epsilon}^{+\mathrm{perf}})\,.\] This is a perfectoid dagger algebra over \(K^{\flat}\) with \[\mathrm{Spa}\left(A^{\mathrm{perf}},A^{+\mathrm{perf}}\right)=\mathrm{Spa}\left(\widehat{A},\widehat{A}^{+}\right).\] The chosen presentation extends to a presentation \(R_{\epsilon}\twoheadrightarrow\widehat{A}_{\epsilon}^{\mathrm{perf}}\), \(R_{\epsilon}^{\circ}\twoheadrightarrow\widehat{A}_{\epsilon}^{+\mathrm{perf}}\), and hence \(S=\varinjlim_{\epsilon}R_{\epsilon}\twoheadrightarrow A^{\mathrm{perf}}\), \(S^{\circ}\twoheadrightarrow A^{+\mathrm{perf}}\). Let \((B_{\epsilon},B_{\epsilon}^{+})\) be the untilt of \((\widehat{A}_{\epsilon}^{\mathrm{perf}},\widehat{A}_{\epsilon}^{+\mathrm{perf}})\) under Scholze's tilting correspondence. Then \(\mathrm{Spa}\left(B_{\epsilon},B_{\epsilon}^{+}\right)\) is homeomorphic to its tilt \(\mathrm{Spa}\left(B_{\epsilon},B_{\epsilon}^{+}\right)^{\flat}=\mathrm{Spa}\left(\widehat{A}_{\epsilon}^{\mathrm{perf}},\widehat{A}_{\epsilon}^{+\mathrm{perf}}\right)\)[1, Theorem 6.3]. Define \((B,B^{+})=\varinjlim_{\epsilon}(B_{\epsilon},B_{\epsilon}^{+})\). This is a perfectoid dagger algebra over \(K\). Then we have the analogue of [1, Proposition 6.17]: **Theorem 4.1**.: _There is an equivalence between perfectoid dagger spaces over \(K\) and perfectoid dagger spaces over \(K^{\flat}\), given by associating to a perfectoid dagger space \(X^{\dagger}/K\) its tilt \((X^{\flat})^{\dagger}/K^{\flat}\)._ Proof.: The reduction to the case of an affinoid dagger space is shown in the same way as in [1, Proposition 6.17]. For any affinoid perfectoid dagger algebra \((B,B^{+})\) with tilt \((A^{\mathrm{perf}},A^{+\mathrm{perf}})\), the associated presheaves \(\mathcal{O}_{X}\) on \(X=\mathrm{Spa}(B,B^{+})\) and \(\mathcal{O}_{X^{\flat}}\) on \(X^{\flat}=\mathrm{Spa}(B,B^{+})^{\flat}\), defined by taking direct limits of the restrictions of \(\mathcal{O}_{X_{\epsilon}}\) on \(X_{\epsilon}=\operatorname{Spa}(B_{\epsilon},B_{\epsilon}^{+})\) and \(\mathcal{O}_{X_{\epsilon}^{\flat}}\) on \(X_{\epsilon}^{\flat}=\operatorname{Spa}(B_{\epsilon},B_{\epsilon}^{+})^{\flat}\) to \(X\) resp. \(X^{\flat}\), are sheaves by [13, Proposition 6.14], and [13, Theorem 6.3] holds verbatim for affinoid perfectoid dagger spaces. In particular, we have a homeomorphism \[\operatorname{Spa}(B,B^{+})\cong\operatorname{Spa}(B,B^{+})^{\flat}\,\,(=\operatorname{Spa}(\widehat{B},\widehat{B}^{+})=\operatorname{Spa}(\widehat{B},\widehat{B}^{+})^{\flat})\] which commutes with the homeomorphism between the associated affinoid perfectoid spaces.
For an affinoid perfectoid space the equivalence follows from the above construction, namely \[\operatorname{Spa}\left(B,B^{+}\right)^{\flat}=\operatorname{Spa}\left(A^{ \operatorname{perf}},A^{+\operatorname{perf}}\right).\] The main example is \[\operatorname{Spa}\left(K^{\dagger}\langle T_{1}^{1/p^{\infty}},\dots,T_{d}^{ 1/p^{\infty}}\rangle,\mathcal{O}^{\dagger}\langle T_{1}^{1/p^{\infty}},\dots, T_{d}^{1/p^{\infty}}\rangle\right)\] and its tilt \[\operatorname{Spa}\left(K^{\flat\dagger}\langle T_{1}^{1/p^{\infty}},\dots, T_{d}^{1/p^{\infty}}\rangle,\mathcal{O}^{\flat\dagger}\langle T_{1}^{1/p^{ \infty}},\dots,T_{d}^{1/p^{\infty}}\rangle\right)\] where \(\dagger\) means \(\varpi\)-adic weak completion. ## 5. A dagger version of \(A\Omega\) Let \((A,I)=(A_{\inf}(\mathcal{O}),I=(d))\). Let \(\mathcal{X}/\operatorname{Spf}\mathcal{O}\) be a weak formal smooth scheme and let \(X\) be its generic fibre, a dagger variety. Let \(X_{\operatorname{pro\'{e}t}}\) be the pro-etale site on \(X\), given locally by \[(U_{i+1}\xrightarrow{\text{f\'{e}t}}U_{i}\xrightarrow{\text{f\'{e}t}}\dots \xrightarrow{\text{f\'{e}t}}U_{1}\xrightarrow{\text{\'{e}t}}X)\] where \(U_{i}\) are affinoid dagger varieties, \(U_{i+1}\to U_{i}\) is finite etale for \(i\geq 1\) and \(U_{1}\to X\) is etale. Let \(U=\varprojlim_{i}\operatorname{Spec}U_{i}\). Let \(T_{i}=\varprojlim_{\epsilon}T_{\epsilon,i}\) be the affinoid dagger algebra corresponding to \(U_{i}\), with the notation of SS4. Let \(T_{\epsilon,i}^{\operatorname{perf}}\) be the perfectoidisation arising from the \(T_{\epsilon,i}\), with integral elements \(T_{\epsilon,i}^{\operatorname{o},\operatorname{perf}}\). We have the structure sheaf \(\mathcal{O}(U)=\varprojlim_{\epsilon}T_{\epsilon}^{\operatorname{perf}}\), and \(\mathcal{O}^{+}(U)=\varinjlim_{\epsilon}T_{\epsilon}^{\operatorname{o}, \operatorname{perf}}\). These algebras are weakly complete with respect to the \(p\)-adic topology. Then \((\mathcal{O}(U),\mathcal{O}^{+}(U))\) is a perfectoid dagger algebra over \(\mathbb{C}_{p}\). Let \(Z=\operatorname{Spa}\left(\mathcal{O}(U),\mathcal{O}^{+}(U)\right)\) and let \(Z^{\flat}=\operatorname{Spa}\left(\mathcal{O}(U^{\flat}),\mathcal{O}^{+}(U)\right)\) be its tilt. Let \(v:X_{\operatorname{pro\'{e}t}}\to\mathcal{X}_{\operatorname{Zar}}\) be the canonical map of topoi. We want to define a dagger version of \(A\Omega_{\widehat{X}/\mathcal{O}}\) where \(\widehat{X}\) is the associated formal smooth scheme and \(A\Omega_{\widehat{X}/\mathcal{O}}\) is defined as in [1]. So we first need to give a reasonable definition of \(A_{\inf,X}^{\dagger}=W^{\dagger}(\mathcal{O}_{X}^{\dagger\flat})\). Let us first consider the case \(\mathcal{X}=\operatorname{Spf}^{\dagger}R\) for \(R=\mathcal{O}^{\dagger}\langle T_{1}^{\pm 1},\dots,T_{d}^{\pm 1}\rangle\). We have an isomorphism \[A_{\inf}\langle\underline{U}^{\pm 1/p^{\infty}}\rangle\cong W(\mathcal{O} \langle\underline{T}^{\pm 1/p^{\infty}}\rangle^{\flat})\] of complete \(A_{\inf}\)-algebras [1, p. 70] with respect to the \((p,d)\)-adic topology. 
Define the \((p,d)\)-weakly complete analogue \[A^{\dagger}_{\inf}\langle\underline{U}^{\pm 1/p^{\infty}}\rangle =\varinjlim_{\epsilon\to 0}A_{\inf,\epsilon}\langle\underline{U}^{\pm 1 /p^{\infty}}\rangle\] \[=\varinjlim_{\epsilon\to 0}A_{\inf}\langle(p^{\epsilon} \underline{U})^{\pm 1/p^{\infty}}\rangle\] \[=\varinjlim_{\epsilon\to 0}W^{\dagger}(\mathcal{O}\langle(p^{ \epsilon}\underline{T})^{\pm 1/p^{\infty}}\rangle^{\flat})\] \[=:W^{\dagger}(\mathcal{O}^{\dagger}\langle\underline{T}^{\pm 1/p^{ \infty}}\rangle^{\flat})\,. \tag{5.0.1}\] To be precise, \(A_{\inf}\langle(p^{\epsilon}\underline{U})^{\pm 1/p^{\infty}}\rangle\) are the power series in \(\underline{U}^{\pm 1/p^{\infty}}\) with radius of convergence \(p^{\epsilon}\) in \(A_{\inf}[1/p]\langle\underline{U}^{\pm 1/p^{\infty}}\rangle\). Then \(W^{\dagger}(\mathcal{O}^{\dagger}\langle\underline{T}^{\pm 1/p^{\infty}} \rangle^{\flat})\) consists of Witt vectors \((w_{0},w_{1},\ldots)\) such that there exists \(\epsilon>0\) with \(\inf_{i}\{i+\gamma_{\epsilon}(w_{i})/p^{i}\}>-\infty\). In particular, \(w_{i}\in\mathcal{O}\langle(p^{\epsilon}\underline{T})^{\pm 1/p^{\infty}} \rangle^{\flat}\). For \(R\) etale over \(\mathcal{O}^{\dagger}\langle T^{\pm 1}_{1},\ldots,T^{\pm 1}_{d}\rangle\) we have, using a lifting \(A^{\dagger}(R)\) of \(R\) over \(A_{\inf}\) under the map \(\theta:A_{\inf}\to\mathcal{O}\), \[A^{\dagger}(R)\otimes_{A^{\dagger}_{\inf}\langle\underline{U}^{\pm 1} \rangle}A^{\dagger}_{\inf}\langle\underline{U}^{\pm 1/p^{\infty}}\rangle=A^{ \dagger}_{\inf}(R_{\infty})=:W^{\dagger}((R_{\infty})^{\flat}) \tag{5.0.2}\] where \((R_{\infty})^{\flat}=(R\otimes_{\mathcal{O}^{\dagger}(\underline{T}^{\pm 1})} \mathcal{O}^{\dagger}\langle\underline{T}^{\pm 1/p^{\infty}}\rangle)^{\flat}\). (This is a dagger version of [1, p. 70]). This construction sheafifies (see also the proof of Proposition 5.4 below), namely we have \[W^{\dagger}(\mathcal{O}^{+\flat}_{X})=\varinjlim_{\epsilon\to 0}W^{\dagger}( \mathcal{O}^{+\flat}_{X_{\epsilon}})\] for \(X_{\epsilon}=\operatorname{Spf}R_{\epsilon}\) (such that \(R_{\epsilon}\) is etale over \(\mathcal{O}\langle p^{\epsilon}\underline{T}^{\pm 1}\rangle\) and \(R=\varinjlim_{\epsilon}R_{\epsilon}\)) and global sections equal to \(W^{\dagger}((R_{\infty})^{\flat})\). We define for any \(X=\operatorname{Spec}S\) the sheaf \(A^{\dagger}_{\inf,X}\) on \(X_{\operatorname{pro\acute{e}t}}\) \[A^{\dagger}_{\inf,X}:=W^{\dagger}(\mathcal{O}^{+\flat}_{X})\] by local conditions using a covering by small affines and using that \(A_{\inf,X}\) is a sheaf on \(\widehat{X}_{\operatorname{pro\acute{e}t}}\). **Definition 5.1**.: We define a dagger version of \(A\Omega\) as follows: Let \(\mathcal{X}\) be a weak formal smooth scheme over \(\mathcal{O}\). Then \[A^{\dagger}\Omega_{\mathcal{X}/\mathcal{O}}=L\eta_{\mu}(Rv_{*}A^{\dagger}_{\inf,X})\] where \(L\eta_{\mu}\) is the decalage functor and \(\mu=[\epsilon]-1\in W(\mathcal{O}^{\flat})\). Now let \(\mathcal{X}=\operatorname{Spf}^{\dagger}R\) be small affine as above. 
We have a local-to-global map \[A^{\dagger}\Omega^{\operatorname{pro\acute{e}t}}_{\mathcal{X}/\mathcal{O}}:=L\eta_{\mu}R\Gamma_{\operatorname{pro\acute{e}t}}(\mathcal{X},A^{\dagger}_{\inf,X})\to R\Gamma(\mathcal{X},A^{\dagger}\Omega_{\mathcal{X}/\mathcal{O}})\,.\] **Lemma 5.2**.: _The local-to-global map is a quasi-isomorphism._ Proof.: Since \(L\eta_{\mu}\) commutes with direct limits, we have: \[A^{\dagger}\Omega^{\operatorname{pro\acute{e}t}}_{\mathcal{X}/\mathcal{O}}\cong\varinjlim_{\epsilon}L\eta_{\mu}R\Gamma_{\operatorname{pro\acute{e}t}}(\mathcal{X}_{\epsilon},A^{\dagger}_{\inf,X_{\epsilon}})\] \[\cong\varinjlim_{\epsilon}R\Gamma(\mathcal{X}_{\epsilon},A^{\dagger}\Omega_{\mathcal{X}_{\epsilon}/\mathcal{O}})\] \[\cong R\Gamma(\mathcal{X},A^{\dagger}\Omega_{\mathcal{X}/\mathcal{O}})\] where the middle quasi-isomorphism follows from [1, Proposition 9.14]. Our main theorem for comparing overconvergent prismatic cohomology with \(A^{\dagger}\Omega\) can be formulated as follows: **Theorem 5.3**.: _Let \(\mathcal{X}\) be a smooth weak formal scheme over \(\operatorname{Spf}\mathcal{O}\). Then we have a \(\varphi\)-equivariant quasi-isomorphism_ \[R\Gamma(\mathcal{X},A^{\dagger}\Omega_{\mathcal{X}/\mathcal{O}})\otimes_{A_{\operatorname{inf}}}^{L^{\dagger}}B^{+}_{\operatorname{cris}}\cong\varphi^{*}\left(R\Gamma((\mathcal{X}/A_{\operatorname{inf}})_{\mathbb{A}},\mathcal{O}^{\dagger}_{\mathbb{A}})\right)\otimes_{A_{\operatorname{inf}}}^{L^{\dagger}}B^{+}_{\operatorname{cris}}\] _where \(B^{+}_{\operatorname{cris}}\) is Fontaine's ring._ Proof.: It suffices to show the analogous statement for \(A^{\dagger}\Omega^{\operatorname{pro\acute{e}t}}_{\mathcal{X}/\mathcal{O}}\) where \(\mathcal{X}=\operatorname{Spf}^{\dagger}R\) is small affine. We view \(A_{\operatorname{inf}}(\mathcal{O})=W(\mathcal{O}^{\flat})\) as a perfect \(\mathbb{Z}[\![q-1]\!]\)-algebra, for \(q=[\epsilon]\), \(\epsilon=(1,\zeta_{p},\zeta_{p^{2}},\ldots)\in\mathcal{O}^{\flat}\). The outline of the proof is as follows: We relate \(A^{\dagger}\Omega^{\operatorname{pro\acute{e}t}}_{\mathcal{X}/\mathcal{O}}\) to group cohomology, where the group is \(\Gamma=\mathbb{Z}_{p}(1)^{d}\), the Galois group of the pro-etale extension \(\mathcal{U}/U\). The group cohomology can be computed by a dagger Koszul complex, which can be related to a dagger \(q\)-de Rham complex that computes overconvergent prismatic cohomology. To be precise, let \(R\) be small etale over \(\mathcal{O}^{\dagger}\langle T_{1}^{\pm 1},\ldots,T_{d}^{\pm 1}\rangle\) and \(R_{i}=R\otimes_{\mathcal{O}^{\dagger}\langle\underline{T}^{\pm 1}\rangle}\mathcal{O}^{\dagger}\langle\underline{T}^{\pm 1/p^{i}}\rangle\). Then \(\mathcal{U}=``\varprojlim"\operatorname{Spec}R_{i}[1/p]\) is a Galois cover of \(U=\operatorname{Spec}R[1/p]\). We have an action of \(\mu^{d}_{p^{i}}\) on \(R_{i}[1/p]\) in the usual way: \(\underline{\xi}=(\xi_{1},\ldots,\xi_{d})\in\mu^{d}_{p^{i}}\) acts via \(\underline{\xi}\cdot T_{1}^{i_{1}/p^{i}}\cdots T_{d}^{i_{d}/p^{i}}=\xi_{1}^{i_{1}}\cdots\xi_{d}^{i_{d}}T_{1}^{i_{1}/p^{i}}\cdots T_{d}^{i_{d}/p^{i}}\).
We get a Cartan-Leray spectral sequence \[H^{n}_{\operatorname{cont}}(\mu^{d}_{p^{i}},H^{l}_{\operatorname{pro\acute{e}t}}(U_{i},W^{\dagger}(\mathcal{O}^{+\flat}_{X})))\Rightarrow H^{n+l}_{\operatorname{pro\acute{e}t}}(U,W^{\dagger}(\mathcal{O}^{+\flat}_{X}))\] for \(U_{i}=\operatorname{Spa}R_{i}[1/p]\), and after taking limits over \(i\) we obtain the derived version \[R\Gamma_{\operatorname{gp}}(\mathbb{Z}_{p}(1)^{d},R\Gamma_{\operatorname{pro\acute{e}t}}(\mathcal{U},W^{\dagger}(\mathcal{O}^{+\flat}_{X})))\simeq R\Gamma_{\operatorname{pro\acute{e}t}}(U,W^{\dagger}(\mathcal{O}^{+\flat}_{X}))\,.\] We have the following proposition, which is an almost purity version for overconvergent Witt vectors: **Proposition 5.4**.: * \(H^{i}_{\operatorname{pro\acute{e}t}}(\mathcal{U},W^{\dagger}(\mathcal{O}^{+\flat}_{X}))\) _is almost zero, killed by_ \(W(\mathfrak{m}^{\flat})\)_, for_ \(i>0\)_._ * \(H^{0}_{\operatorname{pro\acute{e}t}}(\mathcal{U},W^{\dagger}(\mathcal{O}^{+\flat}_{X}))=W^{\dagger}((R_{\infty})^{\flat})\) _where_ \((R_{\infty})^{\flat}\) _is the dagger tilt of the perfectoid dagger algebra_ \(R_{\infty}\)_._ Proof.: As before, let \(W^{\dagger}(\mathcal{O}^{+\flat}_{X})=\varinjlim_{\epsilon\to 0}W^{\dagger}(\mathcal{O}^{+\flat}_{X_{\epsilon}})\) for \(\mathcal{X}_{\epsilon}=\operatorname{Spf}R_{\epsilon}\). By [1, 4.10, 5.11] we see that \(H^{i}_{\operatorname{pro\acute{e}t}}(\mathcal{U},W(\mathcal{O}^{+\flat}_{X_{\epsilon}}))\) is almost zero for \(i>0\) and \(H^{0}_{\operatorname{pro\acute{e}t}}(\mathcal{U},W(\mathcal{O}^{+\flat}_{X_{\epsilon}}))=W(R^{\flat}_{\infty,\epsilon})\). We need the same assertions for the overconvergent Witt vectors. Now let \(S^{\circ}=R^{\flat}_{\infty,\epsilon}\) be as above, which is a perfectoid algebra over \(\mathcal{O}_{K^{\flat}}\), and \(S=R_{\infty,\epsilon}[1/p]^{\flat}\), which is a perfectoid \(K^{\flat}\)-algebra. Let \(S^{\prime}/S\) be finite etale of degree \(n\); then \(S^{\prime}\) is a perfectoid \(K^{\flat}\)-algebra and \(S^{\prime\circ}\), the integral closure of \(S^{\circ}\) in \(S^{\prime}\), is almost finite etale over \(S^{\circ}\) by [1, Proposition 5.23]. By [1, Corollary 2.46], \(W^{\dagger}(S^{\prime})\) is finite etale over \(W^{\dagger}(S)\) of degree \(n\). The proof in [1], which is given for a finite etale extension of a finitely generated algebra over a perfect field, easily transfers to perfectoid algebras. Under different assumptions the result is also proved in [1]. Let \(X^{\flat}_{\epsilon}=\operatorname{Spa}(\mathcal{O}^{\flat}_{X_{\epsilon}},\mathcal{O}^{+\flat}_{X_{\epsilon}})\). We will show that \(W^{\dagger}(\mathcal{O}^{\flat}_{X_{\epsilon}})\) is a sheaf on \(X^{\flat}_{\epsilon}\) for the topology generated by rational subdomains. Moreover, for any etale covering \(\{U_{\epsilon,i}\}\) of \(X^{\flat}_{\epsilon}\), where \(U_{\epsilon,i}\) is a rational subdomain of the source of a finite etale map to a rational subdomain of \(X^{\flat}_{\epsilon}\), using etale acyclicity of the sheaf \(\mathcal{O}^{\flat}_{X_{\epsilon}}\) ([13, Proposition 7.13]), the total complex associated to the cosimplicial complex \[0\to\varinjlim_{\epsilon}W^{\dagger}(\mathcal{O}(X^{\flat}_{\epsilon}))\to\prod_{i}\varinjlim_{\epsilon}W^{\dagger}(\mathcal{O}(U_{\epsilon,i}))\rightrightarrows\prod_{i,j}\varinjlim_{\epsilon}W^{\dagger}(\mathcal{O}(U_{\epsilon,i}\times U_{\epsilon,j}))\stackrel{{\rightarrow}}{{\rightarrow}}\cdots \tag{5.4.1}\] is exact.
Indeed, let \(S_{\epsilon}=(R_{\infty,\epsilon}[1/p])^{\flat}\), a perfectoid \(K^{\flat}\)-algebra, and let \(S^{\prime}_{\epsilon}\) be finite etale, again a perfectoid \(K^{\flat}\)-algebra. We may assume that there are surjections (with the notation of §4) \[\rho_{\epsilon}:K^{\flat}\langle w^{\epsilon}T_{1}^{1/p^{\infty}},\ldots,w^{\epsilon}T_{d}^{1/p^{\infty}}\rangle\twoheadrightarrow S_{\epsilon}\] and likewise for \(S^{\prime}_{\epsilon}\). Define finite Gauss norms \(\gamma_{S_{\epsilon}}\) on \(S_{\epsilon}\), resp. \(\gamma_{S^{\prime}_{\epsilon}}\) on \(S^{\prime}_{\epsilon}\), as follows: let \(w^{\epsilon}t_{i}\) be the image of \(w^{\epsilon}T_{i}\) in \(S_{\epsilon}\) and define \(\gamma^{\rho_{\epsilon}}(z)=\inf_{I}(v(a_{I})-\epsilon|k_{I}|)>-\infty\) for \(z=\sum_{k_{I}\in\mathbb{N}[1/p]^{d}}a_{I}(w^{\epsilon}t_{I})^{k_{I}}\in S_{\epsilon}\) and \(|k_{I}|=\sum_{i}k_{i}\), and then \(\gamma_{S_{\epsilon}}(z):=\sup_{\rho_{\epsilon}}\gamma^{\rho_{\epsilon}}(z)\), and likewise for \(S^{\prime}_{\epsilon}\). Now let \(U_{\epsilon}\) be a rational subdomain of \(\operatorname{Spa}S^{\prime}_{\epsilon}\), hence \[U_{\epsilon}=\operatorname{Spa}S^{\prime}_{\epsilon}\langle(w^{\epsilon}Y_{1})^{1/p^{\infty}},\ldots,(w^{\epsilon}Y_{s})^{1/p^{\infty}}\rangle/\langle f_{i}-gY_{i}\rangle\] \[=\operatorname{Spa}S^{\prime}_{\epsilon}\left\langle\frac{(w^{\epsilon}f_{1})^{1/p^{\infty}},\ldots,(w^{\epsilon}f_{s})^{1/p^{\infty}}}{g^{1/p^{\infty}}}\right\rangle\] \[=\operatorname{Spa}S^{\prime}_{U_{\epsilon}}\,.\] Define the Gauss norms \(\gamma_{S^{\prime}_{U_{\epsilon}}}\) by adding a degree valuation for each new variable \(Y_{i}\). It then follows from the techniques in [14, §1] that the restriction of \(\gamma_{S^{\prime}_{U_{\epsilon}}}\) to \(S_{\epsilon}\) is linearly equivalent to \(\gamma_{S_{\epsilon}}\), that is, there exists \(c>0\) such that \[\gamma_{S^{\prime}_{U_{\epsilon}}}(z)\geq\gamma_{S_{\epsilon}}(z)\geq c\gamma_{S^{\prime}_{U_{\epsilon}}}(z)\] for all \(z\in S_{\epsilon}\). We may assume that \(\gamma_{S^{\prime}_{U_{\epsilon}}}(z)\) is negative, so that \(c>1\). Then \(\frac{1}{c}\gamma_{S_{\epsilon}}(z)\geq\gamma_{S^{\prime}_{U_{\epsilon}}}(z)\), but from the definitions we have the inequality \(\gamma_{S_{\epsilon/c}}(z)\geq\frac{1}{c}\gamma_{S_{\epsilon}}(z)\) and hence \(\gamma_{S_{\epsilon/c}}(z)\geq\gamma_{S^{\prime}_{U_{\epsilon}}}(z)\) for all \(z\in S_{\epsilon}\). Now let \((z_{0},z_{1},\ldots)\in W(S_{\epsilon})\) be a Witt vector such that its image lies in \(W^{\dagger}(S^{\prime}_{U_{\epsilon}})\). Then \(\varinjlim_{i}\left(i+\frac{\gamma_{S^{\prime}_{U_{\epsilon}}}(z_{i})}{p^{i}}\right)=\infty\), hence \((z_{0},z_{1},\ldots)\in W^{\dagger}(S_{\epsilon/c})\). To show acyclicity in higher degrees it suffices to consider the two cases: 1. \(U_{\epsilon}=\operatorname{Spa}S^{\prime}_{\epsilon}\) finite etale faithfully flat over \(\operatorname{Spa}S_{\epsilon}\). Then the complex (5.4.1) associated to the acyclic complex \[0\to S_{\epsilon}\to S^{\prime}_{\epsilon}\rightrightarrows S^{\prime}_{\epsilon}\otimes_{S_{\epsilon}}S^{\prime}_{\epsilon}\stackrel{{\rightarrow}}{{\rightarrow}}S^{\prime}_{\epsilon}\otimes_{S_{\epsilon}}S^{\prime}_{\epsilon}\otimes_{S_{\epsilon}}S^{\prime}_{\epsilon}\cdots\] is acyclic because \(W^{\dagger}(S^{\prime}_{\epsilon})\) is finite etale and faithfully flat over \(W^{\dagger}(S_{\epsilon})\). 2. \(\{U_{\epsilon,i}\}\) is a covering by rational subdomains of \(\operatorname{Spa}S_{\epsilon}\).
Then by using the same reduction argument as in [1, Chapter 8.2], it suffices to consider Laurent coverings and then by an induction argument the case that \(\operatorname{Spa}S_{\epsilon}\) is covered by \(\operatorname{Spa}S_{\epsilon}\langle w^{\epsilon}f\rangle\) and \(\operatorname{Spa}S_{\epsilon}\langle w^{\epsilon}f^{-1}\rangle\) for some \(f\in S_{\epsilon}\). In this case (5.4.1) becomes (5.4.2) \[0\to\varinjlim_{\epsilon\to 0}W^{\dagger}(S_{\epsilon}))\to \varinjlim_{\epsilon}(W^{\dagger}(S_{\epsilon}\langle(w^{\epsilon} f)^{1/p^{\infty}}\rangle\times W^{\dagger}(S_{\epsilon}\langle(w^{ \epsilon}f^{-1})^{1/p^{\infty}}\rangle)\] \[\to\varinjlim_{\epsilon}W^{\dagger}(S_{\epsilon}\langle(w^{ \epsilon}f)^{1/p^{\infty}},(w^{\epsilon}f^{-1})^{1/p^{\infty}}\rangle)\to 0\,.\] Let \(S^{\dagger}\langle f^{1/p^{\infty}}\rangle=\varinjlim_{\bullet}S_{\epsilon} \langle(w^{\epsilon}f)^{1/p^{\infty}}\rangle\) be the corresponding perfectoid dagger algebra. Then \[W^{\dagger}(S_{\epsilon}\langle(w^{\epsilon}f)^{1/p^{\infty}} \rangle)=\{z=(z_{0},z_{1},\ldots)\in W(S^{\dagger}\langle f^{1/p^{\infty}} \rangle)\,|\,\forall i:\,z_{i}\in S_{\epsilon}\langle(w^{\epsilon}f)^{1/p^{ \infty}}\rangle\\ \text{ and }\varinjlim_{i}\left(\frac{\gamma_{S_{\epsilon}((w^{ \epsilon}f)^{1/p^{\infty}}}(z_{i})}{p^{i}}+i\right)=\infty\right).\] Likewise, \(W^{\dagger}(S_{\epsilon}\langle(w^{\epsilon}f^{-1})^{1/p^{\infty}}\rangle)\) and \(W^{\dagger}(S_{\epsilon}\langle(w^{\epsilon}f)^{1/p^{\infty}},(w^{\epsilon}f ^{-1})^{1/p^{\infty}}\rangle)\) are defined. Let \(\widehat{S}\langle f^{1/p^{\infty}}\rangle\) be the \(w\)-adic completion of \(S^{\dagger}\langle f^{1/p^{\infty}}\rangle\). Under the isomorphism \[W(\widehat{S}\langle f^{1/p^{\infty}}\rangle)=W(\widehat{S})\langle[f]^{1/p^{ \infty}}\rangle\] (where \(\langle\,\cdot\,\rangle\) denotes \(p\)-\(d\)-adic completion), let \[R(f):=W^{\dagger}(S_{\epsilon})\langle[w^{\epsilon}f]^{1/p^{\infty}}\rangle\] be the image of \(W^{\dagger}(S_{\epsilon}\langle[w^{\epsilon}f]^{1/p^{\infty}}\rangle)\). Likewise, for a variable \(\eta\), under the isomorphism \[W(\widehat{S}\langle\eta^{1/p^{\infty}}\rangle)=W(\widehat{S})\langle[\eta]^{ 1/p^{\infty}}\rangle\] let \[R(\eta):=W^{\dagger}(S_{\epsilon})\langle[w^{\epsilon}\eta]^{1/p^{\infty}}\rangle\] be the image of \(W^{\dagger}(S_{\epsilon}\langle[w^{\epsilon}\eta]^{1/p^{\infty}}\rangle)\). Similarly for variables \(\xi,\eta\) \[R(\xi,\eta):=W^{\dagger}(S_{\epsilon})\langle[w^{\epsilon}\xi]^{1/p^{\infty}},[w^{\epsilon}\eta]^{1/p^{\infty}}\rangle\] and \[R(\xi,\xi^{-1}):=W^{\dagger}(S_{\epsilon})\langle[w^{\epsilon}\xi]^{1/p^{ \infty}},[w^{\epsilon}\xi^{-1}]^{1/p^{\infty}}\rangle\] are defined. Now consider the following commutative diagram (5.4.3) \(\lambda\) is the map \((h_{1}(w^{\epsilon}\xi),h_{2}(w^{\epsilon}\eta))\mapsto(h_{1}(w^{\epsilon} \xi)-h_{2}(w^{\epsilon}\xi^{-1}))\), \(\lambda^{\prime}\) is induced by \(\lambda\), the vertical maps are the canonical ones given by \([w^{\epsilon}\xi]^{k}\mapsto[w^{\epsilon}f]^{k}\) and \([w^{\epsilon}\eta]^{k}\mapsto[w^{\epsilon}f^{-1}]^{k}\) for \(k\in\mathbb{Z}_{\geq 0}[1/p]\). \(I(\xi)\) denotes the \((p,d)\)-completed ideal in \(R(\xi)\) generated by \([w^{\epsilon}\xi]^{k}-[w^{\epsilon}f]^{k}\), \(k\in\mathbb{Z}_{\geq 0}[1/p]\) and, likewise, \(I(\eta)\) is the \((p,d)\)-completed ideal in \(R(\eta)\) generated by \(1-[f]^{k}[\eta]^{k}\), \(k\in\mathbb{Z}_{\geq 0}[1/p]\). It is then clear that the second column is exact. 
Now \[R(f,f^{-1})=R(\xi,\eta)/\langle([w^{\epsilon}\xi]^{k})-[w^{\epsilon}f]^{k}),(1-[f] ^{k}[\eta]^{k})\rangle_{k\in\mathbb{Z}_{\geq 0}[1/p]}\] and since the ideal \[\langle([w^{\epsilon}\xi]^{k})-[w^{\epsilon}f]^{k}),(1-[f]^{k}[\eta]^{k}) \rangle_{k\in\mathbb{Z}_{\geq 0}[1/p]}\] coincides with the ideal \[\langle([w^{\epsilon}\xi]^{k})-[w^{\epsilon}f]^{k}),(1-[\xi]^{k}[\eta]^{k}) \rangle_{k\in\mathbb{Z}_{\geq 0}[1/p]}\] we obtain \[R(f,f^{-1})=R(\xi,\xi^{-1})/I(\xi)_{R(\xi,\xi^{-1})}\] and hence the third column is exact too. (Note that \(I(\xi)_{R(\xi,\xi^{-1})}\) is the \((p,d)\)-completed ideal in \(R(\xi,\xi^{-1})\) generated by \([w^{\epsilon}\xi]^{k}-[w^{\epsilon}f]^{k}\) for \(k\in\mathbb{Z}_{\geq 0}[1/p]\). The equations \[R(\xi,\xi^{-1})=R(\xi)+\langle w^{\epsilon},\xi^{-1})^{1/p^{\infty}}R(\xi^{-1}) \tag{5.4.4}\] and \[I(\xi)_{R(\xi,\xi^{-1})}=I(\xi)_{R(\xi)}+\langle 1-[f]^{k}[\xi^{-1}]^{k} \rangle_{k\in\mathbb{Z}_{\geq 0}[1/p]}R(\xi^{-1})\] show the surjectivity of \(\lambda\) and \(\lambda^{\prime}\), and the exactness of the first row in (5.4.3). The second row in (5.4.3) is easily seen to be exact too. We already know that after taking \(\varinjlim_{\epsilon}\) the last row is exact in the middle. But a simple diagram chase in (5.4.3) yields that the last row is already exact before taking \(\varinjlim_{\epsilon}\). Hence the complex (5.4.1) is acyclic. Hence \(H^{i}_{\text{\'{e}t}}(X^{\flat},W^{\dagger}(\mathcal{O}^{\flat}_{X}))=H^{i}_{ \text{\'{e}t}}(\mathcal{U},W^{\dagger}(\mathcal{O}^{\flat}_{X}))=0\) for \(i>0\). To obtain an almost vanishing result for the cohomology of the sheaves \(W^{\dagger}(\mathcal{O}^{\sharp\flat}_{X_{\epsilon}})\) we need an appropriate definition of almost finite etaleness for \(W(\mathcal{O}^{\flat})\)-modules. **Definition 5.5**.: * Let \(R\) be a \(W(\mathcal{O}^{\flat})\)-algebra and \(N\) an \(R\)-module. Then \(N\) is uniformly finitely generated if there exists some integer \(n\) such that for all \(\epsilon\in W(\mathfrak{m}^{\flat})\) there exists an \(R\)-module \(N_{\epsilon}\), finitely generated by \(n\) elements, and a map \(f_{\epsilon}:N_{\epsilon}\to N\) such that the kernel and cokernel of \(f_{\epsilon}\) are annihilated by \(\epsilon\). * Let \(R\) be a \(W(\mathcal{O}^{\flat})\)-algebra and \(N\) an \(R\)-module. Then \(N\) is almost projective over \(R\) if for all \(R\)-modules \(X\) and all \(i>0\), \(\text{Ext}^{i}_{R}(N,X)\) is almost zero (i.e. annihilated by \(W(\mathfrak{m}^{\flat})\)). * A morphism of \(W(\mathcal{O}^{\flat})\)-modules \(A\to B\) is almost unramified if \(A\otimes_{W(\mathcal{O}^{\flat})}W(K^{\flat})\to B\otimes_{W(\mathcal{O}^{ \flat})}W(K^{\flat})\) is unramified and the corresponding idempotent \(e\in(B\otimes_{W(\mathcal{O}^{\flat})}W(K^{\flat}))\otimes_{A\otimes_{W( \mathcal{O}^{\flat})}W(K^{\flat})}(B\otimes_{W(\mathcal{O}^{\flat})}W(K^{ \flat}))\) defines an almost element in \(B\otimes_{A}B\) : that is \(e\) lies in \(\text{Hom}(W(\mathfrak{m}^{\flat}),B\otimes_{A}B)\) under the map \(\epsilon\mapsto\epsilon e\). * A morphism \(A\to B\) of \(W(\mathcal{O}^{\flat})\)-modules is almost finite etale if it is almost unramified, almost projective and uniformly almost finitely presented. **Proposition 5.6**.: _The assumptions are as above, so \(S^{\prime}/S\) is finite etale of degree \(n\), \(S^{\prime\circ}/S^{\circ}\) is almost finite etale. 
Then \(W^{\dagger}(S^{\prime\circ})\) is almost finite etale over \(W^{\dagger}(S^{\circ})\)._ Proof.: Let \(\overline{e}\in S^{\prime}\otimes_{S}S^{\prime}\) be the idempotent showing that \(S^{\prime}\) is unramified over \(S\). Then for all \(\overline{\epsilon}\in\mathfrak{m}^{\flat}\)\(\overline{\epsilon}\cdot\overline{e}\in S^{\prime\circ}\otimes_{S^{\circ}}S^{\prime\circ}\) (see [12, Proposition 5.23]). Then \(e:=[\overline{e}]\in W^{\dagger}(S^{\prime}\otimes_{S}S^{\prime})=W^{\dagger}(S ^{\prime})\otimes_{W^{\dagger}(S)}W^{\dagger}(S^{\prime})\) is an idempotent showing unramifiedness of \(W^{\dagger}(S^{\prime})\) over \(W^{\dagger}(S)\). Then for all \(\epsilon\in W(\mathfrak{m}^{\flat})\), \(\epsilon=(\epsilon_{0},\epsilon_{1},\epsilon_{2},\ldots)\) we have \(\epsilon\cdot e=\epsilon[\overline{\epsilon}]=(\epsilon_{0}\overline{e}, \epsilon_{1}\overline{e},\epsilon_{2}\overline{e},\ldots)\) is an element in \(W^{\dagger}(S^{\prime\circ}\otimes_{S^{\circ}}S^{\prime\circ})\). Write \[e=\left[\sum_{\begin{subarray}{c}i=1\\ j=1\end{subarray}}^{n}\lambda_{ij}(x_{i}\otimes x_{j})\right]\] for an \(S\)-basis \(x_{1},\ldots,x_{n}\) of \(S^{\prime}\). Choose \(u\in\mathfrak{m}^{\flat}\) such that \(ux_{j}=y_{j}\in S^{\prime\circ}\) for all \(j\). Then \[e=\left[\sum_{\begin{subarray}{c}i=1\\ j=1\end{subarray}}^{n}\lambda_{ij}(x^{\prime}_{i}\otimes y_{j})\right]\] with \(x^{\prime}_{i}=x_{i}/u\). Using [10, Lemma A.9] we can also write - for each \(z\in S^{\prime}\otimes_{S}S^{\prime}\) and \(l\in\mathbb{N}\): \[z=\sum_{i=1}^{n}\xi^{(l)}_{ij}(x_{i}\otimes x_{j}^{p^{l}})\] and hence \[z=\sum_{i=1}^{n}\xi^{(l)}_{ij}x^{\prime}_{i,l}\otimes y_{j}^{p^{l}}\] for \(x^{\prime}_{i,l}=x_{i}/u^{p^{l}}\). We have \(y_{j}^{p^{l}}\in S^{\prime\circ}\) for all \(l\), by construction then, in \(W^{\dagger}(S^{\prime})\otimes_{W^{\dagger}(S)}W^{\dagger}(S^{\prime})\), we can write \(e\) as follows: \[e=\sum_{i,j}[\lambda_{ij}(x^{\prime}_{i})]\otimes[y_{j}]-\sum_{i,j}[0,m^{(1)} _{ij},m^{(2)}_{ij},\ldots] \tag{5.6.1}\] where \(m^{(l)}_{ij}=\xi^{(l)}_{ij}[x^{\prime}_{i,l}\otimes y_{j}^{p^{l}}]\) with uniquely determined \(\xi^{(l)}_{ij}\in S\). Hence we can write \[e=\sum_{i,j}[\lambda_{ij}(x^{\prime}_{i})]\otimes[y_{j}]-\sum_{i,j}[0,\xi^{(1 )}_{ij}x^{\prime}_{i,1},\xi^{(2)}_{ij}x^{\prime}_{i,2},\ldots]\otimes[y_{j}] \tag{5.6.2}\] as an element in \(W^{\dagger}(S^{\prime})\otimes_{W^{\dagger}(S)}W^{\dagger}(S^{\prime})\). For \(\epsilon_{0}\in\mathfrak{m}^{\flat}\), \([\epsilon_{0}]e\in W^{\dagger}(S^{\prime\circ}\otimes_{S^{\circ}}S^{\prime \circ})\) as above. The uniqueness of the representation (5.6.2) of \(e\) implies that \(\epsilon_{0}\lambda_{ij}x^{\prime}_{i}\in S^{\prime\circ}\) for all \(i,j\) and likewise \([\epsilon_{0}][0,\xi^{(1)}_{ij}x^{\prime}_{i,1},\xi^{(2)}_{ij}x^{\prime}_{i,2 },\ldots]\in W^{\dagger}(S^{\prime\circ})\) and therefore \([\epsilon_{0}]e\in W^{\dagger}(S^{\prime\circ})\otimes_{W^{\dagger}(S^{\circ})} W^{\dagger}(S^{\prime\circ})\). We also see that \(V^{s}[\epsilon_{s}]e=p^{s}[\epsilon_{s}]^{1/p^{s}}e\in W^{\dagger}(S^{\prime \circ})\otimes_{W^{\dagger}(S^{\circ})}W^{\dagger}(S^{\prime\circ})\) for all \(\epsilon_{s}\in\mathfrak{m}^{\flat}\), and therefore for all \(\epsilon\in W(\mathfrak{m}^{\flat})\) we have \(\epsilon\cdot e\in W^{\dagger}(S^{\prime\circ})\otimes_{W^{\dagger}(S^{\circ})} W^{\dagger}(S^{\prime\circ})\) and therefore \(W^{\dagger}(S^{\prime\circ})\) is almost unramified over \(W^{\dagger}(S^{\circ})\). 
Let \(\epsilon\in W(\mathfrak{m}^{\flat})\) and \(\epsilon\cdot e=\sum_{i=1}^{r}a_{i}\otimes b_{i}\) for \(a_{i},b_{i}\in W^{\dagger}(S^{\prime\circ})\) (so in the following \(\epsilon\) is fixed). Consider the map \[W^{\dagger}(S^{\prime\circ})\to W^{\dagger}(S^{\circ})^{n},\quad s\mapsto(\text{Tr}_{W^{\dagger}(S^{\prime})/W^{\dagger}(S)}(s,b_{1}),\ldots,\text{Tr}_{W^{\dagger}(S^{\prime})/W^{\dagger}(S)}(s,b_{n}))\] and the map \(W^{\dagger}(S^{\circ})^{n}\to W^{\dagger}(S^{\prime\circ})\) defined by \((r_{1},\ldots,r_{n})\mapsto\sum_{i=1}^{n}a_{i}r_{i}\). As in the proof of [13, Proposition 5.23] one easily checks that the composite map \(W^{\dagger}(S^{\prime\circ})\to W^{\dagger}(S^{\circ})^{n}\to W^{\dagger}(S^{\prime\circ})\) is multiplication by \(\epsilon\). This shows that \(W^{\dagger}(S^{\prime\circ})\) is a uniformly almost finitely presented almost projective \(W^{\dagger}(S^{\circ})\)-module and proves Proposition 5.6. Now consider the sequence (5.4.1) at the "integral level", namely the total complex associated to the simplicial complex using the presheaves \(\mathcal{O}^{+\flat}_{X_{\epsilon}}\), resp. \(W^{\dagger}(\mathcal{O}^{+\flat}_{X_{\epsilon}})\). We have **Lemma 5.7**.: _The total complex associated to the simplicial complex_ \[0\to\varinjlim_{\epsilon}W^{\dagger}(\mathcal{O}^{+\flat}_{X_{\epsilon}})(X_{\epsilon}^{\flat})\to\prod_{i}\varinjlim_{\epsilon}W^{\dagger}(\mathcal{O}^{+\flat}_{X_{\epsilon}})(U_{i})\rightrightarrows\prod_{i,j}\varinjlim_{\epsilon}W^{\dagger}(\mathcal{O}^{+\flat}_{X_{\epsilon}})(U_{i}\times U_{j})\rightrightarrows\cdots \tag{5.7.1}\] _has cohomology killed by \(W(\mathfrak{m}^{\flat})\)._ Proof.: Again we consider the two cases 1. \(U_{\epsilon}=\operatorname{Spa}S^{\prime}_{\epsilon}\) finite etale faithfully flat over \(\operatorname{Spa}S_{\epsilon}\). Since \(W^{\dagger}(S^{\prime}_{\epsilon})\) is finite etale over \(W^{\dagger}(S_{\epsilon})\), the canonical map \[W^{\dagger}(S^{\prime}_{\epsilon})\otimes_{W^{\dagger}(S_{\epsilon})}W^{\dagger}(S^{\prime}_{\epsilon})\to W^{\dagger}(S^{\prime}_{\epsilon}\otimes_{S_{\epsilon}}S^{\prime}_{\epsilon})\] defined by \[(\sum_{i\geq 0}p^{i}[x_{i}]^{1/p^{i}})\otimes(\sum_{j\geq 0}p^{j}[x_{j}]^{1/p^{j}})\mapsto\sum_{i,j\geq 0}p^{i+j}[x_{i}^{1/p^{i}}\otimes x_{j}^{1/p^{j}}]\] for \(x_{i},x_{j}\in S^{\prime}_{\epsilon}\), is an isomorphism. Proposition 5.6 implies that the same map, defined at integral level, \[W^{\dagger}(S^{\prime\circ}_{\epsilon})\otimes_{W^{\dagger}(S^{\circ}_{\epsilon})}W^{\dagger}(S^{\prime\circ}_{\epsilon})\to W^{\dagger}(S^{\prime\circ}_{\epsilon}\otimes_{S^{\circ}_{\epsilon}}S^{\prime\circ}_{\epsilon})\] has kernel and cokernel killed by \(W(\mathfrak{m}^{\flat})\). Moreover, Proposition 5.6 implies that the complex (5.7.1) associated to the acyclic complex \[0\to S_{\epsilon}\to S^{\prime}_{\epsilon}\rightrightarrows S^{\prime}_{\epsilon}\otimes_{S_{\epsilon}}S^{\prime}_{\epsilon}\rightrightarrows\cdots,\] namely \[0\to W^{\dagger}(S^{\circ}_{\epsilon})\to W^{\dagger}(S^{\prime\circ}_{\epsilon})\rightrightarrows W^{\dagger}(S^{\prime\circ}_{\epsilon}\otimes_{S^{\circ}_{\epsilon}}S^{\prime\circ}_{\epsilon})\rightrightarrows\cdots\] has cohomology killed by \(W(\mathfrak{m}^{\flat})\).
Indeed, for \(\kappa\in W(\mathfrak{m}^{\flat})\), let \(\lambda_{\kappa}:W^{\dagger}(S^{\circ}_{\epsilon})^{n}\to W^{\dagger}(S^{ \prime\circ}_{\epsilon})\) be the map considered in the proof of Proposition 5.6 such that the cohomology of the cone of \(\lambda_{\kappa}\) is killed by \(\kappa\). The corresponding simplicial complex for \(M:=W^{\dagger}(S^{\circ}_{\epsilon})^{n}\) \[0\to W^{\dagger}(S^{\circ}_{\epsilon})\to M\rightrightarrows M \otimes_{W^{\dagger}(S^{\circ}_{\epsilon})}M\rightrightarrows\cdots\] is obviously acyclic, hence the cohomology of the complex (5.7.1) is killed by \(\kappa\). Since this holds for all \(\kappa\), the claim follows. 2. Now let \(\{U_{\epsilon,i}\}\) be a covering by rational subdomains of \(\operatorname{Spa}S_{\epsilon}\). Using again the same reduction argument as in [1, SS8.2] it suffices to consider Laurent coverings and then by induction the case \(\operatorname{Spa}S_{\epsilon}=\operatorname{Spa}S_{\epsilon}\langle w^{ \epsilon}f\rangle\cup\operatorname{Spa}S_{\epsilon}\langle w^{\epsilon}f^{-1}\rangle\) for some \(f\in S_{\epsilon}\). In the limit we have the covering \(\operatorname{Spa}S^{\dagger}=\operatorname{Spa}S^{\dagger}\langle f\rangle \cup\operatorname{Spa}S^{\dagger}\langle 1/f\rangle\) where \(S^{\dagger}\) is a perfectoid dagger algebra. Since we consider integral elements we can use the argument in [1, Lemma 6.4] to assume that \(f\in S^{\dagger,\circ}\) at the expense of writing \(\operatorname{Spa}S^{\dagger}=\operatorname{Spa}S^{\dagger}\langle f\rangle \cup\operatorname{Spa}S^{\dagger}\langle w^{N}/f\rangle\) for some \(N\). The Laurent covering does not change. Then, to show Lemma 5.7, we need to show that the sequence \[0\to\varinjlim_{\epsilon\to 0}W^{\dagger}(S^{\circ}_{\epsilon})\to \varinjlim_{\epsilon\to 0}W^{\dagger}(S^{\circ}_{\epsilon}\langle(w^{ \epsilon}f)^{1/p^{\infty}}\rangle)\times W^{\dagger}(S^{\circ}_{\epsilon} \langle(w^{\epsilon}w^{N}/f)^{1/p^{\infty}}\rangle)\] \[\to\varinjlim_{\epsilon\to 0}W^{\dagger}(S^{\circ}_{\epsilon} \langle(w^{\epsilon}f)^{1/p^{\infty}},(w^{\epsilon}w^{N}/f)^{1/p^{\infty}} \rangle)\to 0\] has cohomology killed by \(W(\mathfrak{m}^{\flat})\). Define, in analogy to the 'generic' case in diagram (5.4.3), \[R^{\circ}(f)=W^{\dagger}(S^{\circ}_{\epsilon}\langle(w^{\epsilon}f)^{1/p^{ \infty}}\rangle)\] \[R^{\circ}(\eta)=W^{\dagger}(S^{\circ}_{\epsilon}\langle(w^{\epsilon}\eta)^{1/ p^{\infty}}\rangle)\] for a variable \(\eta\) and likewise for \(\xi\) \[R^{\circ}(w^{N}/f)=W^{\dagger}(S^{\circ}_{\epsilon}\langle(w^{\epsilon}w^{N} f^{-1})^{1/p^{\infty}}\rangle)\] \[R^{\circ}(\xi,w^{N}/\xi)=W^{\dagger}(S^{\circ}_{\epsilon}\langle(w^{\epsilon }\xi)^{1/p^{\infty}},(w^{\epsilon}w^{N}\xi^{-1})^{1/p^{\infty}}\rangle)\] and likewise for \(R^{\circ}(f,w^{N}/f)\). In analogy to diagram (5.4.3) we consider the following commutative diagram at "integral" level: (5.7.3) \(\lambda\) is the map \[(\sum_{i}p^{i}[h^{(i)}_{1}(w^{\epsilon}\xi)]^{1/p^{i}},\sum_{i}p^{i}[h^{(i)}_ {2}(w^{\epsilon}\eta)]^{1/p^{i}})\mapsto\sum_{i}p^{i}[h^{(i)}_{1}(w^{\epsilon }\xi)]-h^{(i)}_{2}(w^{\epsilon}w^{N}\eta^{-1})]^{1/p^{i}}\] \[I^{\circ}(\xi)=\{\sum_{i}p^{i}[w^{\epsilon}\xi-w^{\epsilon}f]^{r_{i}}[\theta_ {i}]\,|\,r_{i}\in\mathbb{Z}_{\geq 0}[1/p],\theta_{i}\in S^{\circ}_{ \epsilon}\langle w^{\epsilon}\xi\rangle^{1/p^{\infty}}\}\] where we only consider elements that still lie in \(W^{\dagger}(S^{\circ}_{\epsilon}\langle w^{\epsilon}\xi\rangle^{1/p^{\infty}})\), i.e. which are overconvergent. 
\[I^{\circ}(\eta)=\{\sum_{i}p^{i}[w^{\epsilon}w^{N}-w^{\epsilon}\eta f]^{r_{i}} [\theta_{i}]\,|\,r_{i}\in\mathbb{Z}_{\geq 0}[1/p],\theta_{i}\in S^{\circ}_{ \epsilon}\langle w^{\epsilon}\eta\rangle^{1/p^{\infty}}\}\] again with the condition that these elements are overconvergent, hence lie in \(W^{\dagger}(S^{\circ}_{\epsilon}(w^{\epsilon}\eta)^{1/p^{\infty}})\). \(I^{\circ}(\xi)_{R^{\circ}(\xi,w^{N}/\xi)}\) is defined analogously, where we assume that \(r_{i}\in S^{\circ}_{\epsilon}((w^{\epsilon}\xi)^{1/p^{\infty}},(w^{\epsilon} w^{N}\xi^{-1})^{1/p^{\infty}})\). The map \(\lambda^{\prime}\) in diagram (5.7.3) is then induced by \(\lambda\). The vertical maps are the obvious ones given by \(([h_{1}[w^{\epsilon}\xi],[h_{2}(w^{\epsilon}\eta)])\mapsto(([h_{1}[w^{\epsilon}f],[h_{2}(w^{\epsilon}w^{N}/f)])\text{). Since }\) \[[w^{\epsilon}\xi-w^{\epsilon}f]^{k}\cdot[w^{N}\xi^{-1}]^{k}=[w^{\epsilon}w^{N}-w ^{\epsilon}w^{N}f\xi^{-1}]^{k}\] for \(k\in\mathbb{Z}_{\geq 0}[1/p]\), we see that \(\lambda^{\prime}\) is bijective. As in the generic case (5.4.3) it is easy to see that the middle row in (5.7.3) is exact. Now consider, in particular, the map \(\mu:R^{\circ}(\eta)\to R^{\circ}(w^{N}/f)\) in the second column in (5.7.3). Then, modulo \(p\), it follows from the proof of [12, Lemma 6.4] that \[S^{\circ}_{\epsilon}\langle(w^{\epsilon}\eta)^{1/p^{\infty}}\rangle/(I^{ \circ}(\eta)\mod p)\to S^{\circ}_{\epsilon}\langle(w^{\epsilon}w^{N}f^{-1})^{ 1/p^{\infty}}\rangle\] is an almost isomorphism, the kernel is killed by \(\mathfrak{m}^{\flat}\). This implies that the map \[\mu:R^{\circ}(\eta)/I^{\circ}(\eta)\to R^{\circ}(w^{N}/f)\] is an almost isomorphism with respect to the ideal \([\mathfrak{m}^{\flat}]=\langle[x]\,|\,x\in\mathfrak{m}^{\flat}\rangle\), hence \([\mathfrak{m}^{\flat}]\ker\mu\subset I^{\circ}(\eta)\). Now consider the corresponding rings/ideals for the full ring of Witt vectors: \(\widehat{R}^{\circ}(f)=W(S^{\circ}_{\epsilon}\langle(w^{\epsilon}f)^{1/p^{ \infty}}\rangle)\), likewise \(\widehat{R}^{\circ}(\eta)\), \(\widehat{R}^{\circ}(w^{N}/f)\), \(\widehat{R}^{\circ}(\xi,w^{N}/\xi)\), \(\widehat{R}^{\circ}(f,w^{N}/f)\), obtained by taking \(p\)-adic completions, also \(\widehat{I}^{\circ}(\eta)\), the \(p\)-adic completion of \(I^{\circ}(\eta)\). We have the induced map \[\widehat{\mu}:\widehat{R}^{\circ}(\eta)\to\widehat{R}^{\circ}(w^{N}/f)\,.\] Since \(\widehat{I}^{\circ}(\eta)\) is \(p\)-adically complete and \(W(\mathfrak{m}^{\flat})\) is the \(p\)-adic completion of \([\mathfrak{m}^{\flat}]\), we conclude that \(W(\mathfrak{m}^{\flat})\ker\widehat{\mu}\subset\widehat{I}^{\circ}(\eta)\), so \(W(\mathfrak{m}^{\flat})\) kills \(\ker\widehat{\mu}/\widehat{I}^{\circ}(\eta)\). On the other hand, returning to overconvergent elements, we see that multiplication by \(\lambda\in W(\mathfrak{m}^{\flat})\) maps \(\ker\mu\) to overconvergent elements in \(R^{\circ}(\eta)\), hence to \(I^{\circ}(\eta)\), hence \(W(\mathfrak{m}^{\flat})\) kills \(\ker\mu/I^{\circ}(\eta)\). A diagram chase in (5.7.3) yields then that the cohomology of the bottom row in (5.7.3) is killed by \(W(\mathfrak{m}^{\flat})\). This finishes the proof of Lemma 5.7. 
**Corollary 5.8**.: _We have an almost quasi-isomorphism_ \[R\Gamma_{\operatorname{gp}}(\mathbb{Z}_{p}(1)^{d},W^{\dagger}(R^{\flat}_{ \infty}))\simeq R\Gamma_{\operatorname{pro\acute{e}t}}(U,W^{\dagger}( \mathcal{O}^{+\flat}_{X}))\,.\] _(This means that the cohomology of the cone is killed by \(W(\mathfrak{m}^{\flat})\))._ The decalage functor \(L\eta_{\mu}\) applied to \(R\Gamma_{\operatorname{pro\acute{e}t}}(U,W^{\dagger}(\mathcal{O}^{+\flat}_{X }))\) yields, by definition, \(A^{\dagger}\Omega^{\operatorname{pro\acute{e}t}}_{\mathcal{X}/\mathcal{O}}\). We have the following lemma: **Lemma 5.9**.: \(L\eta_{\mu}\) _transforms the almost quasi-isomorphism in Corollary 5.8 into a quasi-isomorphism._ Proof.: Since \(W(\mathfrak{m}^{\flat})\neq W(\mathfrak{m}^{\flat})^{2}\) we cannot apply the usual technology of almost mathematics. Instead we use the following proposition of Bhatt [10, Lemma 6.14]: **Proposition 5.10**.: _Let \(\mathfrak{m}\) be an ideal of a ring \(A\) and \(f\) a non-zero divisor in \(\mathfrak{m}\). Let \(\sigma:C\to D\) be a homomorphism of complexes of \(A\)-modules such that_ 1. _the cone of_ \(\sigma\) _is killed by_ \(\mathfrak{m}\)_._ 2. _all cohomology groups of_ \(C\otimes^{L}A/fA\) _contain no non-zero elements killed by_ \(\mathfrak{m}^{2}\)_._ _Then \(L\eta_{f}C\to L\eta_{f}D\) is a quasi-isomorphism._ To continue the proof of Lemma 5.9, property i) in Proposition 5.10 is satisfied since our map is an almost quasi-isomorphism. For ii) we use Morrow's arguments in his notes [10, p. 42]. We need to show that the cohomology of \(R\Gamma_{\operatorname{gp}}(\Gamma,W^{\dagger}((R^{\dagger}_{\infty})^{\flat}/ \mu)))\) has no non-zero elements killed by \(W(\mathfrak{m}^{\flat})^{2}\). Now, \(R\Gamma_{\operatorname{gp}}(\Gamma,W^{\dagger}((R^{\dagger}_{\infty})^{\flat} /\mu)))\) is quasi-isomorphic to the weak \(p\)-adic completion of \[\bigoplus_{k_{1},\dots,k_{d}\in\mathbb{Z}[1/p]}K_{A_{\inf}/\mu A_{\inf}}([ \epsilon]^{k_{1}}-1,\dots,[\epsilon]^{k_{d}}-1)\] which is a dagger completion of Koszul complexes. By [1, Lemma 7.10], the cohomology of each of these complexes is the weak \(p\)-adic completion of a finite direct sum of copies of \(A_{\inf}/\mu A_{\inf}([\epsilon]^{k}-1)\) and \(A_{\inf}/([\epsilon]^{k}-1)A_{\inf}\) for varying \(k\in\mathbb{Z}[1/p]\). It is shown in [10, p. 42] that these \(p\)-torsion-free modules contain no non-zero elements killed by \(W(\mathfrak{m}^{\flat})^{2}\). Now we are going to relate the group cohomology \(L\eta_{\mu}R\Gamma_{\operatorname{gp}}(\Gamma,W^{\dagger}((R_{\infty})^{ \flat})))\) to a dagger version of the \(q\)-de Rham complex. In analogy to the \(\Gamma\)-equivariant decomposition \[A_{\inf}(\widehat{R}_{\infty})=A(\widehat{R})^{\square}\oplus A_{\inf}( \widehat{R}_{\infty})^{\operatorname{nonint}}\] (see below [1, Lemma 9.6]) we have the dagger version \[A_{\inf}^{\dagger}(R_{\infty})=W^{\dagger}((R_{\infty})^{\flat})=A(R)^{ \square}\oplus A_{\inf}^{\dagger}(R_{\infty})^{\operatorname{nonint}}\,.\] Here \(A(R)^{\square}\) is a lifting of \(R\) over \(A_{\inf}\), it is etale over \(A_{\inf}^{\dagger}(\underline{U}^{\pm 1})\). Following the proof of [1, Lemma 9.6], we conclude that \(H_{\operatorname{cont}}^{i}(\Gamma,A_{\inf}^{\dagger}(R_{\infty})^{ \operatorname{nonint}})\) is killed by \(\mu\) and hence \(L\eta_{\mu}R\Gamma_{\operatorname{cont}}(\Gamma,A_{\inf}^{\dagger}(R_{\infty} )^{\operatorname{nonint}})\) vanishes. 
On the other hand, we have the \(q\)-derivation \[\frac{\partial_{q}}{\partial_{q}\log(U_{i})}=\frac{\gamma_{i}-1}{[\epsilon]-1}\] acting on \(A(R)^{\square}\) (see [1, §9.2]). It is well-known that we can compute group cohomology via Koszul complexes. The differentials in the complex \[R\Gamma_{\operatorname{cont}}(\Gamma,A(R)^{\square})=K_{A(R)^{\square}}(\gamma_{1}-1,\dots,\gamma_{d}-1)\] are divisible by \(\mu=[\epsilon]-1\) and we get by [1, Lemma 7.9] \[\eta_{\mu}R\Gamma_{\operatorname{cont}}(\Gamma,A(R)^{\square})=K_{A(R)^{\square}}\left(\frac{\gamma_{1}-1}{[\epsilon]-1},\dots,\frac{\gamma_{d}-1}{[\epsilon]-1}\right)\] and we define the \(q\)-dagger de Rham complex of \(A(R)^{\square}\) as \[q\Omega_{A(R)^{\square}/A_{\inf}}^{\dagger\bullet}:=K_{A(R)^{\square}}\left(\frac{\gamma_{1}-1}{[\epsilon]-1},\dots,\frac{\gamma_{d}-1}{[\epsilon]-1}\right)\,.\] Hence, by definition, we have a quasi-isomorphism \[L\eta_{\mu}R\Gamma_{\operatorname{cont}}(\Gamma,W^{\dagger}((R_{\infty})^{\flat}))=q\Omega_{A(R)^{\square}/A_{\inf}}^{\dagger\bullet}\,.\] We will show that the \(q\)-dagger de Rham complex computes the overconvergent prismatic cohomology after applying \(\otimes_{A_{\inf}}^{\dagger}B_{\operatorname{cris}}^{+}\). Let \(A=A_{\inf}\) and let \(R/\mathcal{O}\) (\(\mathcal{O}=A/d\)) be weakly complete, smooth. So \(d=1+[\epsilon^{1/p}]+\cdots+[\epsilon^{1/p}]^{p-1}\), which is a generator of \(\ker(A\to\mathcal{O})\). Then \[\varphi(d):=[p]_{q}=\frac{q^{p}-1}{q-1}\] for \(q=[\epsilon]\); here \([p]_{q}\) is the \(q\)-analogue of \(p\). Let \(\varphi^{*}R=R^{(1)}/(A/[p]_{q})\) be the base change of \(R\) along \(A\xrightarrow{\varphi}A\). Consider \(P_{0}\), a free \(A\)-algebra, weakly \((p,d)\)-completed, with an action of \(\varphi\) and a surjection \(P_{0}\twoheadrightarrow R\). Let \(P_{\star}:=P_{0}\rightrightarrows P_{1}\rightrightarrows\cdots\) be the \((p,d)\)-weakly completed Cech-nerve of \(A\to P_{0}\), and let \(J_{\star}=\ker(P_{\star}\to R)\) be the kernel of the augmentation and \(P_{\star,\delta}\) the associated \(\delta\)-\(P_{\star}\)-algebra. Then \(C_{i}:=P_{i,\delta}^{\dagger}\{\frac{J_{i}}{d}\}\) is a dagger prism (\(\dagger=(p,d)\)-weakly completed). We know that \(C_{\star}\), the \((p,d)\)-weakly completed Cech-nerve of \(A\to C_{0}\), computes \(\mathbb{A}_{R/A_{\inf}}^{\dagger}\) equipped with its \(\varphi\)-action. Then \[\varphi^{*}(C_{\star})=\varphi^{*}_{A}(C_{\star})=\varphi^{*}_{P_{\star,\delta}}(C_{\star})=\varphi^{*}_{P_{\star,\delta}}\left(P_{\star,\delta}^{\dagger}\left\{\frac{J_{\star}}{d}\right\}\right)=P_{\star,\delta}^{\dagger}\left\{\frac{\varphi(J_{\star})}{[p]_{q}}\right\}\] where we have used that \(A\to P_{\star,\delta}\) is a cosimplicial homotopy equivalence. Then we have a (\((p,[p]_{q})\)-weakly completed) \([p]_{q}\)-PD version of Berthelot's Poincaré lemma [1, Lemma V 2.1.2]: **Proposition 5.11**.: \(\varphi^{*}(C_{\star})=P_{n,\delta}^{\dagger}\left\{\frac{\varphi(J_{n})}{[p]_{q}}\right\}\) _is a \((p,[p]_{q})\)-weakly completed \([p]_{q}\)-PD-polynomial algebra over_ \(P_{0,\delta}^{\dagger}\left\{\frac{\varphi(J_{0})}{[p]_{q}}\right\}\)_, so for_ \(N:=P_{0,\delta}^{\dagger}\left\{\frac{\varphi(J_{0})}{[p]_{q}}\right\}\) _we have_ \[\varphi^{*}(C_{\star})\cong N^{\dagger}\langle T_{1},\ldots,T_{s}\rangle\left\langle\frac{T_{1}^{p}}{[p]_{q}},\ldots,\frac{T_{s}^{p}}{[p]_{q}}\right\rangle\,.\] **Lemma 5.12**.: _Let \(S\) be a \((p,[p]_q)\)-weakly complete \(\mathbb{Z}_{p}[\![q-1]\!]\)-algebra.
Then the \(q\)-dagger de Rham complex_ \[q{\Omega_{S\langle T_{1},\ldots,T_{s}\rangle}^{\dagger}\Big{\langle}\frac{T_{ 1}^{p}}{[p]_{q}},\ldots,\frac{T_{s}^{p}}{[p]_{q}}\Big{\rangle}}\] _is acyclic, where the derivative is given by the \(q\)-derivation \(\nabla_{q,n}:T^{n}\mapsto[n]_{q}T^{n-1}\)._ Proof.: Before taking the \((p,[p]_{q})\) weak completion, the proof is analogous to the proof that the usual de Rham complex of a PD-polynomial algebra is acyclic. Taking \((p,[p]_{q})\) weak completions preserves acyclicity. **Corollary 5.13**.: \(q{\Omega_{\varphi^{*}(C_{n})/A}^{\dagger\bullet}}\) _is quasi-isomorphic to \(q{\Omega_{\varphi^{*}(C_{0})/A}^{\dagger\bullet}}\)._ We observe that \(q{\Omega_{\varphi^{*}(C_{0})/A}^{\dagger\bullet}}\) is the \(q\)-dagger de Rham complex over the \((p,[p]_{q})\)-weakly completed \(q\)-PD-envelope \(D_{J_{0},q}^{\dagger}(P_{0,\delta})\), which is the Koszul complex \(K_{D_{J_{0},q}^{\dagger}(P_{0,\delta})}(\nabla_{q,1},\ldots,\nabla_{q,s})\). But since we have a smooth lifting \(A(R)^{\Box}\) of \(R\) over \(A\), we have also considered the \(q\)-de Rham complex \(q{\Omega_{A(R)^{\Box}/A_{\inf}}^{\dagger\bullet}}=K_{A(R)^{\Box}}(\nabla_{q,1},\ldots,\nabla_{q,s})\). We explain the relationship between these complexes. Let again \(C_{\star}\) be the Cech-Alexander complex that computes \(\mathbb{A}_{R/A}^{\dagger}\) and let \(\varphi^{*}C_{\star}=P_{\star,\delta}^{\dagger}\left\{\frac{\varphi(J_{\star} )}{[p]_{q}}\right\}=\mathbb{A}_{R^{(1)}/A}^{\dagger}\). By the argument in [1, Theorem 2.12], using Corollary 5.13, the total complex of the simplicial complex \(\varphi^{*}C_{\star}\) is quasi-isomorphic to the \(q\)-dagger de Rham complex \[q\Omega^{\dagger\bullet}_{\varphi^{*}(C_{0})/A}=K_{D^{\dagger}_{J_{0},q}(P_{0})} (\nabla_{q,1},\dots,\nabla_{q,s})\,.\] Now \(\varphi^{*}(C_{\star})=A_{\varphi}\otimes_{A}C_{\star}\cong C_{\star}\) with \[C_{i} =A^{\dagger}[X_{1},\dots,X_{d},T_{1},\dots,T_{s}]/\langle g_{1}-T_ {1}d,\dots,g_{s}-T_{s}d\rangle\] \[\cong\widetilde{R}^{\dagger}\langle T_{1},\dots,T_{s}\rangle\] where \(\widetilde{R}\) is a dagger lift of \(R\) over \(A\), previously denoted by \(A(R)^{\square}\). The isomorphism comes from the uniqueness of weak formalisations [13, Theorem 3.3]. By [1, Corollary 12.4] the \(q\)-dagger de Rham complex \(q\Omega^{\dagger\bullet}_{\widetilde{R}^{\dagger}\langle T_{1},\dots,T_{s} \rangle/A}\) becomes the usual de Rham complex after \(\otimes^{\dagger}_{A_{\mathrm{inf}}}B^{+}_{\mathrm{cris}}\). Hence we get \[q\Omega^{\dagger\bullet}_{\widetilde{R}^{\dagger}\langle T_{1}, \dots,T_{s}\rangle/A}\otimes^{\dagger}_{A_{\mathrm{inf}}}B^{+}_{\mathrm{cris}} =\Omega^{\dagger\bullet}_{\widetilde{R}^{\dagger}\langle T_{1}, \dots,T_{s}\rangle/A}\otimes^{\dagger}_{A_{\mathrm{inf}}}B^{+}_{\mathrm{cris}}\] \[\cong\Omega^{\dagger\bullet}_{\widetilde{R}/A}\otimes^{\dagger} _{A_{\mathrm{inf}}}B^{+}_{\mathrm{cris}}\] \[\cong q\Omega^{\dagger\bullet}_{\widetilde{R}/A}\otimes^{\dagger }_{A_{\mathrm{inf}}}B^{+}_{\mathrm{cris}}\] where we have used again [1, Corollary 12.4]. The symbol \(\otimes^{\dagger}_{A_{\mathrm{inf}}}\) denotes the weakly completed tensor product with respect to the \(p\)-adic topology. 
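Before completing the comparison, we record the elementary identity behind the specialisation of the \(q\)-derivation modulo \(q-1\), which is used below; this is a standard computation, recalled here only for the reader's convenience. For an integer \(n\geq 1\), \[\nabla_{q}(T^{n})=[n]_{q}\,T^{n-1},\qquad[n]_{q}=\frac{q^{n}-1}{q-1}=1+q+\cdots+q^{n-1}\equiv n\ \mathrm{mod}\ (q-1)\,,\] so that after reduction modulo \(q-1\) (equivalently modulo \(\mu=[\epsilon]-1\)) the \(q\)-derivation becomes the usual derivation \(T^{n}\mapsto nT^{n-1}\).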
Applying again [1, Theorem 2.12] in this situation we obtain \[\mathrm{Tot}(C_{\star})\otimes^{\dagger}_{A_{\mathrm{inf}}}B^{+}_{\mathrm{cris }}\cong q\Omega^{\dagger\bullet}_{\widetilde{R}/A}\otimes^{\dagger}_{A_{ \mathrm{inf}}}B^{+}_{\mathrm{cris}}\,.\] The final argument for the proof of Theorem 5.3 is then very similar to the proof of the comparison with Monsky-Washnitzer cohomology in the case \(A=W(k)\). Namely, let \(M^{r,s}=q\Omega^{\dagger r}_{\varphi^{*}(C_{s})/A}\). For \(r=0\) we get the cosimplicial complex that computes \(\varphi^{*}\mathbb{A}^{\dagger}_{R/A}\). For fixed \(s\), the \(q\)-de Rham complex \(q\Omega^{\dagger\bullet}_{\varphi^{*}(C_{s})/A}\) is quasi-isomorphic to \(q\Omega^{\dagger\bullet}_{A(R)^{\square}/A}\) after \(\otimes^{\dagger}_{A_{\mathrm{inf}}}B^{+}_{\mathrm{cris}}\). Moreover, for \(r>0\) the cosimplicial complex \(q\Omega^{\dagger r}_{\varphi^{*}(C_{s})/A}\) is homotopic to zero (analogue of [1, Lemma 2.15, Lemma 2.17]) - compare with the proof of Lemma 2.1. The vertical totalisation of the simplicial bicomplex computes \(A^{\dagger}\Omega_{R/\mathcal{O}}\otimes^{\dagger}_{A_{\mathrm{inf}}}B^{+}_{ \mathrm{cris}}\), and the horizontal totalisation computes \(\varphi^{*}\mathbb{A}^{\dagger}_{R/A}\otimes^{\dagger}_{A_{\mathrm{inf}}}B^{+}_ {\mathrm{cris}}\). Since the \(q\)-derivation specialises to the usual derivation modulo \(q-1\), the \(q\)-dagger deger de Rham complex satisfies \[q\Omega^{\dagger\bullet}_{\varphi^{*}(C_{0})/A}\otimes^{L}A/[p]_{q}A\cong \Omega^{\dagger\bullet}_{R/\mathcal{O}}\,.\] Hence we have **Theorem 5.14**.: \[\varphi^{*}\mathbb{A}^{\dagger}_{R/A}\otimes^{L}A/[p]_{q}A\cong\Omega^{ \dagger\bullet}_{R/\mathcal{O}}\,.\] _This is the dagger de Rham comparison of overconvergent prismatic cohomology. It is the analogue of [1, Theorem 1.8 (3)]._ ## 6. Comparison with an overconvergent de Rham-Witt complex In the final section we compare the complex \(A^{\dagger}\Omega\) with an overconvergent de Rham-Witt complex, defined for a smooth weak formal \(\mathcal{O}\)-scheme (\(\mathcal{O}=\mathcal{O}_{\mathbb{C}_{p}}\)). As before, let \(\operatorname{Spf}R\) be a small affine weak formal scheme over \(\operatorname{Spf}\mathcal{O}\), \(A_{\inf}(\mathcal{O})=W(\mathcal{O}^{\flat})\), equipped with the \(p\)-\(d\)-adic topology. Consider a dagger lifting \(A(R)^{\square}\) of \(R\) over \(A_{\inf}(\mathcal{O})\) and let (for \(m=\dim\operatorname{Spf}R\)) \[A(R)^{\square}\to(A(R)^{\square})^{m}\to(A(R)^{\square})^{\binom{m}{2}}\to\dots\] be the Koszul complex, or \(q\)-dagger de Rham complex \(q\Omega^{\dagger\bullet}_{A(R)^{\square}/A_{\inf}}\), considered in Section 5. It is a complex of \(A_{\inf}\)-modules, where each entry is weakly complete with respect to the \((p,d)\)-adic topology. **Lemma 6.1**.: _Under the map \(\theta_{\infty}:A_{\inf}(\mathcal{O})\to W(\mathcal{O})=\varprojlim W_{r}( \mathcal{O})\), the \((p,d)\)-adic topology maps to the \(p\)-\(V\)-adic topology. More precisely, for \(\xi\) generating \(\ker(A_{\inf}(\mathcal{O})\to\mathcal{O})\) such that for \(\theta_{r}:W(\mathcal{O}^{\flat})\to W_{r}(\mathcal{O})\), \(r>1\) we have \(\theta_{r}(\xi)=V(1)\) and \(\theta_{r}\) maps \(\xi\varphi^{-1}(\xi)\cdots\varphi^{-s}(\xi)\) to \(V^{s+1}(1)\) for \(s<r-1\)._ Proof.: See [1, Lemma 3.4, Lemma 3.12]. By [1, Example 3.16] we can take \(\xi=d\). We have \(\varphi^{-i}(d)\in(p,d)\) for \(i\in\mathbb{Z}\), because \(\varphi(d)=[p]_{q}\equiv p\mod q-1\equiv p\mod d\) since \(d|q-1\) (where \(q=[\epsilon]\)). 
Hence \(\lambda_{1}d+p=\varphi(d)\in(p,d)\), so \(\lambda_{1}\varphi^{-1}(d)+p=d\in(p,d)\) and thus \(\varphi^{-1}(d)\in(p,d)\). By induction \(\varphi^{-i}(d)\in(p,d)\) for all \(i\). Under the base change \(\theta_{\infty}:A_{\inf}(\mathcal{O})\to W(\mathcal{O})\) with kernel \((\mu)=([\epsilon]-1)\), the Koszul complex (\(q\)-de Rham complex) \[A(R)^{\square}\to(A(R)^{\square})^{m}\to(A(R)^{\square})^{\binom{m}{2}}\to\dots\] becomes the usual de Rham complex over \(W(\mathcal{O})\), so \[q\Omega^{\dagger\bullet}_{A(R)^{\square}/A_{\inf}}\otimes_{A_{\inf}}W(\mathcal{O})\cong\Omega^{\dagger\bullet}_{A(R)^{\square}\otimes W(\mathcal{O})/W(\mathcal{O})}\] which is a complex of \(p\)-\(V\)-weakly complete \(W(\mathcal{O})\)-modules. Let \(\tilde{R}:=A(R)^{\square}\otimes W(\mathcal{O})\), which is a lifting of \(R\) over \(W(\mathcal{O})\). Then \(\Omega^{\dagger\bullet}_{\tilde{R}/W(\mathcal{O})}:=\Omega^{\dagger\bullet}_{A(R)^{\square}\otimes W(\mathcal{O})/W(\mathcal{O})}\) is a dagger de Rham complex for the lifting \(\tilde{R}\) of \(R\) over \(W(\mathcal{O})\). In order to globalise the comparison with \(A^{\dagger}\Omega\) we need to define an overconvergent de Rham-Witt complex. Let \[W\Omega^{\bullet}_{\widehat{\mathcal{O}}[T_{1},\dots,T_{m}]/\mathcal{O}}:=\varprojlim_{s,n}W_{s}\Omega^{\bullet}_{\mathcal{O}/p^{n}[T_{1},\dots,T_{m}]/\mathcal{O}}\] be the \(p\)-adically completed de Rham-Witt complex of [1]. Let \(z\in\mathcal{O}^{\dagger}\langle T_{1},\dots,T_{m}\rangle\), \(z=\sum_{\kappa}a_{\kappa}\underline{T}^{\kappa}\), which converges weakly with respect to the \(p\)-adic topology, and \(\kappa\) runs through multi-indices in \(\mathbb{N}_{0}^{m}\). Define for \(\epsilon>0\): \(\gamma_{\epsilon}(z)=\inf_{\kappa}\{v_{p}(a_{\kappa})-\epsilon|\kappa|\}\). Then \(\gamma_{\epsilon}(z)>-\infty\) for some \(\epsilon>0\). Define \(\underline{Y}=(Y_{0},Y_{1},\dots)\in W(\widehat{\mathcal{O}}\langle T_{1},\dots,T_{m}\rangle)\) to be in \(W^{\dagger}(\mathcal{O}^{\dagger}\langle T_{1},\dots,T_{m}\rangle)\) if there exists \(\epsilon>0\) such that \(\gamma_{\epsilon}(Y_{i})>-\infty\) for all \(i\), and moreover, if \(Y_{i}=\sum_{\kappa(i)}a_{\kappa(i)}\underline{T}^{\kappa(i)}\), we have \[\inf_{i,\kappa(i)}\{i+(v_{p}(a_{\kappa(i)})-\epsilon|\kappa(i)|)p^{-i}\}>-\infty\,,\] and likewise for \(W^{\dagger}\Omega^{\bullet}_{\mathcal{O}^{\dagger}\langle T_{1},\ldots,T_{m}\rangle/\mathcal{O}}\), using the unique description of an element in \(W\Omega^{\bullet}_{\widehat{\mathcal{O}}\langle T_{1},\ldots,T_{m}\rangle/\mathcal{O}}\) as a \(p\)-\(V\)-adically convergent sum of basic Witt differentials in [10]. We have the following **Lemma 6.2**.: _Under the base change \(\mathcal{O}\to k\),_ \[W^{\dagger}\Omega^{\bullet}_{\mathcal{O}^{\dagger}\langle T_{1},\ldots,T_{m}\rangle/\mathcal{O}}\otimes^{L}_{W(\mathcal{O})}W(k)\simeq W^{\dagger}\Omega^{\bullet}_{k[T_{1},\ldots,T_{m}]/k}\] _which is the overconvergent de Rham-Witt complex of the closed fibre defined in [10]._ Proof.: This is clear because the condition \[\inf_{i,\kappa(i)}\{i+(v_{p}(a_{\kappa(i)})-\epsilon|\kappa(i)|)p^{-i}\}>-\infty\] becomes \[\inf_{i}\{i-p^{-i}\epsilon\deg\overline{Y}_{i}\}=\gamma_{\epsilon}(\overline{Y}_{0},\overline{Y}_{1},\ldots)>-\infty\] which is the growth condition on \(W(k[T_{1},\ldots,T_{m}])\), and likewise for \(W\Omega\).
Then we have **Lemma 6.3**.: _There is a quasi-isomorphism of \(p\)-\(V\)-adically weakly complete complexes_ \[\Omega^{\bullet}_{W(\mathcal{O})^{\dagger}\langle T_{1},\ldots,T_{m}\rangle/ \mathcal{O}}\simeq W^{\dagger}\Omega^{\bullet}_{\mathcal{O}^{\dagger}\langle T _{1},\ldots,T_{m}\rangle/\mathcal{O}}\,.\] Proof.: Decompose \(W^{\dagger}\Omega^{\bullet}_{\mathcal{O}^{\dagger}\langle T_{1},\ldots,T_{m} \rangle/\mathcal{O}}\) into a direct sum of \[W^{\dagger}\Omega^{\bullet,(\text{int})}_{\mathcal{O}^{\dagger}\langle T_{1}, \ldots,T_{m}\rangle/\mathcal{O}}:=\Omega^{\bullet}_{W(\mathcal{O})^{\dagger} \langle T_{1},\ldots,T_{m}\rangle/\mathcal{O}}\] and a fractional part \(W^{\dagger}\Omega^{\bullet,(\text{frac})}_{\mathcal{O}^{\dagger}\langle T_{1},\ldots,T_{m}\rangle/\mathcal{O}}\). The direct sum decomposition holds at finite level [10] for \(W_{s}\Omega^{\bullet}_{\mathcal{O}/p^{n}[T_{1},\ldots,T_{m}]/\mathcal{O}/p^{n}}\). Then take \(p\)-\(V\)-adic completion and identify the overconvergent elements in the integral and fractional part. Details are left to the reader. It is clear that \(W^{\dagger}\Omega^{\bullet,(\text{frac})}_{\mathcal{O}^{\dagger}\langle T_{1},\ldots,T_{m}\rangle/\mathcal{O}}\) is acyclic. Now let \(\tilde{R}\) be etale over \(W(\mathcal{O})^{\dagger}\langle T_{1},\ldots,T_{m}\rangle\), lifting \(R/\mathcal{O}^{\dagger}\langle T_{1},\ldots,T_{m}\rangle\). We claim that the above Lemma 6.3 extends to \(\tilde{R}\), i.e. **Lemma 6.4**.: _There is a quasi-isomorphism_ \[\Omega^{\dagger\bullet}_{\tilde{R}/W(\mathcal{O})}\simeq W^{\dagger}\Omega^{ \bullet}_{R/\mathcal{O}}\,.\] Proof.: We have \(\mathcal{O}^{\dagger}\langle T_{1},\ldots,T_{m}\rangle=\varinjlim_{\epsilon} \widehat{\mathcal{O}}\langle p^{\epsilon}T_{1},\ldots,p^{\epsilon}T_{m}\rangle\) where \(\widehat{\mathcal{O}}\langle p^{\epsilon}T_{1},\ldots,p^{\epsilon}T_{m}\rangle\) denotes the \(p\)-adically convergent power series in \(T_{1},\ldots,T_{m}\) that converge on a ball of radius \(p^{\epsilon}\). Then \(R=\varinjlim_{\epsilon}R_{\epsilon}\), with \(R_{\epsilon}\) etale over \(\widehat{\mathcal{O}}\langle p^{\epsilon}T_{1},\ldots,p^{\epsilon}T_{m}\rangle\), and \(\tilde{R}=\varinjlim_{\epsilon}\tilde{R}_{\epsilon}\). By etale base change \[W_{s}\Omega^{\bullet}_{\mathcal{O}/p^{n}[p^{\epsilon}T_{1},\ldots,p^{\epsilon} T_{m}]}\otimes\tilde{R}_{\epsilon,s}/p^{n}=W_{s}\Omega^{\bullet}_{R_{ \epsilon}/p^{n}}\] where \(\tilde{R}_{s,\epsilon}:=\tilde{R}_{\epsilon}\otimes_{W(\mathcal{O})}W_{s}( \mathcal{O})\), and we have again a direct sum decomposition into an integral part (isomorphic to \(\Omega^{\bullet}_{\tilde{R}_{\epsilon,s}/W_{s}(\mathcal{O})}\)) and an acyclic fractional part. By taking limits over \(s,n\) we get the analogous decomposition for \(W\Omega^{\bullet}_{R_{\epsilon}/\mathcal{O}}\) and likewise for \(W^{\dagger}\Omega^{\bullet}_{R_{\epsilon}/\mathcal{O}}\), identifying \(W^{\dagger}\Omega^{\bullet,\text{int}}_{R_{\epsilon}/\mathcal{O}}\) with \(\Omega^{\prime\bullet}_{\tilde{R}_{\epsilon}/W(\mathcal{O})}\), and an acyclic fractional part. Taking direct limits over \(\epsilon\) identifies \(W^{\dagger}\Omega^{\bullet}_{R/\mathcal{O}}\) as a subcomplex in \(W\Omega^{\bullet}_{\tilde{R}/\mathcal{O}}\) with integral part isomorphic to \(\Omega^{\dagger\bullet}_{\tilde{R}/W(\mathcal{O})}\) and acyclic fractional part. 
For a weak formal smooth \(\mathcal{O}\)-scheme \(\mathcal{X}\) we can define \(W^{\dagger}\Omega^{\bullet}_{\mathcal{X}/\mathcal{O}}\) by gluing local data arising in a covering of \(\mathcal{X}\) by affine weak formal schemes \(\operatorname{Spf}^{\dagger}S\) such that \(S\) is etale over a weakly completed polynomial algebra, and using that \(W\Omega^{\bullet}_{\mathcal{X}/\mathcal{O}}\) is a complex of sheaves. Then we have the following final comparison result: **Theorem 6.5**.: _Let \(\mathcal{X}\) be a weak formal smooth \(\mathcal{O}\)-scheme. Then_ \[A^{\dagger}\Omega_{\mathcal{X}/\mathcal{O}}\otimes_{A_{\inf}}^{L}W(\mathcal{O})\simeq W^{\dagger}\Omega^{\bullet}_{\mathcal{X}/\mathcal{O}}\,.\]
2305.06934
Humans are Still Better than ChatGPT: Case of the IEEEXtreme Competition
Since the release of ChatGPT, numerous studies have highlighted the remarkable performance of ChatGPT, which often rivals or even surpasses human capabilities in various tasks and domains. However, this paper presents a contrasting perspective by demonstrating an instance where human performance excels in typical tasks suited for ChatGPT, specifically in the domain of computer programming. We utilize the IEEExtreme Challenge competition as a benchmark, a prestigious, annual international programming contest encompassing a wide range of problems with different complexities. To conduct a thorough evaluation, we selected and executed a diverse set of 102 challenges, drawn from five distinct IEEExtreme editions, using three major programming languages: Python, Java, and C++. Our empirical analysis provides evidence that contrary to popular belief, human programmers maintain a competitive edge over ChatGPT in certain aspects of problem-solving within the programming context. In fact, we found that the average score obtained by ChatGPT on the set of IEEExtreme programming problems is 3.9 to 5.8 times lower than the average human score, depending on the programming language. This paper elaborates on these findings, offering critical insights into the limitations and potential areas of improvement for AI-based language models like ChatGPT.
Anis Koubaa, Basit Qureshi, Adel Ammar, Zahid Khan, Wadii Boulila, Lahouari Ghouti
2023-05-10T08:16:46Z
http://arxiv.org/abs/2305.06934v1
# Humans are Still Better than ChatGPT: Case of the IEEEXtreme Competition ###### Abstract Since the release of ChatGPT, numerous studies have highlighted the remarkable performance of ChatGPT, which often rivals or even surpasses human capabilities in various tasks and domains. However, this paper presents a contrasting perspective by demonstrating an instance where human performance excels in typical tasks suited for ChatGPT, specifically in the domain of computer programming. We utilize the IEEEXtreme Challenge competition as a benchmark, a prestigious, annual international programming contest encompassing a wide range of problems with different complexities. To conduct a thorough evaluation, we selected and executed a diverse set of 102 challenges, drawn from five distinct IEEEXtreme editions, using three major programming languages: Python, Java, and C++. Our empirical analysis provides evidence that, contrary to popular belief, human programmers maintain a competitive edge over ChatGPT in certain aspects of problem-solving within the programming context. In fact, we found that the average score obtained by ChatGPT on the set of IEEEXtreme programming problems is 3.9 to 5.8 times lower than the average human score, depending on the programming language. This paper elaborates on these findings, offering critical insights into the limitations and potential areas of improvement for AI-based language models like ChatGPT. ChatGPT, GPT-4, GPT-3.5, GPT Performance, GPT Limitations, OpenAI, NLP, Computer Programming. ## I Introduction ### _Background and motivation_ Large Language Models (LLMs) [1] have emerged as a groundbreaking artificial intelligence technology, especially since the release of ChatGPT in late November 2022. LLMs can mimic human-level capabilities in various complex natural language processing and understanding tasks across multiple domains, such as virtual assistants, chatbots, language translation, sentiment analysis, and more. ChatGPT has been trained on an extensive corpus of data spanning various disciplines, enabling it to acquire a broad spectrum of knowledge. Its training data comprises diverse sources from multiple domains, including but not limited to science, literature, law, programming, finance, and many more. This diverse training data has given ChatGPT a global perspective, making it capable of understanding and generating responses across a wide range of subjects. The vast knowledge base of ChatGPT allows it to provide insights and solutions to complex problems that span different domains, making it an effective tool for various applications in natural language processing and understanding. Given ChatGPT's unprecedented capabilities, compared to other LLMs, in competing with humans across various applications, there has been a significant increase in the number of studies investigating its performance in specialized and complex domains, such as healthcare [2] and finance [3]. However, despite the growing interest in evaluating ChatGPT's abilities in these areas, there has been a lack of research specifically focusing on its performance in the problem-solving and programming assessment domains, which is the main focus of this paper. This research gap has motivated us to investigate and evaluate ChatGPT's abilities in these areas. ### _Objective_ This paper aims to investigate the problem-solving capabilities of ChatGPT by evaluating its performance on programming benchmarks.
Our objective is to assess how ChatGPT compares to human programmers and to extract valuable insights into its strengths and weaknesses in this domain-specific context. To accomplish our research objective, we identified the IEEEExtreme Programming Challenge as the most reputable and prestigious competition that could serve as an appropriate benchmark for comparing the problem-solving abilities of ChatGPT and human programmers. The IEEEExtreme Programming Challenge is an annual international programming competition organized by the Institute of Electrical and Electronics Engineers (IEEE). This 24-hour competition attracts programming professionals from across the globe to compete in solving programming problems with varying degrees of complexity, which demand high-level problem-solving and programming skills. In summary, the primary objective of this study is to evaluate the performance of ChatGPT in the programming and problem-solving-specific context. To this end, we will employ the IEEExtreme Challenge competition as a benchmark, utilizing problems of varying complexities. Moreover, we aim to analyze the limitations of ChatGPT in solving specific problems and programming tasks and identify areas for improvement and optimization. By conducting this analysis, we aim to provide the community with insights into the effectiveness of ChatGPT in programming and problem-solving domains and provide recommendations for future developments in this area. ### _Methodology_ For this study, we selected five IEEExtreme programming competitions, each consisting of an average of 20 questions. To guide ChatGPT in designing solutions while meeting non-functional requirements such as memory usage and execution time, we designed well-crafted prompts. For each problem, we presented the prompt to ChatGPT and evaluated its corresponding solution using Hackerrank, which was used to generate scores. We evaluated solutions in three top programming languages: Python 3, C++ 11, and Java 7. In case of errors, we made up to seven attempts to guide ChatGPT toward the correct solution by providing the corresponding Hackerrank error message. Furthermore, to ensure the consistency of the results, we conducted this process three times, using different ChatGPT chat windows for each programming problem of the five selected IEEExtreme Challenges. The final results were analyzed, and we identified ChatGPT's limitations in solving specific problems and programming tasks. Additionally, we provided recommendations for areas of improvement and optimization, which could enhance ChatGPT's effectiveness in programming and problem-solving domains. #### Research Questions In this study, we aim to respond to four research questions: 1. How does ChatGPT compare to human programmers in problem-solving and programming tasks, in the context of the IEEExtreme Challenge competition? 2. In which specific programming tasks or problem types do humans outperform ChatGPT, and what are the underlying reasons for this disparity? 3. Is ChatGPT performance biased towards particular programming languages among the three selected languages, namely, Python, C++ 11, and Java? 4. What are the fundamental limitations of ChatGPT in programming and problem-solving, and how can these findings guide future research and development of ChatGPT and other domain-specific large language models? ### _Overview of the paper structure_ The paper is organized as follows. 
Section II provides a literature review on ChatGPT and its applications, human performance in programming tasks, and previous comparisons between AI and human performance. Section III describes the methodology, including selecting IEEExtreme challenges, evaluation criteria and metrics, programming languages used, and the data collection and analysis approach. Section IV presents the results of the study that compares ChatGPT and human performance in programming tasks of IEEExtreme challenges and identifies the gap with human-level performance. The interpretation of the results and limitations of ChatGPT in programming and problem-solving tasks, as well as the reasons for disparities, are discussed in the same section. Finally, Section V concludes the paper with a summary of findings, potential areas for improvement in ChatGPT, and suggestions for future research directions. ## II Related Works ### _ChatGPT and its applications_ ChatGPT has made significant progress and it has been used in various applications. The detailed comparison of ChatGPT performance in various domains is shown in Table I. In our previous study, ChatGPT's applications were classified into five main categories [4]: * **NLP:** In this type of application, ChatGPT generates human-like responses in natural language. Applications of ChatGPT in NLP include building virtual assistants, chatbots, language translation systems, and text generation tasks such as summarization and question answering [5, 6, 7]. * **Healthcare:** ChatGPT has been used in various healthcare fields. It has been applied in healthcare decision support to provide relevant information and recommendations [8]. In addition, many recent research works have investigated the case of using ChatGPT in patient education, where ChatGPT provides patients with educational information about their health conditions, treatments, and medications [9]. Moreover, ChatGPT has been included in applications related to telemedicine to provide more efficient and accurate virtual diagnosis and treatment [10]. * **Ethics:** Many recent research works addressed the challenge of using ChatGPT for the benefit of society and how to maintain public safety [11]. Many authors explored using ChatGPT to generate student works and scientific publications [12]. Many other researchers focused on the ethical concerns, data biases, and safety issues related to ChatGPT [13]. * **Education:** ChatGPT has played an essential role in several applications in education [14, 15, 16]. It helped to improve the learning experience for students. It can provide personalized educational content. In addition, it can generate educational materials for students and tutors. ChatGPT is considered a promising tool for education, as it can provide insightful direction and feedback. * **Industry:** recently, various applications across many industries have been focused on using ChatGPT to improve efficiency, streamline processes, and enhance customer experiences [17, 18, 19]. Applications include the manufacturing industry, where it can monitor and control production processes. In addition, ChatGPT is used in the financial sector, where it can offer support to customers and company owners. Moreover, it can provide customer support to handle routine inquiries. ### _Previous comparisons between AI and human performance_ In a recent study [5], Guo et al. 
compared the responses of ChatGPT and human experts to around 40K questions in various domains, such as finance, psychology, medical, legal, and open-domain, in both English and Chinese languages. They analyzed ChatGPT's response characteristics, differences and gaps from human experts, and future directions for LLMs. The researchers discovered that ChatGPT's responses are generally more helpful than human experts' in over half of the questions, especially in finance and psychology, due to its ability to offer specific suggestions. However, ChatGPT performs poorly in the medical domain. The authors also found that ChatGPT writes in an organized manner, with clear logic, and tends to provide detailed answers with less bias and harmful information. However, it may fabricate facts. Notably, the study did not include programming tasks but only theoretical questions about computer science-related concepts taken from Wikipedia. On another hand, Qin et al. [20] examined the zero-shot learning ability of ChatGPT. The evaluation was conducted on 20 commonly used natural language processing (NLP) datasets covering seven task categories, including natural language inference, question answering, dialogue, summarization, named entity recognition, and sentiment analysis. However, the study did not include any programming tasks. The authors performed extensive empirical studies to analyze the strengths and limitations of the current version of ChatGPT. They discovered that ChatGPT performed well on tasks that require reasoning abilities, such as arithmetic reasoning, but it struggled with specific tasks like sequence tagging. Additionally, ChatGPT was outperformed by previous models that had been fine-tuned for a specific task. The findings suggest that ChatGPT is still far from reaching perfection as a generalist model. Kashefi and Mukerji [21] investigated the potential of ChatGPT in one specific aspect of programming, which is to produce numerical algorithms for solving mathematical problems. They explored generating code in different programming languages, debugging user-written code, completing unfinished code, rewriting code in different programming languages, and parallelizing serial code. Although the study outcomes demonstrated that ChatGPT is capable of programming numerical algorithms, certain limitations and challenges were encountered. These included issues such as generating singular matrices, producing incompatible arrays, and irregular interruption when generating long codes required for scientific simulations. Another challenge was the inclusion of unknown libraries. Despite these limitations, the study suggests that ChatGPT has the potential for further development and improvement in programming numerical algorithms. Liu et al. [22] evaluated the performance of ChatGPT and GPT-4 on various logical reasoning tasks using multiple datasets, including both well-known benchmarks and newly-released ones. The experiments showed that ChatGPT performs better than the RoBERTa [23] fine-tuning method on most logical reasoning benchmarks. However, both ChatGPT and GPT-4 struggle with newly-released and out-of-distribution datasets. GPT-4 showed higher performance than ChatGPT on most logical reasoning datasets. Nevertheless, despite advancements in models like ChatGPT and GPT-4, the task of logical reasoning still poses significant challenges for these models, particularly when dealing with out-of-distribution and natural language inference datasets. Tian et al. 
[24] presented an empirical study evaluating the potential of the ChatGPT generative large-scale language model as an assistant bot for programmers. The study assesses ChatGPT's performance on three code-related tasks: code generation, program repair, and code summarization. ChatGPT is found to perform well in the code generation task but struggles to generalize to new and unseen problems. The study also highlights the negative impact of long prompts on ChatGPT's inference capabilities. In the program repair task, ChatGPT achieves competitive results compared to Refactory [25], a state-of-the-art semantic-based assignment repair tool. However, prompts that are not related to bug information are found to make ChatGPT perform even worse due to its limited attention span. A limitation of that study is the absence of a comparison between ChatGPT's performance and that of human programmers, which makes it difficult to establish its proficiency relative to human experts. Biswas [26] explored existing language models and tools for computer programming. ChatGPT is introduced as a powerful and versatile tool that can perform a variety of programming-related tasks such as code completion, correction, optimization, and refactoring. The paper highlights the ability of ChatGPT to provide explanations and guidance to help users understand complex concepts and resolve technical issues. The use of ChatGPT is noted as a potential means to improve overall satisfaction with support services and build a reputation for expertise and reliability. In summary, the paper suggests that ChatGPT is a valuable resource for technical support and for improving efficiency and accuracy in computer programming tasks. However, the author only solved simple programs, without comparing the results with human performance, and the work is limited to an exploratory analysis without empirical results. In reference [27], the authors discussed the challenges faced by behavior analysts in automating and systematizing experimental tasks. With the development of online platforms, OpenAI ChatGPT has emerged as a chatbot that can generate text responses similar to humans in a conversational context. One of its key functions is the ability to generate programming code blocks in various programming languages. The article presents the use of ChatGPT as a programming assistant to develop an online behavioral task using HTML, CSS, and JavaScript code. While ChatGPT cannot replace programmers entirely, it can provide detailed programming solutions and reduce the time associated with programming. The authors assess the performance of ChatGPT only on random problems in diverse directions; there is no quantitative evaluation of its performance, and the work lacks a comparison with human performance. The previous studies on ChatGPT mainly explored its performance in various contexts, but most of them did not follow a quantitative approach. In contrast, our study quantitatively evaluates ChatGPT's performance in solving IEEEXtreme problems and compares it to average human performance in three different programming languages. ## III Methodology ### _The IEEExtreme Competition_ There are several global programming competitions, including the IEEE Extreme Programming Competition (IEEEXtreme) [28], the ACM International Collegiate Programming Contest (ICPC) [29], Google Code Jam [30], the Facebook/Meta Hacker Cup [31], and the International Olympiad in Informatics (IOI) [32], to name a few. 
The IEEEXtreme is a global programming competition organized by the Institute of Electrical and Electronics Engineers (IEEE). The IEEEXtreme programming competition has been running annually since 2006. The number of participants in the competition has been increasing each year. In the early years, the competition had around 500 teams participating. In recent years, the number of participating teams has grown to around 10,000 or more, with participants from over 100 countries. The competition has become a major event in the global programming community, attracting top talent worldwide. The competition provides a platform for students to showcase their technical skills and talent. Winning the competition is a significant achievement that can help participants stand out to potential employers or graduate schools. It is a 24-hour coding marathon where teams of up to three students worldwide compete to solve a series of challenging programming problems. The problems are usually related to topics in computer science, mathematics, and engineering. The competition is designed to encourage and develop programming skills in students, as well as to promote teamwork and creativity. Participants must rely on their knowledge of algorithms, data structures, and programming languages to solve problems within the time limit. The competition is judged based on the number of problems solved, with ties broken based on the time taken to solve them. Each problem has a set number of test cases defined in the evaluation platform. Students need to provide a programming solution to the given problem by passing all the test cases to earn scores. ### _Selecting IEEExtreme Challenges_ IEEExtreme competition editions 11 and beyond were held on the CSacademy platform [33], while the earlier editions were conducted on the Hackerrank platform [34]. These platforms are accessible worldwide and open to all, and both may rely on Amazon Web Services (AWS) for their infrastructure. The entire problem set for IEEExtreme competitions versions 15 and 16, which were hosted in 2021 and 2022 respectively, are available on the CSacademy platform. The earlier versions are unfortunately not available. Selected problems from versions 8, 9, and 10 are available on the Hackerrank platform, however, these can be accessed using the practice community website only. In this work, we rely on the problem sets presented in IEEExtreme versions 8, 9, 10, 15, and 16. Table III shows a list of problems presented in each of these competitions. Each problem is classified based on difficulty level defined by the organizers as easy, medium, hard, and advanced. While all problems are available for IEEExtreme versions 9, 15, and 16, only a select few are available for versions 8 and 10, on the hackerrank [35] practice community website. ### _Solving and scoring a problem_ As mentioned in the previous section, each competition comprises a certain number of problems. A problem statement generally includes a brief description of the problem, its input and output format, and any constraints or limitations that apply to the solution. The problem may also include sample test cases that provide examples of the expected input and output. Participants are expected to write a computer program using a programming language of their choice, that solves the problem statement and produces the expected output. 
The solution is then submitted to the competition's online system, which tests the program against a variety of test cases and assigns a score based on its accuracy and efficiency. In recent versions of the competition, participants are expected to submit correct solutions that satisfy the execution time and memory limitations. This usually requires participants to post optimized solutions based on the correct choice of programming constructs, efficient data structures, and algorithms. To obtain scores for each problem, participants can submit solutions multiple times to pass most, if not all, of the hidden test cases. However, multiple submissions to the same problem typically result in point deductions, known as penalties, which are factored into the total score in case of tiebreakers between teams scoring the exact same number of points. A programmer may solve any of these problems using a programming language of their choice, including but not limited to C/C++, Java, and Python. At the end of the competition, the website displays relevant information for each problem, including the average score earned per team and the percentage of teams that attempted to solve the problem. This information provides the human-performance baseline used in this research work and is compared with AI performance, as explained in the next section. ### _ChatGPT Code Generation and Data Collection_ As the IEEEXtreme programming competition is open to participants worldwide, it is essential that each participant has access to the necessary resources to solve the problems. With this goal in mind, we have developed a method for participants in this study to simulate the experience of competing in IEEEXtreme by using ChatGPT to solve the problems. We evaluated the problem-solving performance of ChatGPT and human programmers using three primary programming languages: Python 3, Java 7, and C++. These languages were selected based on their popularity in the programming community and their suitability for solving the programming problems provided by the IEEEXtreme competition. We ensured that all participants were familiar with these languages and had equivalent levels of proficiency. As a problem appears, the participants simply copy and paste the problem statement from the competition website into the prompt space. ChatGPT then generates detailed results that include an explanation of the algorithm, complete executable code in the selected programming language, sample test cases, and other relevant information needed to solve the problem. If the provided code fails to execute for any of the reasons listed below, the participant repeats the process for a maximum of 7 trials, until either the code works correctly or the trials are exhausted (this loop is sketched in the code after the prompt list). To this end, several prompts are used to improve the quality of the results generated by ChatGPT. The reasons include: * Incomplete code produced * Compile errors * Runtime errors * Memory limit exceeded * Execution time limit exceeded * Failing test cases The following prompts are used to improve the quality of the generated response: * Provide a complete code for this problem using [language]. * Provide an optimized code using [language] that runs the program in a minimum time of x minutes and memory limitations of y Megabytes. * Following up on this code, improve it to solve this test case: [provided test case with the expected output generated for certain input]. 
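The trial loop referred to above can be summarized by the following minimal Python sketch. It is only an illustration of the workflow: in the study the steps were carried out manually through the ChatGPT web interface and the competition judge, so the callables `ask_chatgpt`, `run_on_judge`, and `refine_prompt` are hypothetical placeholders supplied by the caller, not an API used by the authors.

```python
# Illustrative sketch of the per-problem evaluation loop described above.
# ask_chatgpt, run_on_judge and refine_prompt are hypothetical placeholder
# callables standing in for the manual steps of the study.
MAX_TRIALS = 7

def evaluate_problem(statement, language, ask_chatgpt, run_on_judge, refine_prompt):
    """Run up to MAX_TRIALS attempts on one problem and return the best result."""
    prompt = f"Provide a complete code for this problem using {language}.\n\n{statement}"
    record = {"language": language, "trials": 0, "score": 0.0}
    for trial in range(1, MAX_TRIALS + 1):
        code = ask_chatgpt(prompt)                 # solution text generated by ChatGPT
        outcome = run_on_judge(code, language)     # e.g. {"score": 40.0, "reason": "failing test cases"}
        record["trials"] = trial
        record["score"] = max(record["score"], outcome["score"])
        if outcome["score"] == 100.0:              # all hidden test cases passed
            break
        # Otherwise re-prompt, quoting the failure reason (incomplete code, compile
        # or runtime error, memory/time limit exceeded, failing test case, ...).
        prompt = refine_prompt(code, outcome, language)
    return record
```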
The participant runs the code generated by ChatGPT on the platform and records the success rate, which includes the number of passed test cases and the maximum score earned for each problem. This procedure is replicated for all problems, utilizing all three programming languages. The results are recorded by all participants in a shared data repository. The data collected include: * Competition edition * Problem title identifier * Difficulty level of the problem * Language used (C++, Java or Python) * Number of trials/iterations * Score earned by ChatGPT-generated code * Average human performance for the problem Data is gathered from the five selected IEEEXtreme editions, encompassing a total of 102 problems. For every problem set, code is generated using each of the three programming languages, with at least three iterations performed. Participants conducted three to seven trials per iteration to complete the execution of the generated code. The data is analyzed to generate conclusions for this study. These are presented in the next sections. ## IV Discussion ### _Interpretation of the results_ Table III shows the average score that ChatGPT achieved on the set of programming problems for Python 3, Java 7, and C++ 11, as well as the average human performance on the same set of problems. ChatGPT's average score is significantly (from 3.9 to 5.8 times) lower than the average human performance for all three languages, which shows that there is still substantial room for improvement in ChatGPT's programming abilities. It is noteworthy that ChatGPT's performance varies among the three programming languages, with Java 7 showing the highest average score, followed by Python 3 and then C++. This observation may suggest that the size and quality of available learning materials for each language in ChatGPT's training data are not equal. Indeed, Java has been the most widely used language for many years and has extensive documentation, which could have contributed to ChatGPT's better performance on problems written in Java 7 compared to Python 3 and C++. Figure 1 shows the average ChatGPT and human performances for programming problems categorized by their complexity level. The complexity levels are Easy, Hard, Medium, and Advanced. As expected, when the complexity level of the problems increases, both ChatGPT's average score and human performance significantly decrease. However, the decrease is much sharper for ChatGPT: its score is 23 times lower for the Advanced category compared to the Easy category, while the decrease is only 2.4 times for human performance. It should be noted that the categories 'Hard' and 'Medium' used by the IEEEXtreme competition may not be accurate indicators of problem difficulty, as both humans and ChatGPT demonstrate significantly better performance in the 'Hard' category than in the 'Medium' category. This also highlights the subjective character of this categorization. On the other hand, the correlation coefficient between ChatGPT's and human scores is low (0.21), which indicates that the easiest programming problems for human programmers are not necessarily the easiest for ChatGPT to solve, and vice versa. This lack of correlation between ChatGPT's and human scores could be due to various factors, such as differences in problem-solving approaches and strategies used by ChatGPT and human programmers, variations in the level of programming expertise, and the type and complexity of the problems presented to them. 
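As a small illustration of how the per-problem records listed above can be turned into the aggregate statistics reported in this section, the sketch below computes a per-language average score and the Pearson correlation between ChatGPT and human scores. The field names and the two sample rows are hypothetical placeholders, not the study's actual data.

```python
# Hypothetical per-problem records with the fields collected in this study.
records = [
    {"edition": "Xtreme15", "problem": "A", "difficulty": "Easy",
     "language": "Python 3", "trials": 2, "chatgpt_score": 100.0, "human_avg": 62.0},
    {"edition": "Xtreme16", "problem": "B", "difficulty": "Hard",
     "language": "Python 3", "trials": 7, "chatgpt_score": 0.0, "human_avg": 35.0},
    # ... one row per (problem, language) pair in the shared repository
]

def mean(values):
    values = list(values)
    return sum(values) / len(values)

def pearson(xs, ys):
    """Pearson correlation coefficient (the statistic quoted as 0.21 above)."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

python_rows = [r for r in records if r["language"] == "Python 3"]
print("average ChatGPT score (Python 3):", mean(r["chatgpt_score"] for r in python_rows))
print("ChatGPT vs. human correlation:", pearson([r["chatgpt_score"] for r in records],
                                                 [r["human_avg"] for r in records]))
```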
Table IV further breaks down these results by complexity level. We notice again that ChatGPT performs better on Java for all complexity levels except the 'Advanced' category. For this category, all the tests on all problems completely failed, except one test on the "Finite Domain Constraints" problem from Xtreme9, which gave a partial success (12.74%) on Python only. Therefore, we cannot draw a general conclusion from this single partial success. On the other hand, the average human score presented in this table is the same for all three languages because it was provided by the IEEEXtreme website as an overall average over all programming languages, and no per-language average scores were available. Figure 2 shows the complete score distributions of ChatGPT and average human programmers. This figure clearly demonstrates the marked superiority of average human programmers over ChatGPT, with ChatGPT obtaining a score of zero in the large majority of cases (72%), while only 10.0% of cases correspond to an average human performance below 10%. Figure 3 compares the sunburst charts of ChatGPT and human scores per programming language and complexity level. The color of the inner sectors (representing programming languages and complexity categories) corresponds to the average color of the outer sectors belonging to them. The darker the color, the better the results. This figure provides additional evidence of the dominance of average human programmers over ChatGPT in almost all tested cases. To get an idea of the progress achieved in GPT-4 compared to GPT-3.5 in terms of programming capabilities, we tested their performance on a representative set of 6 problems using the Python 3 language. The results are presented in Table V. GPT-4 showed a slight improvement in one problem ("Counting Molecules"), with an average score increasing from 65% to 70% (but still lower than the average human score), and a clear improvement in another problem ("Painter's Dilemma"), where it went from complete failure to complete success. However, for the remaining 4 problems, both GPT-3.5 and GPT-4 obtained a score of zero. For the "Painter's Dilemma", which is an optimization problem, we also prompted ChatGPT to generate C++ and Java solutions. Both tests yielded a 100% success in GPT-4, compared to 4.76% and 0%, respectively, in GPT-3.5. These results indicate that the improvement in GPT-4's programming abilities compared to GPT-3.5 is limited to specific types of problems. ### _Limitations of ChatGPT in programming tasks_ Based on the results of our experiments, we can draw several general conclusions about the limitations of ChatGPT in programming tasks. First, ChatGPT's performance on programming tasks is significantly lower than that of an average human programmer, indicating that there is still a way to go before ChatGPT can fully match human intelligence in programming. This is especially true for more complex problems, where the performance gap between ChatGPT and humans is even more significant. This suggests that ChatGPT still has limitations in understanding and solving complex programming problems that require high-level reasoning and expertise. Second, the lack of correlation between ChatGPT's and human scores indicates that the easiest programming problems for human programmers are not necessarily the easiest for ChatGPT to solve, and vice versa. This suggests that ChatGPT relies on problem-solving approaches and strategies that differ from those of human programmers. 
Finally, although there have been some improvements in GPT-4 compared to GPT-3.5 in terms of programming capabilities, there is still a significant performance gap between ChatGPT and human programmers, especially for more complex problems. This suggests that there are still limitations in the current state-of-the-art language models for programming tasks, and that further research and development are needed to bridge the gap between ChatGPT's performance and that of human programmers. Fig. 3: Sunburst charts comparing programming language proficiency scores between ChatGPT (left) and human programmers (right) across different complexity levels and programming languages. The scores in the outer sectors have been rounded to the nearest 10. In summary, while ChatGPT represents a significant breakthrough in language modeling, its limitations in programming tasks suggest that there is still much room for improvement. Further research and development are needed to improve ChatGPT's performance on programming tasks, especially for more complex problems, and to bridge the gap between ChatGPT's performance and that of human programmers. ### _Implications for AI development and applications_ The implications of these results for AI development and applications in the programming field are significant. While ChatGPT and other language models have shown promise in natural language processing and generation, their limitations in complex programming tasks indicate that they may not be suitable for fully automated programming, at least not yet. However, they can still be useful for tasks such as the generation of simple programs, code completion, code summarization, and documentation generation. To fully leverage the potential of language models in programming, further research is needed to develop models that can understand and reason about code in the same way as human programmers. This will require a better understanding of the cognitive processes involved in programming and the ability to incorporate this knowledge into AI models. Additionally, more comprehensive and diverse datasets need to be developed that better capture the variety of programming tasks and languages used in real-world programming. Overall, the limitations of ChatGPT in programming tasks highlight the need for continued research and development in AI and programming, and the importance of understanding the strengths and limitations of AI models in different contexts. ## V Conclusion Numerous studies have demonstrated the impressive performance of ChatGPT, which often rivals or even surpasses human capabilities in various tasks and domains. However, this paper presented an alternative perspective by showing a situation where human performance surpasses that of ChatGPT on tasks that would seem well suited to it, specifically relatively complex computer programming. To evaluate this claim quantitatively, we used the IEEEXtreme programming competition as a benchmark, which offers a range of programming problems with varying levels of difficulty. We executed a diverse set of 102 challenges drawn from five IEEEXtreme editions, using three major programming languages: Python, Java, and C++. We then compared ChatGPT's score to the average score achieved by human competitors. Our empirical analysis demonstrated that human programmers maintain a significant advantage over ChatGPT in certain aspects of problem-solving within the programming context. 
This paper offers critical insights into the potential areas of improvement for ChatGPT and other AI-based language models. Future research could investigate the factors that enable humans to outperform ChatGPT in programming tasks and explore ways to address the limitations of AI-based language models in this area, such as improving their understanding of programming languages and their ability to work with complex code structures.
2302.13420
Tangent-filling plane curves over finite fields
We study plane curves over finite fields whose tangent lines at smooth $\mathbb{F}_q$-points together cover all the points of $\mathbb{P}^2(\mathbb{F}_q)$.
Shamil Asgarli, Dragos Ghioca
2023-02-26T22:16:23Z
http://arxiv.org/abs/2302.13420v2
# Tangent-filling plane curves over finite fields ###### Abstract. We study plane curves over finite fields whose tangent lines at smooth \(\mathbb{F}_{q}\)-points together cover all the points of \(\mathbb{P}^{2}(\mathbb{F}_{q})\). Key words and phrases:Tangent-filling, plane curves, finite fields 2020 Mathematics Subject Classification: Primary 14H50, 11G20; Secondary 14G15, 14N05 ## 1. Introduction The investigation of algebraic curves over finite fields is an ever-growing research topic. Stemming from the intersection of algebra, number theory and algebraic geometry, it influences a wide array of fields such as coding theory and combinatorial design theory [1]. As one specific example in this vast body of work, finding curves with many \(\mathbb{F}_{q}\)-rational points remains an interesting challenge. The motivation behind searching for extremal curves ranges from purely theoretical reasons (e.g. understanding the sharpness of Hasse-Weil inequality) to more applied constructions (e.g. obtaining a rich configuration of points). It is already instructive to restrict attention to plane curves. We list a few different definitions from the literature for a given projective irreducible plane curve \(C\subset\mathbb{P}^{2}\) of degree \(d\) over a finite field \(\mathbb{F}_{q}\) to have "a lot of \(\mathbb{F}_{q}\)-rational points". (a) We say that \(C\) is a _maximal curve_ if \(\#C(\mathbb{F}_{q})=q+1+(d-1)(d-2)\sqrt{q}\), namely, the curve achieves the equality in the Hasse-Weil upper bound for its \(\mathbb{F}_{q}\)-rational points. (b) We say that \(C\) is _plane-filling_ if \(C(\mathbb{F}_{q})=\mathbb{P}^{2}(\mathbb{F}_{q})\), that is, \(C\) contains each of the \(q^{2}+q+1\) distinct \(\mathbb{F}_{q}\)-points of \(\mathbb{P}^{2}\). (c) We say that \(C\) is _blocking_ if \(C(\mathbb{F}_{q})\) is a blocking set, that is, \(C\) meets every \(\mathbb{F}_{q}\)-line \(L\) at some \(\mathbb{F}_{q}\)-point. The main purpose of the present paper is to introduce a new concept that indicates in yet another way that the curve contains many \(\mathbb{F}_{q}\)-points. (d) We say that \(C\) is _tangent-filling_ if every point \(P\in\mathbb{P}^{2}(\mathbb{F}_{q})\) lies on a tangent line \(T_{Q}C\) to the curve \(C\) at some smooth \(\mathbb{F}_{q}\)-point \(Q\). Regarding the literature, we note that curves satisfying (a) have been thoroughly studied in many papers ranging from foundational work [1, 1, 2] to the more recent discoveries [1, 1]. The curves satisfying (b) have been analyzed by [1, 1, 2]. Finally, the curves satisfying (c) have been recently examined by the authors in joint work with Yip [1, 2, 1, 3, 4]. Our first theorem shows that a curve of a low degree cannot be tangent-filling. We first state the result when the ground field is \(\mathbb{F}_{p}\) for some prime \(p\). For convenience, we state the result for \(d\geq 3\) and discuss the case \(d=2\) in Remark 2.2. **Theorem 1.1**.: _Let \(C\subset\mathbb{P}^{2}\) be an irreducible plane curve of degree \(d\geq 3\) defined over \(\mathbb{F}_{p}\) where \(p\) is a prime. If \(p\geq 4(d-1)^{2}(d-2)^{2}\), then \(C\) is not tangent-filling._ We have an analogous result for an arbitrary finite field \(\mathbb{F}_{q}\). **Theorem 1.2**.: _Let \(C\subset\mathbb{P}^{2}\) be an irreducible plane curve of degree \(d\geq 2\) defined over \(\mathbb{F}_{q}\). If \(p>d\) and \(q\geq d^{2}(d-1)^{6}\), then \(C\) is not tangent-filling._ Let us briefly compare the bounds in these two theorems. 
The bound \(p\geq O(d^{4})\) in Theorem 1.1 is replaced with a pair of bounds \(p>d\) and \(q\geq O(d^{8})\) in Theorem 1.2. From one perspective, Theorem 1.2 provides worse bounds on \(q\), and it remains open to improve \(q\geq O(d^{8})\) to \(q\geq O(d^{4})\). From another perspective, Theorem 1.2 provides better bounds on the characteristic \(p\); for instance, when \(q=p^{n}\) with \(n=4\), the bound \(p^{4}=q\geq O(d^{8})\) is equivalent to \(p\geq O(d^{2})\), which is a weaker hypothesis than the earlier bound \(p\geq O(d^{4})\). It is also natural to restrict our attention to the more special class of smooth curves; in this case, Remark 2.3 explains how to obtain a slightly improved result. We are also interested in finding examples of tangent-filling curves. Clearly, any smooth plane-filling curve is tangent-filling. Since the degree of the smallest plane-filling curve over \(\mathbb{F}_{q}\) is \(q+2\) by [13], it is natural to search for tangent-filling curves with degrees less than \(q+2\). Our next theorem exhibits an example of a tangent-filling curve of degree \(q-1\). **Theorem 1.3**.: _Let \(q\geq 11\) and \(p=\operatorname{char}(\mathbb{F}_{q})>3\). The curve \(C\) defined by the equation_ \[x^{q-1}+y^{q-1}+z^{q-1}-3(x+y+z)^{q-1}=0\] _is an irreducible tangent-filling curve._ _Remark 1.4_.: We note that if \(\operatorname{char}(\mathbb{F}_{q})=2\) in Theorem 1.3, then the curve \(C\) is reducible, as it contains the lines \(x=y\), \(y=z\) and \(z=x\). On the other hand, if \(\operatorname{char}(\mathbb{F}_{q})=3\), then the curve \(C\) in Theorem 1.3 is smooth, but it is not tangent-filling since no tangent line at a point of \(C(\mathbb{F}_{q})\) passes through any of the points \([1:0:0]\), \([0:1:0]\) and \([0:0:1]\). This claim can be easily checked since the points \([x_{0}:y_{0}:z_{0}]\in C(\mathbb{F}_{q})\) have the property that \(x_{0}y_{0}z_{0}\neq 0\) (the proof of this fact is similar to that of Lemma 3.2, which characterizes the \(\mathbb{F}_{q}\)-points of \(C\) when \(\operatorname{char}(\mathbb{F}_{q})>3\)), while the equation of the tangent line at the point \([x_{0}:y_{0}:z_{0}]\in C(\mathbb{F}_{q})\) is \[x_{0}^{q-2}\cdot x+y_{0}^{q-2}\cdot y+z_{0}^{q-2}\cdot z=0.\] Finally, a simple computer check shows that the curve \(C\) from Theorem 1.3 is not tangent-filling when \(q\in\{5,7\}\) (see also Remark 3.7). While we expect that \(d=q-1\) is not the smallest possible degree of a tangent-filling curve, we believe that Theorem 1.3 is novel in several ways. First, checking the tangent-filling condition over \(\mathbb{F}_{q}\) requires a careful analysis of the \(\mathbb{F}_{q}\)-points of the curve. Second, in our previous work with Yip [1], we found several families of blocking smooth curves of degree less than \(q\), and so it was natural to test whether those families are also tangent-filling; however, none of the tested families of blocking smooth curves turned out to be tangent-filling. This suggests that finding tangent-filling curves may be very challenging, much more so than in the case of blocking curves. In particular, finding tangent-filling curves of degree less than \(q\) seems very difficult in general. Quite interestingly, the curve from Theorem 1.3 is _not_ blocking since \(C(\mathbb{F}_{q})\) does not intersect the \(\mathbb{F}_{q}\)-lines \(x=0\), \(y=0\), \(z=0\) and \(x+y+z=0\) (see also Corollary 3.3). We remark that when \(q\) has a special form, better examples exist. 
The noteworthy example is the Hermitian curve \(\mathcal{H}_{q}\) defined by \(x^{\sqrt{q}+1}+y^{\sqrt{q}+1}+z^{\sqrt{q}+1}=0\) when \(q\) is a square. We will see in Example 3.1 that \(\mathcal{H}_{q}\) is a tangent-filling curve. Thus, for \(q\) square, there is a (smooth) tangent-filling curve of degree \(\sqrt{q}+1\). Inspired by the example in the previous paragraph, we may ask for the most optimal curve that has the tangent-filling property. **Question 1.5**.: What is the minimum degree of an irreducible tangent-filling plane curve over \(\mathbb{F}_{q}\)? Let us explain a heuristic that suggests that the optimal degree may not be too far away from \(\sqrt{q}\) even for a general \(q\). Consider a collection \(\mathcal{L}\) of \(\mathbb{F}_{q}\)-lines such that \[\bigcup_{L\in\mathcal{L}}L(\mathbb{F}_{q})=\mathbb{P}^{2}(\mathbb{F}_{q}). \tag{1.1}\] By viewing each line as a point in the dual space \((\mathbb{P}^{2})^{*}\), the condition (1.1) is equivalent to \(\mathcal{L}\) being a blocking set in \((\mathbb{P}^{2})^{*}(\mathbb{F}_{q})\). There are plenty of blocking sets with size a constant multiple of \(q\); for instance, the so-called _projective triangle_, a well-known example of a blocking set, has \(\frac{3}{2}(q+1)\) points for odd \(q\)[10]. So, we choose \(\mathcal{L}\) that satisfies (1.1) and \(|\mathcal{L}|=c_{0}q\) for some constant \(c_{0}>0\). Next, suppose that it is possible to pick distinct \(\mathbb{F}_{q}\)-points \(P_{i}\in L_{i}\) for each \(L_{i}\in\mathcal{L}\), so that \(P_{i}\neq P_{j}\) for \(i\neq j\). Let us impose the condition that a degree \(d\) curve passes through the point \(P_{i}\) and has contact order at least \(2\) with the line \(L_{i}\) at the point \(P_{i}\). For each value of \(i\), this imposes \(2\) linear conditions in the parameter space \(\mathbb{P}^{N}\) of plane curves of degree \(d\), where \(N=\binom{d+2}{2}-1\). Assuming that \(\binom{d+2}{2}-1>2|\mathcal{L}|=2c_{0}q\), we obtain a curve of degree \(d\) satisfying each of these local conditions. By construction, each such curve is tangent to the line \(L_{i}\) at the point \(P_{i}\), and tangent-filling property is enforced by (1.1). The main issue is that all such resulting curves may be singular at one (or more) of the points \(P_{i}\). While the bound of the form \(d>c_{1}\sqrt{q}\) for some constant \(c_{1}>0\) is predicted by this heuristic, it seems very challenging to make this interpolation argument precise. ### Structure of the paper In Section 2, we borrow tools from classical algebraic geometry and combinatorics of blocking sets to prove our Theorems 1.2 and 1.1. In Section 3, we prove Theorem 1.3 by studying in detail the geometric properties of the given curve \(C\), such as its singular locus and irreducibility, along with an arithmetic analysis for the equation of a tangent line at a smooth \(\mathbb{F}_{q}\)-point of \(C\). ### Acknowledgments We thank the anonymous referee for their useful comments and suggestions, which improved our presentation. ## 2. Curves of low degree are not tangent-filling In this section, we prove Theorem 1.2 and Theorem 1.1. We start with preliminary geometric constructions. Given a plane curve \(C\), recall that the dual curve \(C^{*}\) parametrizes the tangent lines to \(C\). More formally, \(C^{*}\) is the closure of the image of the Gauss map \(\gamma_{G}\colon C\to(\mathbb{P}^{2})^{*}\) mapping a regular point \(P\) on \(C\) to the line \(T_{P}C\). 
When the Gauss map \(\gamma_{G}\) is separable, the geometry of the tangent lines to the curve in characteristic \(p\) is similar to the behaviour observed in characteristic \(0\). It turns out that the curve \(C\) is _reflexive_ (that is, the double dual \((C^{*})^{*}\) can be canonically identified with \(C\) itself) if and only if \(\gamma_{G}\) is separable [14]. Thus, all curves in characteristic \(0\) are reflexive. In positive characteristic \(p\), the condition \(p>d\) is sufficient to ensure that a plane curve of degree \(d\) is reflexive [11, Proposition 1.5]. ### Bitangents For a given plane curve \(C\), we say that a line \(L\) is _bitangent_ to \(C\) if \(L\) is tangent to the curve \(C\) in at least two points. The following is a well-known result in classical algebraic geometry; we include its proof to emphasize how the hypothesis \(p>d\) is used. Since it is possible to have a curve with infinitely many bitangents [10, Example 2], the lemma below would not be true if we completely remove the assumption \(p>d\). **Lemma 2.1**.: _Let \(C\subset\mathbb{P}^{2}\) be a geometrically irreducible plane curve of degree \(d\geq 2\) defined over \(\mathbb{F}_{q}\) such that \(p>d\). Then \(C\) has at most \(\frac{1}{2}d^{2}(d-1)^{2}\) many bitangents._ Proof.: The condition \(p>d\) guarantees that the Gauss map \(\gamma_{G}\) is separable. The dual curve \(C^{*}\) has degree \(\delta\leq d(d-1)\). Since \(C^{*}\) is geometrically irreducible, it has at most \(\binom{\delta-1}{2}\) many singular points [11, Exercise 20.18]. Every bitangent of the curve \(C\) corresponds to some singular point of \(C^{*}\), because \(\gamma_{G}\) is separable [12]. Thus, the number of bitangents to \(C\) is at most \[\binom{\delta-1}{2}\leq\binom{d(d-1)-1}{2}=\frac{1}{2}\left(d^{2}-d-1\right) \left(d^{2}-d-2\right)\leq\frac{1}{2}\left(d^{2}-d\right)\left(d^{2}-d\right)\] as desired. The previous lemma would hold if we replaced the hypothesis \(p>d\) with the weaker hypothesis that the Gauss map of \(C\) is separable. ### Strange curves We say that an irreducible plane curve \(C\) of degree \(d\geq 2\) over a field \(K\) is _strange_ if all the tangent lines to the curve \(C\) at its smooth \(\overline{K}\)-points are concurrent. This is equivalent to requiring that the dual curve of \(C\) is a line. Since the double dual of a strange curve cannot be the original curve, it follows that strange curves must be nonreflexive. In particular, strange curves can only exist when \(p=\operatorname{char}(K)>0\). Strange curves do exist [10, Example 1]: for instance, all the tangent lines to the curve \(xy^{p-1}-z^{p}=0\) pass through the point \([0:0:1]\). The paper [1] contains several results on various properties and characterizations of strange curves. As mentioned in the beginning of the section, the hypothesis \(p>d\) ensures that the curve is reflexive. Thus, a plane curve of degree \(d\geq 2\) cannot be strange whenever \(p>d\). This fact will be crucially used in the proofs below, when we verify that the \(\mathbb{F}_{q}\)-points of the dual curve \(C^{*}\) do not produce a trivial blocking set. ### Proofs of Theorem 1.1 and Theorem 1.2 We now present the proof of our first main theorem which roughly states that tangent-filling curves over \(\mathbb{F}_{p}\) cannot exist when \(p\) is larger than a quadratic function of \(d\). Proof of Theorem 1.1.: We first assume that \(C\) is geometrically irreducible. 
We start by observing that the hypothesis \(p\geq 4(d-1)^{2}(d-2)^{2}\) implies \(p>d\) for \(d\geq 3\). Thus, the curve \(C\) is reflexive, and in particular, \(C\) is not strange, meaning that \(\deg(C^{*})>1\). By applying Hasse-Weil bound [1, Corollary 2.5], we have \[\#C(\mathbb{F}_{p})\leq p+1+(d-1)(d-2)\sqrt{p}.\] Suppose, to the contrary, that \(C\) is tangent-filling. Let \(B\subseteq C^{*}(\mathbb{F}_{q})\) correspond to the set of tangent \(\mathbb{F}_{p}\)-lines to the curve \(C\) at smooth \(\mathbb{F}_{p}\)-points. It is clear that \[\#B\leq\#C(\mathbb{F}_{p})\leq p+1+(d-1)(d-2)\sqrt{p}. \tag{2.1}\] Note that \(B\) is a blocking set by definition of tangent-filling; indeed, each \(\mathbb{F}_{p}\)-line in the dual projective plane parametrizes lines passing through a fixed point, so \(B\) meets every \(\mathbb{F}_{p}\)-line in the dual space. Since \(1<\deg(C^{*})\leq d(d-1)<p+1\), the set \(B\) is a non-trivial blocking set, that is, \(B\) cannot contain all the \(\mathbb{F}_{p}\)-points of some \(\mathbb{F}_{p}\)-line in \((\mathbb{P}^{2})^{*}(\mathbb{F}_{p})\). Indeed, \(C^{*}\) is irreducible (as it is the image of the irreducible curve \(C\) through the map \(\gamma_{G}\)) and has degree less than \(p+1\). By Blokhuis' theorem [1], \[\#B\geq\frac{3}{2}(p+1). \tag{2.2}\] Combining (2.1) and (2.2), we get \(p+1+(d-1)(d-2)\sqrt{p}\geq\frac{3}{2}(p+1)\) which contradicts the hypothesis \(p\geq 4(d-1)^{2}(d-2)^{2}\). Now, suppose that \(C\) is not geometrically irreducible. Since \(C\) is irreducible but not geometrically irreducible, we conclude that \(\#C(\mathbb{F}_{p})\leq\frac{d^{2}}{4}\) (see [1, Lemma 2.3] or [1, Remark 2.2]). In particular, the number of distinct tangent \(\mathbb{F}_{p}\)-tangent lines to \(C\) is at most \(\frac{d^{2}}{4}\). Since each \(\mathbb{F}_{p}\)-line covers \(p+1\) points of \(\mathbb{P}^{2}(\mathbb{F}_{p})\), all the tangent lines to \(C\) at its smooth \(\mathbb{F}_{p}\)-points together can cover at most \(\frac{d^{2}}{4}\cdot(p+1)\) distinct \(\mathbb{F}_{q}\)-points. Since \(p\geq 4(d-1)^{2}(d-2)^{2}\), it is immediate that \(\frac{d^{2}}{4}\cdot(p+1)<p^{2}+p+1\), so \(C\) is not tangent-filling. _Remark 2.2_.: Note that the inequality \(p\geq 4(d-1)^{2}(d-2)^{2}\) automatically implies \(p>d\) when \(d\geq 3\). However, when \(d=2\), the inequality \(p\geq 4(d-1)^{2}(d-2)^{2}\) is vacuous, and \(p=2\) is allowed. When \(p=2\) and \(d=2\), the smooth conics are strange curves, which are therefore tangent-filling because the tangent lines at the \(\mathbb{F}_{q}\)-rational points of this conic are all the \(q+1\) lines passing through some given point in \(\mathbb{P}^{2}(\mathbb{F}_{q})\). So, Theorem 1.1 does not hold when \(p=d=2\); on the other hand, Theorem 1.1 continues to hold when \(d=2\) and \(p>2\) with essentially the same proof as the one above. We proceed to prove our second main result concerning tangent-filling curves over an arbitrary finite field \(\mathbb{F}_{q}\). Proof of Theorem 1.2.: We first assume that the curve \(C\) is geometrically irreducible, that is, irreducible over \(\overline{\mathbb{F}_{q}}\). We claim that \(C^{*}\) is not a blocking curve. Suppose, to the contrary, that \(C^{*}(\mathbb{F}_{q})\) is a blocking set in \((\mathbb{P}^{2})^{*}(\mathbb{F}_{q})\). Since \(p>d\), the curve \(C\) is not strange, that is, \(\deg(C^{*})>1\). Since \(1<\deg(C^{*})\leq d(d-1)<q+1\), the set \(B\) is a _non-trivial_ blocking set by the same reasoning given in the proof of Theorem 1.1. 
By [1, Lemma 4.1], \[\#C^{*}(\mathbb{F}_{q})>q+\frac{q+\sqrt{q}}{\deg(C^{*})}\geq q+\frac{q+\sqrt{q}}{d(d-1)}. \tag{2.3}\] On the other hand, the number of \(\mathbb{F}_{q}\)-points on the dual curve \(C^{*}\) is bounded above: \[\#C^{*}(\mathbb{F}_{q})\leq\#C(\mathbb{F}_{q})+\#\{\text{bitangents to $C$ defined over $\mathbb{F}_{q}$}\}. \tag{2.4}\] Combining Lemma 2.1, the Hasse-Weil bound applied to \(C\) [1, Corollary 2.5], and (2.4), we obtain an upper bound: \[\#C^{*}(\mathbb{F}_{q})\leq q+1+(d-1)(d-2)\sqrt{q}+\frac{1}{2}d^{2}(d-1)^{2}. \tag{2.5}\] Comparing (2.3) and (2.5), we obtain \[(d-1)(d-2)\sqrt{q}+\frac{1}{2}d^{2}(d-1)^{2}+1>\frac{q+\sqrt{q}}{d(d-1)},\] or equivalently, \[d(d-1)^{2}(d-2)\sqrt{q}+\frac{1}{2}d^{3}(d-1)^{3}+d(d-1)>q+\sqrt{q}. \tag{2.6}\] Since \(\sqrt{q}\geq d(d-1)^{3}\), we have \(\sqrt{q}\geq\frac{1}{2}d^{2}(d-1)\), which allows us to deduce \[q+\sqrt{q} \geq d(d-1)^{2}\cdot((d-1)\sqrt{q})+\sqrt{q}\] \[\geq d(d-1)^{2}\cdot((d-2)\sqrt{q}+\sqrt{q})+d(d-1)\] \[\geq d(d-1)^{2}\cdot\big{(}(d-2)\sqrt{q}+\tfrac{1}{2}d^{2}(d-1)\big{)}+d(d-1)\] \[=d(d-1)^{2}(d-2)\sqrt{q}+\frac{1}{2}d^{3}(d-1)^{3}+d(d-1)\] contradicting (2.6). We conclude that \(C^{*}\) is not a blocking curve, which means that \(C\) is not tangent-filling. When \(C\) is irreducible but not geometrically irreducible, we know that \(\#C(\mathbb{F}_{q})\leq\frac{d^{2}}{4}\), so we apply the same argument (with \(p\) replaced with \(q\) everywhere) given at the end of the proof of Theorem 1.1. We conclude that \(C\) is still not tangent-filling. _Remark 2.3_.: Kaji [10] proved that the Gauss map of a smooth plane curve over \(\overline{\mathbb{F}_{q}}\) must be purely inseparable. Consequently, a smooth plane curve must have finitely many bitangents. Moreover, the only smooth strange curves are conics in characteristic 2. These observations together tell us that Theorem 1.2 holds for smooth curves even when the hypothesis \(p>d\) is removed, as long as \(d\geq 3\). ## 3. Explicit examples of tangent-filling curves We start with an example of a plane curve of degree \(\sqrt{q}+1\) which is tangent-filling over \(\mathbb{F}_{q}\) when \(q\) is a square. **Example 3.1**.: Let \(q\) be a prime power such that \(q\) is a square. The curve \(\mathcal{H}_{q}\) defined by \[x^{\sqrt{q}+1}+y^{\sqrt{q}+1}+z^{\sqrt{q}+1}=0\] is tangent-filling over \(\mathbb{F}_{q}\). The curve \(\mathcal{H}_{q}\) is known as the _Hermitian curve_ in the literature. It can be checked that \(\mathcal{H}_{q}\) has exactly \((\sqrt{q})^{3}+1\) distinct \(\mathbb{F}_{q}\)-points. Moreover, the set \(\mathcal{H}_{q}(\mathbb{F}_{q})\) forms a _unital_ in the sense of combinatorial geometry, meaning that the points can be arranged into subsets of size \(\sqrt{q}+1\) so that any two points of \(\mathcal{H}_{q}(\mathbb{F}_{q})\) lie in a unique subset. In particular, it can be shown that every \(\mathbb{F}_{q}\)-line meets \(\mathcal{H}_{q}(\mathbb{F}_{q})\) in either \(1\) or \(\sqrt{q}+1\) points [1, Theorem 2.2]. As a result, \(\mathcal{H}_{q}\) is a blocking curve over \(\mathbb{F}_{q}\). To show that \(\mathcal{H}_{q}\) is a tangent-filling curve, we let \(P_{0}=[a:b:c]\) be a point in \(\mathbb{P}^{2}(\mathbb{F}_{q})\). We are searching for a point \(Q=[x_{0}:y_{0}:z_{0}]\in\mathcal{H}_{q}(\mathbb{F}_{q})\) such that \(T_{Q}\mathcal{H}_{q}\) contains \(P_{0}\). This is equivalent to finding \([x_{0}:y_{0}:z_{0}]\in\mathcal{H}_{q}(\mathbb{F}_{q})\) such that \[x_{0}^{\sqrt{q}}a+y_{0}^{\sqrt{q}}b+z_{0}^{\sqrt{q}}c=0. 
\tag{3.1}\] Note that the map \([x:y:z]\mapsto[x^{\sqrt{q}}:y^{\sqrt{q}}:z^{\sqrt{q}}]\) is a bijection on the set \(\mathbb{P}^{2}(\mathbb{F}_{q})\), and therefore also on \(\mathcal{H}_{q}(\mathbb{F}_{q})\) because \(\mathcal{H}_{q}(\mathbb{F}_{q})\) is defined over \(\mathbb{F}_{q}\). Thus, there exists \([x_{1}:y_{1}:z_{1}]\in\mathcal{H}_{q}(\mathbb{F}_{q})\) with the property that \[[x_{0}:y_{0}:z_{0}]=\left[x_{1}^{\sqrt{q}}:y_{1}^{\sqrt{q}}:z_{1}^{\sqrt{q}} \right].\] In other words, it suffices to find \([x_{1}:y_{1}:z_{1}]\in\mathcal{H}_{q}(\mathbb{F}_{q})\) such that \[x_{1}^{q}a+y_{1}^{q}b+z_{1}^{q}c=0. \tag{3.2}\] Since \(x_{1},y_{1},z_{1}\) are elements of \(\mathbb{F}_{q}\), we see that (3.2) is equivalent to \[x_{1}a+y_{1}b+z_{1}c=0. \tag{3.3}\] Let \(L\) be the \(\mathbb{F}_{q}\)-line defined by \(ax+by+cz=0\). Since \(\mathcal{H}_{q}(\mathbb{F}_{q})\) is a blocking set, the equation (3.3) is satisfied for some \(Q=[x_{1}:y_{1}:z_{1}]\in\mathcal{H}_{q}(\mathbb{F}_{q})\), as claimed. This argument also shows that the dual of the Hermitian curve is isomorphic to itself. For the remainder of the paper, we will focus on the curve \(C\) defined by the equation \[x^{q-1}+y^{q-1}+z^{q-1}-3(x+y+z)^{q-1}=0. \tag{3.4}\] Unless otherwise stated, we will assume that \(p=\operatorname{char}(\mathbb{F}_{q})>3\). We will study the curve \(C\) by first finding the singular points, and then checking that \(C\) is irreducible. Finally, we will prove that \(C\) is tangent-filling, establishing Theorem 1.3. ### Rational points of the curve We start by finding all the \(\mathbb{F}_{q}\)-points on \(C\). **Lemma 3.2**.: _The set \(C(\mathbb{F}_{q})\) is equal to the set of all points \([x:y:z]\in\mathbb{P}^{2}(\mathbb{F}_{q})\) such that_ \[xyz(x+y+z)\neq 0.\] Proof.: Since \(x^{q-1}=1\) holds for every \(x\in\mathbb{F}_{q}^{*}\), the conclusion is clear from (3.4). **Corollary 3.3**.: _The curve \(C\) is not blocking._ Proof.: Consider the \(\mathbb{F}_{q}\)-line \(L=\{z=0\}\). Then \(C\cap L\) has no \(\mathbb{F}_{q}\)-points due to the condition in Lemma 3.2. Thus, \(C(\mathbb{F}_{q})\) is not a blocking set. ### Singular points of the curve Our goal is to determine the singular points of the curve \(C\) over \(\overline{\mathbb{F}_{q}}\). **Proposition 3.4**.: _The curve \(C\) has only one singular point, namely \([1:1:1]\)._ Proof.: By looking at the partial derivatives of the defining polynomial in (3.4), any singular point \([x_{0}:y_{0}:z_{0}]\) of \(C\) must satisfy, \[x_{0}^{q-2}=y_{0}^{q-2}=z_{0}^{q-2}=3(x_{0}+y_{0}+z_{0})^{q-2}. \tag{3.5}\] In particular, any singular point \([x_{0}:y_{0}:z_{0}]\in C(\overline{\mathbb{F}_{q}})\) satisfies: \[x_{0}y_{0}z_{0}(x_{0}+y_{0}+z_{0})\neq 0. \tag{3.6}\] So, without loss of generality, we may assume that \(z_{0}=1\). Thus, a potential singular point takes the form \([x_{0}:y_{0}:1]\) and satisfies \(x_{0}y_{0}\neq 0\) by equation (3.6). Applying (3.5), we get \[x_{0}^{q-2}=y_{0}^{q-2}=3(x_{0}+y_{0}+1)^{q-2}=1. \tag{3.7}\] We begin by computing the expression \((x_{0}+y_{0}+1)^{q-2}\), \[(x_{0}+y_{0}+1)^{q-2}=\frac{(x_{0}+y_{0}+1)^{q}}{(x_{0}+y_{0}+1)^{2}}=\frac{1+x _{0}^{q}+y_{0}^{q}}{(1+x_{0}+y_{0})^{2}}. \tag{3.8}\] The two equations (3.7) and (3.8) together give, \[\frac{3+3x_{0}^{2}+3y_{0}^{2}}{(1+x_{0}+y_{0})^{2}}=1. 
\tag{3.9}\] We can rearrange (3.9) into \[x_{0}^{2}+y_{0}^{2}-x_{0}y_{0}-x_{0}-y_{0}+1=0\] which can be expressed as a degree 2 equation in \(y_{0}\): \[y_{0}^{2}-y_{0}(x_{0}+1)+x_{0}^{2}-x_{0}+1=0.\] Solving for \(y_{0}\), we obtain \[y_{0}=\frac{x_{0}+1+(x_{0}-1)\gamma}{2} \tag{3.10}\] where \(\gamma\) satisfies \(\gamma^{2}=-3\). We compute \(y_{0}^{q}\) using (3.10): \[y_{0}^{q}=\frac{x_{0}^{q}+1+(x_{0}^{q}-1)\gamma^{q}}{2}. \tag{3.11}\] We also compute \(y_{0}^{2}\) using (3.10): \[y_{0}^{2}=\frac{(x_{0}^{2}+2x_{0}+1)+2(x_{0}+1)(x_{0}-1)\gamma+(x_{0}^{2}-2x_{ 0}+1)\cdot(-3)}{4}\] which simplifies to: \[y_{0}^{2}=\frac{-x_{0}^{2}+4x_{0}-1+(x_{0}^{2}-1)\gamma}{2}. \tag{3.12}\] Since \(y_{0}^{q-2}=1\) by (3.7), we know that \(y_{0}^{q}=y_{0}^{2}\). Equating (3.11) and (3.12), \[\frac{-x_{0}^{2}+4x_{0}-1+(x_{0}^{2}-1)\gamma}{2}=\frac{x_{0}^{q}+1+(x_{0}^{q} -1)\gamma^{q}}{2}. \tag{3.13}\] We proceed by analyzing two cases, depending on whether \(\gamma\in\mathbb{F}_{q}\) or \(\gamma\notin\mathbb{F}_{q}\). **Case 1.**\(\gamma\in\mathbb{F}_{q}\). In this case, we have \(\gamma^{q}=\gamma\). Using \(x_{0}^{q}=x_{0}^{2}\) which is implied by (3.7), the equation (3.13) yields, \[\frac{-x_{0}^{2}+4x_{0}-1+(x_{0}^{2}-1)\gamma}{2}=\frac{x_{0}^{2}+1+(x_{0}^{2} -1)\gamma}{2}.\] which simplifies to \((x_{0}-1)^{2}=0\), and so \(x_{0}=1\). Using (3.10), we obtain \(y_{0}=1\) as well. This results in the singular point \([1:1:1]\) of the curve \(C\). **Case 2.**\(\gamma\notin\mathbb{F}_{q}\). In this case, \(\gamma\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) because \(\gamma^{2}=-3\). Since \(\gamma^{q}\) is the Galois conjugate of \(\gamma\), we have \(\gamma^{q}=-\gamma\). Thus, (3.13) yields, \[\frac{-x_{0}^{2}+4x_{0}-1+(x_{0}^{2}-1)\gamma}{2}=\frac{x_{0}^{q}+1-(x_{0}^{q} -1)\gamma}{2}.\] This simplifies (due to \(x_{0}^{q}=x_{0}^{2}\)) to, \[(x_{0}-1)^{2}=(x_{0}^{2}-1)\gamma.\] We can eliminate the case \(x_{0}=1\) because that will only bring us back to the singular point \([1:1:1]\) already analyzed in the previous case. After dividing both sides of the preceding equation by \(x_{0}-1\), and solving for \(x_{0}\), we get \[x_{0}=\frac{1+\gamma}{1-\gamma}. \tag{3.14}\] Using the relation \(\gamma^{2}=-3\), the formula (3.14) simplifies to, \[x_{0}=\frac{\gamma-1}{2}. \tag{3.15}\] Applying (3.10), we obtain \[y_{0}=\frac{-\gamma-1}{2}. \tag{3.16}\] Since \(\gamma^{2}=-3\), we have two solutions (once \(\gamma\) is chosen, \(-\gamma\) is also a solution). Thus, (3.15) and (3.16) allow us to conclude that there are two _potential_ singular points on the curve \(C\): \[\left[\frac{\gamma-1}{2}:\frac{-\gamma-1}{2}:1\right]\quad\text{and}\quad \left[\frac{-\gamma-1}{2}:\frac{\gamma-1}{2}:1\right]\] However, both of these points above satisfy \(x_{0}+y_{0}+1=0\). By equation (3.6), none of these two points is singular on the curve \(C\). We conclude that Case 2 does not occur after all, and the point \([1:1:1]\) is the unique singular point of \(C\). ### Irreducibility of the curve We begin with a general irreducibility criterion for a plane curve of degree at least \(3\) with a unique singular point. **Lemma 3.5**.: _Suppose that \(D=\{F=0\}\) is a plane curve defined over a field \(K\) with \(\deg(F)\geq 3\) and a unique singular point \(P_{0}\in D(\overline{K})\). After dehomogenizing \(f(x,y):=F(x,y,1)\) and applying translation, we may assume that \((0,0)\) is the singular point of the affine curve \(\{f=0\}\). 
Assume that the quadratic term \(A_{2}(x,y)\) in the expansion of \(f\) around \((0,0)\) cannot be written as \(L(x,y)^{2}\) for some \(L(x,y)\in\overline{K}[x,y]\) (in other words, the equation \(A_{2}(x,y)=0\) has precisely two solutions in \(\mathbb{P}^{1}(\overline{K})\)). Then the plane curve \(D\) is irreducible over \(\overline{K}\)._ Proof.: Since \((0,0)\) is a singular point of \(\{f=0\}\), we can then express \[f(x,y)=A_{2}(x,y)+A_{3}(x,y)+\ldots\] where \(A_{i}(x,y)\) is a homogeneous polynomial of degree \(i\) in \(x\) and \(y\). By hypothesis, \(A_{2}(x,y)\) splits over \(\overline{K}\) as a product \(L_{1}(x,y)\cdot L_{2}(x,y)\) of two distinct nonzero linear forms. If \(f(x,y)=g(x,y)\cdot h(x,y)\) where \(g(0,0)=h(0,0)=0\), then we claim that the component curves \(\{g=0\}\) and \(\{h=0\}\) meet at the point \((0,0)\) with multiplicity \(1\). Indeed, the expansions of \(g(x,y)\) and \(h(x,y)\) around the origin \((0,0)\) must necessarily take the form (after multiplication by a suitable nonzero constant): \[g(x,y)=L_{1}(x,y)+B_{2}(x,y)+B_{3}(x,y)+\ldots\] and \[h(x,y)=L_{2}(x,y)+C_{2}(x,y)+C_{3}(x,y)+\ldots\] respectively, where \(B_{i}(x,y)\) and \(C_{i}(x,y)\) are homogeneous polynomials of degree \(i\) in \(x\) and \(y\). Since \(L_{1}(x,y)\) and \(L_{2}(x,y)\) are distinct linear forms which generate the maximal ideal of \(\overline{K}[x,y]\) at \((0,0)\), then the two curves \(\{g=0\}\) and \(\{h=0\}\) meet with multiplicity \(1\) at \((0,0)\). We show that the plane curve \(D=\{F=0\}\) is irreducible over \(\overline{K}\). Assume, to the contrary, that \(F=G\cdot H\) for some homogeneous polynomials \(G\) and \(H\) with positive degrees \(d_{1}\) and \(d_{2}\), respectively. Let \(g(x,y):=G(x,y,1)\) and \(h(x,y):=H(x,y,1)\). After applying Bezout's theorem, \(d_{1}d_{2}\) intersection points (counted with multiplicity) of \(\{G=0\}\) and \(\{H=0\}\) must be singular points of \(D\). Since \(D\) has a unique singular point, namely \((0,0)\) in the affine chart \(z=1\), the local intersection multiplicity at the origin must be at least \(d_{1}d_{2}\geq 2\). This contradicts the fact that \(\{g=0\}\) and \(\{h=0\}\) meet with multiplicity exactly \(1\) at \((0,0)\). **Proposition 3.6**.: _The curve \(C\) defined by (3.4) is geometrically irreducible._ Proof.: By Proposition 3.4, the curve \(C\) has the unique singular point \([1:1:1]\). Expanding the equation \(x^{q-1}+y^{q-1}+1-3(x+y+1)^{q-1}=0\) around the point \((1,1)\), we are led to analyze: \[(1+(x-1))^{q-1}+(1+(y-1))^{q-1}+1-3(3+(x-1)+(y-1))^{q-1}\] After expanding, the first nonzero homogeneous form in \((x-1)\) and \((y-1)\) has degree \(2\), and is given by: \[2\cdot 3^{q-2}\cdot\left[(x-1)^{2}-(x-1)(y-1)+(y-1)^{2}\right].\] Since the discriminant of the quadratic \(s^{2}-st+t^{2}\) is \(-3\neq 0\) in \(\mathbb{F}_{q}\), the hypothesis of Lemma 3.5 is satisfied. Thus, \(C\) is irreducible over \(\overline{\mathbb{F}_{q}}\). ### Tangent-filling property In this final subsection, we give the proof that the curve \(C\) defined by (3.4) is tangent-filling over \(\mathbb{F}_{q}\). Proof of Theorem 1.3.: Let \(P=[a_{0}:b_{0}:c_{0}]\) be an arbitrary point in \(\mathbb{P}^{2}(\mathbb{F}_{q})\). We want to find a smooth \(\mathbb{F}_{q}\)-point \(Q=[x_{0}:y_{0}:z_{0}]\) of \(C\) such that \(P\) is contained in the tangent line \(T_{Q}C\). 
From Lemma 3.2, we know that an \(\mathbb{F}_{q}\)-point \([x_{0}:y_{0}:z_{0}]\) is a point on the curve \(C\) if and only if \[x_{0}y_{0}z_{0}(x_{0}+y_{0}+z_{0})\neq 0 \tag{3.17}\] Note that \(P\) is contained in the tangent line \(T_{Q}C\) if and only if \[a_{0}\cdot\left(3s_{0}^{q-2}-x_{0}^{q-2}\right)+b_{0}\cdot\left(3s_{0}^{q-2}- y_{0}^{q-2}\right)+c_{0}\cdot\left(3s_{0}^{q-2}-z_{0}^{q-2}\right)=0 \tag{3.18}\] where \(s_{0}=x_{0}+y_{0}+z_{0}\). Using the fact that \(s^{q-1}=1\) for each \(s\in\mathbb{F}_{q}^{*}\), we rewrite (3.18) as \[\frac{3(a_{0}+b_{0}+c_{0})}{x_{0}+y_{0}+z_{0}}=\frac{a_{0}}{x_{0}}+\frac{b_{0} }{y_{0}}+\frac{c_{0}}{z_{0}}. \tag{3.19}\] Note that all the denominators in (3.19) are nonzero because Lemma 3.2 guarantees that \(xyz(x+y+z)\neq 0\) for any \(\mathbb{F}_{q}\)-point \([x:y:z]\) of the curve \(C\). **Case 1.** Suppose \(a_{0}b_{0}c_{0}(a_{0}+b_{0}+c_{0})\neq 0\) and \([a_{0}:b_{0}:c_{0}]\neq[1:1:1]\). In this case, the point \(P=[a_{0}:b_{0}:c_{0}]\) is already smooth in \(C(\mathbb{F}_{q})\) by Lemma 3.2 and Proposition 3.4. Hence, we may take \(Q=P\) because \(P\) always belongs to \(T_{P}C\). **Case 2.** Suppose \(a_{0}+b_{0}+c_{0}=0\). In this case, (3.19) yields \[\frac{a_{0}}{x_{0}}+\frac{b_{0}}{y_{0}}+\frac{c_{0}}{z_{0}}=0. \tag{3.20}\] We search for a solution \([x_{0}:y_{0}:z_{0}]\neq[1:1:1]\) satisfying (3.17). **Subcase 2.1**.: \(a_{0}+b_{0}+c_{0}=0\) and \(a_{0}b_{0}c_{0}\neq 0\). Since \(\mathrm{char}(\mathbb{F}_{q})>3\), we cannot have \(a_{0}=b_{0}=c_{0}\). We may assume, without loss of generality, that \(b_{0}\neq c_{0}\). Let \(z_{0}=1\) and \(y_{0}=-1\), and solve for \(x_{0}\) according to the equation (3.20): \[x_{0}=\frac{a_{0}}{b_{0}-c_{0}}\in\mathbb{F}_{q}^{*}\] Clearly, \([x_{0}:y_{0}:z_{0}]\neq[1:1:1]\) and (3.17) is satisfied. **Subcase 2.2.**\(a_{0}+b_{0}+c_{0}=0\) and \(a_{0}b_{0}c_{0}=0\). By symmetry, we may assume that \(a_{0}=0\); since \(a_{0}+b_{0}+c_{0}=0\), then we have \([a_{0}:b_{0}:c_{0}]=[0:1:-1]\) and so, equation (3.20) yields \(y_{0}=z_{0}\). The point \([x_{0}:y_{0}:z_{0}]=[2:1:1]\) satisfies both (3.17) and (3.20). This concludes our proof that all points \([a_{0}:b_{0}:c_{0}]\) for which \(a_{0}+b_{0}+c_{0}=0\) belong to a tangent line at a smooth \(\mathbb{F}_{q}\)-point of \(C\). **Case 3.**\(a_{0}+b_{0}+c_{0}\neq 0\) and \(a_{0}b_{0}c_{0}=0\). Since we seek points \([x_{0}:y_{0}:z_{0}]\) with \(x_{0}+y_{0}+z_{0}\neq 0\), we can scale \([a_{0}:b_{0}:c_{0}]\) and \([x_{0}:y_{0}:z_{0}]\) so that \[a_{0}+b_{0}+c_{0}=3\qquad\text{ and }\qquad x_{0}+y_{0}+z_{0}=9\] The equation (3.19) now reads, \[1=\frac{a_{0}}{x_{0}}+\frac{b_{0}}{y_{0}}+\frac{3-a_{0}-b_{0}}{9-x_{0}-y_{0}}; \tag{3.21}\] Since \(a_{0}b_{0}c_{0}=0\), we may assume by symmetry that \(a_{0}=0\). As a result, (3.21) reads \[1=\frac{b_{0}}{y_{0}}+\frac{3-b_{0}}{z_{0}}. \tag{3.22}\] If \(b_{0}\notin\{0,-3,3\}\), then we let \(z_{0}=6\), \(y_{0}=6b_{0}/(3+b_{0})\) and \(x_{0}=(9-3b_{0})/(3+b_{0})\). Note that \([x_{0}:y_{0}:z_{0}]\neq[1:1:1]\) and satisfies both (3.22) and (3.17). If \(b_{0}=0\), then we simply choose \([x_{0}:y_{0}:z_{0}]=[2:4:3]\neq[1:1:1]\) which satisfies both (3.22) and (3.17). If \(b_{0}=-3\), then we get the solution \([x_{0}:y_{0}:z_{0}]=[-1:6:4]\neq[1:1:1]\) which satisfies both (3.22) and (3.17). If \(b_{0}=3\), then we get the solution \([x_{0}:y_{0}:z_{0}]=[2:3:4]\neq[1:1:1]\) which satisfies both (3.22) and equation (3.17). **Case 4.**\([a_{0}:b_{0}:c_{0}]=[1:1:1]\). 
We can assume \(a_{0}=b_{0}=c_{0}=1\), and also \(x_{0}+y_{0}+z_{0}=9\) after scaling \([x_{0}:y_{0}:z_{0}]\). Then equation (3.19) yields, \[1=\frac{1}{x_{0}}+\frac{1}{y_{0}}+\frac{1}{9-x_{0}-y_{0}}. \tag{3.23}\] Our goal is to find a solution \((3,3)\neq(x_{0},y_{0})\in\mathbb{F}_{q}^{*}\times\mathbb{F}_{q}^{*}\) to (3.23). After multiplying (3.23) by \(x_{0}y_{0}(9-x_{0}-y_{0})\), we obtain \[9x_{0}y_{0}-x_{0}^{2}y_{0}-x_{0}y_{0}^{2}=9y_{0}-x_{0}y_{0}-y_{0}^{2}+9x_{0}-x_ {0}^{2}-x_{0}y_{0}+x_{0}y_{0},\] which we rearrange as follows: \[y_{0}^{2}(x_{0}-1)+y_{0}(x_{0}-1)(x_{0}-9)-x_{0}(x_{0}-9)=0.\] Our goal is to show that the number of \(\mathbb{F}_{q}\)-points on the affine curve \(Y\) given by the equation: \[y^{2}(x-1)+y(x-1)(x-9)-x(x-9)=0 \tag{3.24}\] is strictly more than the number of points which we want to avoid from the set: \[\{(0,9),(0,0),(9,0),(3,3)\}. \tag{3.25}\] Indeed, besides the point \((3,3)\), the points \((x_{0},y_{0})\) on the curve (3.24) which we have to avoid are the ones satisfying the equation: \[x_{0}y_{0}\cdot(x_{0}+y_{0}-9)=0.\] We note that there are only three such points on the curve (3.24): \((0,0)\), \((0,9)\) and \((9,0)\); this follows easily from the equation (3.24) after substituting either \(x=0\), or \(y=0\), or \(x=9-y\). Now, for each \(\mathbb{F}_{q}\)-point \((x_{0},w_{0})\neq(1,0)\) on the affine conic \(\tilde{Y}\) given by the equation \[w^{2}=(x-1)(x-9),\] we have the \(\mathbb{F}_{q}\)-point \((x_{0},y_{0})\) on \(Y\) given by \[(x_{0},y_{0}):=\left(x_{0},\frac{-(x_{0}-1)(x_{0}-9)+(x_{0}-3)w_{0}}{2(x_{0}-1 )}\right). \tag{3.26}\] Since there are \(q-2\) points \((x_{0},w_{0})\neq(1,0)\) on \(\tilde{Y}(\mathbb{F}_{q})\) (because we have \(q+1\) points on its projective closure in \(\mathbb{P}^{2}\) and only two such points are on the line at infinity), we obtain \((q-2)\)\(\mathbb{F}_{q}\)-points on \(Y\). Now, if \((x_{1},w_{1})\neq(x_{0},w_{0})\) are distinct points on \(\tilde{Y}(\mathbb{F}_{q})\setminus\{(1,0)\}\), then we get the corresponding points on \(Y(\mathbb{F}_{q})\) are also distinct _unless_\(x_{0}=x_{1}=3\) as can be seen from (3.26). There are at most \(2\) points on \(\tilde{Y}(\mathbb{F}_{q})\) having \(x\)-coordinate equal to \(3\) (which in fact happens when \(q=7\), in which case \((3,\pm 3)\in\tilde{Y}(\mathbb{F}_{7})\)). Thus, we are guaranteed to have at least \((q-3)\) distinct points in \(Y(\mathbb{F}_{q})\). Hence, as long as \(q>7\), we are guaranteed to avoid the points listed in (3.25). Therefore, the curve \(C\) is tangent-filling under the hypothesis \(q>7\) and \(\operatorname{char}(\mathbb{F}_{q})>3\). _Remark 3.7_.: The result in Theorem 1.3 is sharp in a sense that when \(q=7\), the curve \(x^{q-1}+y^{q-1}+z^{q-1}-3(x+y+z)^{q-1}=0\) is _not_ tangent-filling. Indeed, one can check that for the point \(P=[1:1:1]\), there is no smooth \(\mathbb{F}_{7}\)-point \(Q\) on this curve \(C\) such that \(P\in T_{Q}C\).
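The "simple computer check" mentioned in Remarks 1.4 and 3.7 can be reproduced by brute force. The following Python sketch does this for prime values of \(q\) (so that \(\mathbb{F}_{q}=\mathbb{Z}/q\mathbb{Z}\); it does not handle prime powers): it collects the gradients of the defining polynomial at the smooth \(\mathbb{F}_{q}\)-points of \(C\) and then tests whether every point of \(\mathbb{P}^{2}(\mathbb{F}_{q})\) lies on one of the corresponding tangent lines. This is an illustrative sketch under those assumptions, not the authors' code.

```python
# Brute-force tangent-filling check for C : x^(q-1) + y^(q-1) + z^(q-1) - 3(x+y+z)^(q-1) = 0
# over F_q with q prime (illustrative sketch only).

def proj_points(q):
    """One representative for each point of P^2(F_q)."""
    return ([(1, y, z) for y in range(q) for z in range(q)]
            + [(0, 1, z) for z in range(q)]
            + [(0, 0, 1)])

def uncovered_points(q):
    def f(x, y, z):      # defining polynomial of C, reduced modulo q
        return (pow(x, q - 1, q) + pow(y, q - 1, q) + pow(z, q - 1, q)
                - 3 * pow((x + y + z) % q, q - 1, q)) % q

    def grad(x, y, z):   # gradient of f, up to the common nonzero factor (q - 1)
        s = (x + y + z) % q
        return tuple((pow(t, q - 2, q) - 3 * pow(s, q - 2, q)) % q for t in (x, y, z))

    # Gradients at the smooth F_q-points of C (a zero gradient marks the singular point).
    grads = [grad(*Q) for Q in proj_points(q) if f(*Q) == 0 and grad(*Q) != (0, 0, 0)]

    # P = [a:b:c] lies on T_Q C exactly when a*F_x(Q) + b*F_y(Q) + c*F_z(Q) = 0.
    return [P for P in proj_points(q)
            if all((gx * P[0] + gy * P[1] + gz * P[2]) % q != 0 for gx, gy, gz in grads)]

for q in (5, 7, 11, 13):
    missed = uncovered_points(q)
    print(q, "tangent-filling" if not missed else f"not tangent-filling, misses {missed}")
```

Consistently with Theorem 1.3 and Remarks 1.4 and 3.7, one expects the check to fail for \(q=5\) and \(q=7\) (with \([1:1:1]\) among the uncovered points when \(q=7\)) and to succeed for \(q=11\) and \(q=13\).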
2310.03422
On Equicontinuity and Related Notions in Nonautonomous Dynamical Systems
In this work, we investigate the dynamics of a general non-autonomous system generated by a commutative family of homeomorphisms. In particular, we investigate properties such as periodicity, equicontinuity, minimality and transitivity for a general non-autonomous dynamical system. In \cite{sk2}, the authors derive necessary and sufficient conditions for a system to be minimal. We claim the result to be false and provide an example in support of our claim. Further, we correct the result to derive necessary and sufficient conditions for a non-autonomous system to be minimal. We prove that for an equicontinuous flow generated by a commutative family, while the system need not exhibit almost periodic points, if $x$ is almost periodic then every point in $\overline{\mathcal{O}_H(x)}$ is almost periodic. We further prove that in such a case, the set $\overline{\mathcal{O}_H(x)}$ is uniformly almost periodic and hence provide an analogous extension to a result known for the autonomous systems. We prove that a system generated by a commutative family is transitive if and only if it exhibits a point with dense orbit. We also prove that any minimal system generated by commutative family is either equicontinuous or has a dense set of sensitive points.
Sushmita Yadav, Puneet Sharma
2023-10-05T09:57:59Z
http://arxiv.org/abs/2310.03422v1
# On equicontinuity and related notions in nonautonomous dynamical systems ###### Abstract. In this work, we investigate the dynamics of a general non-autonomous system generated by a commutative family of homeomorphisms. In particular, we investigate properties such as periodicity, equicontinuity, minimality and transitivity for a general non-autonomous dynamical system. In [10], the authors derive necessary and sufficient conditions for a system to be minimal. We claim the result to be false and provide an example in support of our claim. Further, we correct the result to derive necessary and sufficient conditions for a non-autonomous system to be minimal. We prove that for an equicontinuous flow generated by a commutative family, while the system need not exhibit almost periodic points, if \(x\) is almost periodic then every point in \(\overline{O_{H}(x)}\) is almost periodic. We further prove that in such a case, the set \(\overline{O_{H}(x)}\) is uniformly almost periodic and hence provide an analogous extension to a result known for the autonomous systems. We prove that a system generated by a commutative family is transitive if and only if it exhibits a point with dense orbit. We also prove that any minimal system generated by commutative family is either equicontinuous or has a dense set of sensitive points. Key words and phrases:nonautonomous dynamical system, equicontinuity, minimal system, almost periodic point 2020 Mathematics Subject Classification: 37B20, 37B55, 37C35 ## Introduction Dynamical systems have been long used to investigate various natural and physical processes around us. While mathematical investigations have enriched the literature with qualitative results determining long term behavior of such systems, the field has also found a variety of applications in areas such as complex systems, control theory, biomechanics and cognitive sciences [3, 8, 11]. Although the theory has been used extensively in various fields, most of problems have been approximated using autonomous systems (systems with Introduction Let \(X\) be a compact metric space and let \(\mathbb{F}=\{f_{n}:n\in\mathbb{N}\}\) be a family of homeomorphisms on \(X\). For any given initial state of the system \(x_{0}\), any such family generates a _non-autonomous_ dynamical system via the relation \(x_{n}=\left\{\begin{array}{ll}f_{n}(x_{n-1})&:n\geq 1,\\ f_{n}^{-1}(x_{n+1})&:n<0.\end{array}\right.\) In other words, the non-autonomous system generated by the family \(\mathbb{F}\) can be visualized as orbit of \(x_{0}\) under the ordered set \(\{\ldots,f_{2}^{-1},f_{1}^{-1},I,f_{1},f_{2},\ldots,\}\). For a given initial state \(x_{0}\) of the system, let \(\omega_{n}(x_{0})\) denote the state of the system at time \(n\). The set \(\mathcal{O}(x)=\{\omega_{n}(x):n\in\mathbb{Z}\}\) is called the _orbit_ of any point \(x\) in \(X\). Further, we refer to the set \(\mathcal{O}_{H}(x)=\{(\omega_{k_{n}}\omega_{k_{n-1}}\circ\ldots\circ\omega_{k_ {1}})(x):k_{i}\in\mathbb{Z},n\in\mathbb{N}\}\) as the orbital hull of the point \(x\). Let \(\mathcal{O}_{H}^{k}(x)=\{\omega_{r_{n}}\omega_{r_{n-1}}...\omega_{r_{2}}\omega _{r_{1}}(x):n\in\mathbb{N},r_{i}\in\{-k,-k+1,\ldots,1,2,...,k\}\}\) denote the truncation (of order \(k\)) of the orbitall hull of the point \(x\). It may be noted that orbital hull of a point \(x\) is the smallest invariant set containing \(x\). A point \(x\in X\) is said to be _periodic_ of period \(n\in\mathbb{N}\) if \(\omega_{nk}(x)=x\)\(\forall k\in\mathbb{Z}\). 
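The notions of orbit, orbital hull and truncated orbital hull introduced above are easy to experiment with numerically. The following Python sketch (illustrative only; the rotation family, the angles and the base point are ad hoc choices, and negative times are handled by running the inverse maps) computes a finite window of the orbit and a small truncation \(\mathcal{O}_{H}^{k}(x)\) for a commutative family of circle rotations.

```python
from itertools import product

# Toy commutative family on the circle X = R/Z: f_n = rotation by a_n
# (all rotations commute).  Angles, depth and the base point are illustrative.
angles = [0.5 ** n * 2 ** 0.5 for n in range(1, 9)]      # a_1, ..., a_8

def omega(n, x):
    """State at time n starting from x; negative times run the inverse rotations."""
    if n >= 0:
        return (x + sum(angles[:n])) % 1.0
    return (x - sum(angles[:-n])) % 1.0

def orbit_window(x, n_max):
    """A finite window of the orbit O(x) = {omega_n(x) : n in Z}."""
    return {n: omega(n, x) for n in range(-n_max, n_max + 1)}

def truncated_hull(x, k, depth):
    """O_H^k(x): points omega_{r_m} o ... o omega_{r_1}(x) with |r_i| <= k and m <= depth
    (r_i = 0 only contributes the identity and is harmless)."""
    pts = {round(x, 12)}
    for m in range(1, depth + 1):
        for rs in product(range(-k, k + 1), repeat=m):
            y = x
            for r in rs:
                y = omega(r, y)
            pts.add(round(y, 12))
    return pts

x0 = 0.1
print(len(orbit_window(x0, 8)))                  # 17 states, one per time in [-8, 8]
print(len(truncated_hull(x0, k=2, depth=3)))     # size of a small truncation of O_H(x0)
```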
A point \(x\in X\) is called _almost periodic_ if for any \(\epsilon>0\), the set \(\{n\in\mathbb{Z}:d(\omega_{n}(x),x)<\epsilon\}\) is syndetic. If every point \(x\in X\) is _almost periodic_ then \((X,\mathbb{F})\) is said to be _pointwise almost periodic_. \((X,\mathbb{F})\) is said to be _uniformly almost periodic_ if for every \(\epsilon>0\) there exists \(M>0\) such that the set \(\{n\in Z:d(\omega_{n}(x),x)<\epsilon\}\) is \(M\)-syndetic for all \(x\in X\). A set \(Y\subseteq X\) is said to be _invariant_ if \(\omega_{k}(Y)\subseteq Y\), for all \(k\in\mathbb{Z}\). We say \((Y,\mathbb{F})\) to be a _minimal subsystem_ of \((X,\mathbb{F})\) if it is a non-empty, closed, invariant subsystem of \((X,\mathbb{F})\) with no proper non-empty subset having these properties. A system \((X,\mathbb{F})\) is said to be _equicontinuous_ if for each \(\epsilon>0\), there exists \(\delta>0\) such that \(d(x,y)<\delta\) implies \(d(\omega_{n}(x),\omega_{n}(y))<\epsilon\) for all \(n\in\mathbb{Z}\), \(x,y\in X\). A pair \((x,y)\) is _proximal_ for \((X,\mathbb{F})\) if \(\liminf_{n}\ d(\omega_{n}(x),\omega_{n}(y))=0\). Let \(P(X)\) denote the set of proximal pairs of system \((X,\mathbb{F})\), then \((X,\mathbb{F})\) is said to be _distal_ if \(P(X)=\Delta\), where \(\Delta\) denotes the diagonal in the space \(X\times X\). A system \((X,\mathbb{F})\) is said to be _point transitive_ if there exists a point \(x\in X\) such that \(\overline{O(x)}=X\). In this case, the point \(x\) is referred as a _transitive point_. The system is said to be _topologically transitive_ if for every pair of non-empty open sets \(U,V\) in \(X\), there exists \(k\in\mathbb{Z}\) such that \(\omega_{k}(U)\cap V\neq\emptyset\). The system \((X,\mathbb{F})\) is called \(r\)-transitive if the system generated by the family \(\mathbb{F}_{r}=\{f_{(k+1)r}\circ f_{(k+1)r-1}\circ\ldots\circ f_{kr+1}:k\in \mathbb{N}\}\) is transitive. A system \((X,\mathbb{F})\) is totally transitive if it is \(r\)-transitive for all \(r\in\mathbb{N}\). A system \((X,\mathbb{F})\) is said to be _sensitive_ at a point \(x\) if there exists \(\delta_{x}>0\) such that for each neighborhood \(U_{x}\) of \(x\) there exists \(k\in\mathbb{Z}\) such that \(diam(\omega_{k}(U_{x}))>\delta_{x}\). A system \((X,\mathbb{F})\) is said to be _sensitive_ if there exists \(\delta>0\) such that for each \(x\in X\) and each neighborhood \(U\) of \(x\) there exists \(k\in\mathbb{Z}\) such that \(diam(\omega_{k}(U))>\delta\). It may be noted that in case the \(f_{n}\)'s coincide, the above definitions coincide with the known notions of an autonomous dynamical system [4, 5, 6]. Some basic concepts and recent works in this area can be found in literature [1, 2, 7, 9, 10, 12]. In this paper, we investigate properties such as periodicity, equicontinuity, minimality and transitivity for a non-autonomous system generated by a commutative family of homeomorphisms. We prove that every point in the orbital hull of periodic point is periodic. We give example to show that a pair of periodic points may form a Li-Yorke pair and hence need not exhibit simple dynamical behavior. In [10], the authors claim that system is minimal if and only if orbit of each point is dense in \(X\) (page 84, line \(-\)7). Also, the authors claim that a non-autonomous system is minimal if and only if for non-empty every open set \(U\) in \(X\), there exists \(k\in\mathbb{N}\) such that trajectory of every point meets \(U\) in at most \(k\) iterations (Lemma 2.2). 
We establish both the claims to be false. While we provide an example of a minimal system void of any points with dense orbit, we correct the result to derive necessary and sufficient conditions for a non-autonomous system to be minimal. We prove that a non-autonomous system is minimal if and only if for every non-empty open set \(U\) in \(X\), there exists \(k\in\mathbb{N}\) such that \(\mathcal{O}_{H}^{k}(x)\) meets \(U\) for every point \(x\in X\). We prove that for equicontinuous systems, if \(x\) is almost periodic then every point in \(\overline{\mathcal{O}_{H}(x)}\) is almost periodic. In such a setting, we establish \(\overline{\mathcal{O}_{H}(x)}\) to be uniformly almost periodic. We prove that a system is transitive if and only if it exhibits points with dense orbit. We prove that while minimal systems need not be transitive, an equicontinuous transitive system is necessarily minimal. We also prove that a minimal system is either equicontinuous or has a dense set of points of sensitivity. ## Main Results **Proposition 1**.: _For any system \((X,\mathbb{F})\) generated by a commutative family of homeomorphisms, \(x\) is periodic for \((X,\mathbb{F})\implies\) each point of \(\overline{\mathcal{O}_{H}(x)}\) is periodic (with the same period)._ Proof.: Let \((X,\mathbb{F})\) be generated by a commutative family of homeomorphisms and let \(x\) be periodic for \((X,\mathbb{F})\) (with period \(r\)). Then, \(\omega_{nr}(x)=x\) for all \(n\in\mathbb{Z}\). For any point \(\omega_{r_{n}}\circ\omega_{r_{n-1}}\circ\ldots\circ\omega_{r_{1}}(x)\) in the orbital hull of \(x\), as \(\omega_{nr}(\omega_{r_{n}}\circ\omega_{r_{n-1}}\circ\ldots\circ\omega_{r_{1}}(x))= \omega_{r_{n}}\circ\omega_{r_{n-1}}\circ\ldots\circ\omega_{r_{1}}(\omega_{nr}(x))= \omega_{r_{n}}\circ\omega_{r_{n-1}}\circ\ldots\circ\omega_{r_{1}}(x)\) (as \(\mathbb{F}\) is commutative), every point in \(\mathcal{O}_{H}(x)\) is periodic (with the same period). Finally, as the limit of periodic points of period \(r\) is a periodic point of period \(r\), each point of \(\overline{\mathcal{O}_{H}(x)}\) is a periodic point of period \(r\) and the proof is complete. **Remark 1**.: _The above result establishes the periodicity of the elements of the closure of the orbital hull of a point \(x\), when the point \(x\) is itself periodic. It may be noted that as the governing rule is time variant, the periodicity of \(x\) need not guarantee the periodicity of the members of the orbital hull (or even of the orbit itself). Also, if the governing rule is time variant, a periodic point may have an infinite orbit and hence need not lead to simpler dynamical behavior. In fact, if the governing rule is time variant, a pair of periodic points may behave in an unexpected manner and form a Li-Yorke pair. We now give examples in support of our claim._ **Example 1**.: _Let \(X=[0,1]\) be the unit interval and let \(f_{n}:X\to X\) be defined as \(f_{2n-1}(x)=\left\{\begin{array}{ll}\frac{x}{2}&:0\leq x\leq\frac{1}{2},\\ \frac{3}{2}x-\frac{1}{2}&:\frac{1}{2}\leq x\leq 1\end{array}\right.\) and \(f_{2n}(x)=1-\sqrt{x}\) for all \(n\in\mathbb{N}\)._ _Then, \(\frac{1}{2}\in X\) is a periodic point of period 2, but \(f_{1}(\frac{1}{2})=\frac{1}{4}\) is not periodic for \((X,\mathbb{F})\). Thus, periodicity need not be preserved in the elements of the orbital hull when the generating maps do not commute._ **Example 2**.: _Let \(X=[0,1]\) and define \(f_{n}:X\to X\) such that \(f_{2n-1}(x)=x^{2n}\) and \(f_{2n}(x)=x^{\frac{1}{2n}}\) for \(n\in\mathbb{N}\). 
Then, every point is periodic (with period \(2\)). Also as \(x^{n}\) converges to \(0\) for all \(x\) in \((0,1)\), the pair \((x,y)\) forms a Li-Yorke pair for all \(x,y\in(0,1)\). Consequently, periodic points may form a Li-Yorke pair and hence need not attribute to simpler dynamical behavior._ **Proposition 2**.: _For any system \((X,\mathbb{F})\), \((X,\mathbb{F})\) is minimal if and only if \(\overline{O_{H}(x)}=X\) for all \(x\in X\)._ Proof.: As orbital hull of \(x\) is smallest invariant subset of \(X\) containing \(x\), \((X,\mathbb{F})\) is minimal if and only if \(\overline{O_{H}(x)}=X\) for all \(x\in X\). **Remark 2**.: _The above result provides necessary and sufficient criteria for a non-autonomous system to be minimal. It may be noted that \(X\) is itself invariant for the system generated by the family \(\mathbb{F}\), a simple application of Zorn's lemma establishes the existence of minimal sets for non-autonomous systems. In [10], the authors claim that system is minimal if and only if orbit of each point is dense in \(X\) (page \(84\), line \(-7\)). Further, the authors claim that a non-autonomous system is minimal if and only if for non-empty every open set \(U\) in \(X\), there exists \(k\in\mathbb{N}\) such that trajectory of every point meets \(U\) in at most \(k\) iterations (Lemma 2.2). However, we claim that both of the observations fail to hold. It may be noted that as minimality in non-autonomous systems is equivalent to orbital hull of each point being dense in \(X\), minimality of a system does not guarantee orbit of each point to be dense in \(X\) (Example 3). Also, as orbit of a point is a non-invariant set, the second assertion also fails to hold good (Example 3). In fact, as orbital hull of \(x\) is the smallest invariant set containing \(x\), a non-autonomous system is minimal if and only if for non-empty every open set \(U\) in \(X\), there exists \(k\in\mathbb{N}\) such that \(O_{H}^{k}(x)\) meets \(U\) for every point \(x\in X\) (Proposition 3). Also, while minimality of a system ensures each of its points to be almost periodic in the autonomous case, non-invariance of the governing rule forces such an implication not to hold true for non-autonomous systems. The proof follows from the fact that if the governing rule varies with time, any initial point \(x_{0}\) may fail to return to its neighborhood even in the absence of proper invariant sets. In fact contrary to the autonomous case, a minimal set in a non-autonomous system may contain periodic points in the non-trivial sense (periodic points whose orbit is proper in the minimal set). We now give examples in support of our claim._ **Example 3**.: _Let \(X=\mathbb{S}^{1}\) be the unit circle and let \(f_{1}(\theta)=\theta+\frac{1}{2}\), \(f_{2}(\theta)=\theta-\frac{1}{2^{2}}\). For \(n\geq 3\), define \(f_{n}(\theta)=\left\{\begin{array}{ll}\theta+\frac{1}{2^{k}}&:n=2k+1,\\ \theta-\frac{1}{2^{k}}-\frac{1}{2^{k+1}}&:n=2k.\end{array}\right.\) where \(k\in\mathbb{N}\) _As closure of the orbital hull of each \(x\) is \(X\), the non-autonomous system generated by the family \((f_{n})\) is minimal. However as each point settles at the diametrically opposite end (of the initial point \(x_{0}\)), none of the points in the system are almost periodic. 
Consequently, almost periodic points are not guaranteed to exist in the non-autonomous setting._ **Example 4**.: _Define \(f_{n}:\mathbb{S}^{1}\to\mathbb{S}^{1}\) as follows:_ \[f_{n}(\theta)=\left\{\begin{array}{ll}\theta+\frac{1}{2^{k}}&:n=4k\text{ or }4k-3,\\ \theta-\frac{1}{2^{k}}&:n=4k-1\text{ or }4k-2.\end{array}\right.\text{ where }k\in\mathbb{N}\] _It is clear that every element in \(\mathbb{S}^{1}\) is periodic with period 2. However, as the orbital hull of every point is dense in \(\mathbb{S}^{1}\), the system \((X,\mathbb{F})\) is minimal and contains periodic points in the non-trivial sense._ **Proposition 3**.: _Any system \((X,\mathbb{F})\) is minimal if and only if for every non-empty open set \(U\) in \(X\) there exists \(k\in\mathbb{N}\) such that the set \(\mathcal{O}_{H}^{k}(x)\cap U\neq\emptyset\) for all \(x\in X\)._ Proof.: Let \((X,\mathbb{F})\) be minimal and let \(U\) be a non-empty open set in \(X\). Firstly, note that if the claim does not hold then for each \(k\in\mathbb{N}\) there exists \(x_{k}\) such that \(\mathcal{O}_{H}^{k}(x_{k})\cap U=\emptyset\). Let \(x\) be a limit point of \((x_{k})\) and let \((r_{1},r_{2},\ldots,r_{m})\) be any tuple (of any fixed length \(m\)). Let \(p=\max\{|r_{i}|:i=1,2,\ldots,m\}\). As \(\mathcal{O}_{H}^{n}(x_{n})\cap U=\emptyset\) for each \(n\), we have \((\omega_{r_{m}}\circ\omega_{r_{m-1}}\circ\ldots\circ\omega_{r_{1}})(x_{n})\notin U\) for all \(n\geq p\) and hence \(\omega_{r_{m}}\circ\omega_{r_{m-1}}\circ\ldots\circ\omega_{r_{1}}(x)\notin U\). As the argument holds for any tuple \((r_{1},r_{2},\ldots,r_{m})\), we have \(\mathcal{O}_{H}(x)\cap U=\emptyset\) (which contradicts minimality of \(X\)). Consequently, there exists \(k\in\mathbb{N}\) such that the set \(\mathcal{O}_{H}^{k}(x)\cap U\neq\emptyset\) for all \(x\in X\) and the proof of the forward part is complete. Conversely, if there exists \(k\in\mathbb{N}\) such that the set \(\mathcal{O}_{H}^{k}(x)\cap U\neq\emptyset\) for all \(x\in X\), then the orbital hull of any point \(x\) intersects every non-empty open set \(U\) and hence \(X\) is minimal. **Proposition 4**.: _For any system \((X,\mathbb{F})\) generated by a commutative family of homeomorphisms, \((X,\mathbb{F})\) is equicontinuous \(\implies(X,\mathbb{F})\) is distal._ Proof.: Let \((X,\mathbb{F})\) be equicontinuous and let \(x\) and \(y\) be proximal. Then there exists a sequence \((n_{k})\) of integers such that \(\lim\limits_{k\to\infty}d(\omega_{n_{k}}(x),\omega_{n_{k}}(y))=0\). For any \(\epsilon>0\), there exists \(\delta>0\) such that \(d(a,b)<\delta\implies d(\omega_{n}(a),\omega_{n}(b))<\epsilon\ \forall n\in\mathbb{Z}\). As \(d(\omega_{n_{r}}(x),\omega_{n_{r}}(y))<\delta\) (for some \(n_{r}\)), we have \(d(x,y)<\epsilon\). As the argument holds for any \(\epsilon>0\), we have \(x=y\) and hence the system \((X,\mathbb{F})\) is distal. **Proposition 5**.: _For any non-autonomous system \((X,\mathbb{F})\) generated by a commutative family of homeomorphisms, if \(x\) is almost periodic then every point in \(\mathcal{O}(x)\) is almost periodic._ Proof.: Let \((X,\mathbb{F})\) be generated by a commutative family of homeomorphisms and let \(x\) be an almost periodic point. Let \(k\in\mathbb{Z}\) and let \(U\) be a neighborhood of \(\omega_{k}(x)\). Then as \(x\) is almost periodic, the set \(\{r\in\mathbb{Z}:\omega_{r}(x)\in\omega_{k}^{-1}(U)\}\) is syndetic and hence the set \(\{r\in\mathbb{Z}:(\omega_{k}\circ\omega_{r})(x)\in U\}\) is syndetic. 
Consequently, the set \(\{r\in\mathbb{Z}:\omega_{r}(\omega_{k}(x))\in U\}\) is syndetic and thus \(\omega_{k}(x)\) is almost periodic. As the argument holds for any \(k\), every point in the orbit of \(x\) is almost periodic. **Remark 3**.: _The above result establishes the almost periodicity of the elements in the orbit when the initial point \(x\) is almost periodic. The proof uses the fact that if the initial point \(x\) is almost periodic then the syndetic set of return times to a suitable neighborhood of \(x\) carries forward to a neighborhood of the given point in the orbit. As the arguments establish the almost periodicity of the elements in the orbit of \(x\), a similar argument ensures almost periodicity of the elements in the orbit of \(\omega_{k}(x)\). Thus, almost periodicity of \(x\) ensures almost periodicity of the elements of \(\mathcal{O}_{H}(x)\). Further, for any equicontinuous system, a similar argument establishes almost periodicity of any limit of almost periodic points and hence we get the following result._ **Proposition 6**.: _For any non-autonomous system \((X,\mathbb{F})\) generated by a commutative family of homeomorphisms, if \(x\) is almost periodic then every point in \(\mathcal{O}_{H}(x)\) is almost periodic. Further, if \((X,\mathbb{F})\) is equicontinuous then every point in \(\overline{\mathcal{O}_{H}(x)}\) is almost periodic._ Proof.: Let \((X,\mathbb{F})\) be a non-autonomous system and let \(x\in X\). As almost periodicity of \(x\) guarantees \(\omega_{k}(x)\) to be almost periodic for all \(k\in\mathbb{Z}\) (Proposition 5), every point in \(\mathcal{O}_{H}(x)\) is almost periodic. Further, let \((X,\mathbb{F})\) be equicontinuous, \(y\in\overline{\mathcal{O}_{H}(x)}\setminus\mathcal{O}_{H}(x)\) be fixed and \(\epsilon>0\) be given. As \((X,\mathbb{F})\) is equicontinuous, there exists \(\delta<\frac{\epsilon}{3}\) such that \(d(a,b)<\delta\implies d(\omega_{k}(a),\omega_{k}(b))<\frac{\epsilon}{3}\) for all \(k\in\mathbb{Z}\). Also, as \(y\in\overline{\mathcal{O}_{H}(x)}\setminus\mathcal{O}_{H}(x)\), there exists \(p(x)=\omega_{k_{t}}\circ\omega_{k_{t-1}}\circ\ldots\circ\omega_{k_{1}}(x)\in\mathcal{O}_{H}(x)\) such that \(d(p(x),y)<\delta\). As \(p(x)\) is almost periodic, the set \(\{m\in\mathbb{Z}:d(\omega_{m}(p(x)),p(x))<\frac{\epsilon}{3}\}\) is syndetic. Further, for any such \(m\), \(d(\omega_{m}(y),y)\leq d(\omega_{m}(y),\omega_{m}(p(x)))+d(\omega_{m}(p(x)),p(x))+d(p(x),y)<\epsilon\) and hence the set \(\{m:d(\omega_{m}(y),y)<\epsilon\}\) is syndetic. Consequently, every point in \(\overline{\mathcal{O}_{H}(x)}\) is almost periodic and the proof is complete. 
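The syndetic return times underlying Propositions 5 and 6 are easy to visualise numerically. A minimal sketch follows (illustrative only, using the autonomous special case of a single irrational rotation, which is trivially a commutative family); it estimates the largest gap in the return-time set \(\{n\in\mathbb{Z}:d(\omega_{n}(x),x)<\epsilon\}\), a finite bound on these gaps being what syndeticity asks for.

```python
import math

# Return times of an almost periodic point, computed for the autonomous special
# case f_n = rotation by alpha (a commutative family); illustrative only.
alpha = math.sqrt(2) - 1                       # rotation angle, in full turns

def omega(n, x):
    return (x + n * alpha) % 1.0               # here omega_n is just the n-fold rotation

def circle_dist(a, b):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def return_times(x, eps, n_max):
    """The set {n in [-n_max, n_max] : d(omega_n(x), x) < eps}."""
    return [n for n in range(-n_max, n_max + 1) if circle_dist(omega(n, x), x) < eps]

x0, eps = 0.37, 0.05
times = return_times(x0, eps, 5000)
gaps = [b - a for a, b in zip(times, times[1:])]
print("largest gap between successive return times:", max(gaps))
# A finite bound on these gaps, uniform in x0, is what uniform almost periodicity requires.
```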
**Proposition 7**.: _For any equicontinuous system \((X,\mathbb{F})\) generated by a commutative family of homeomorphisms, if \(x\in X\) is an almost periodic point then \(\overline{\mathcal{O}_{H}(x)}\) is uniformly almost periodic._ Proof.: Let \(\epsilon>0\) and let \(u\in\overline{O_{H}(x)}\) be an arbitrary element. As \((X,\mathbb{F})\) is equicontinuous, there exists a \(\delta>0\) such that \(d(a,b)<\delta\implies d(\omega_{k}(a),\omega_{k}(b))<\frac{\epsilon}{4}\) for all \(a,b\in X\), \(k\in\mathbb{Z}\). Let \(\eta=\min\{\frac{\epsilon}{4},\delta\}\) and let \(F=\{x_{1},x_{2},\ldots,x_{n}\}\) be an \(\eta\)-dense set in \(\overline{O_{H}(x)}\). As \(F\) is \(\eta\)-dense, there exists \(x_{r}\in F\) such that \(d(u,x_{r})<\eta\) and hence \(d(\omega_{k}(x_{r}),\omega_{k}(u))<\frac{\epsilon}{4}\) for all \(k\in\mathbb{Z}\). Consequently, if the orbit of \(x_{r}\) returns to its \(\frac{\epsilon}{4}\)-neighborhood syndetically (at times \((n_{r})\)), then \(d(u,\omega_{n_{r}}(u))\leq d(u,x_{r})+d(x_{r},\omega_{n_{r}}(x_{r}))+d(\omega_{n_{r}}(x_{r}),\omega_{n_{r}}(u))<\epsilon\) and hence the orbit of \(u\) returns to its \(\epsilon\)-neighborhood syndetically (at the same set of times \((n_{r})\)). As \((X,\mathbb{F})\) is equicontinuous, there exists a common syndetic set for \(\{x_{1},x_{2},\ldots,x_{n}\}\) and hence every point returns to its \(\epsilon\)-neighborhood syndetically with the same syndetic set. Thus, \(\overline{O_{H}(x)}\) is uniformly almost periodic and the proof is complete. **Proposition 8**.: _For any non-autonomous system \((X,\mathbb{F})\) generated by a commutative family of homeomorphisms, if \((X,\mathbb{F})\) is equicontinuous then \(\overline{O_{H}(x)}=\overline{O_{H}(y)}\) for all \(y\in\overline{O(x)}\)._ Proof.: Let \((X,\mathbb{F})\) be an equicontinuous system generated by a commutative family of homeomorphisms and let \(x\in X\). Let \(y\in\overline{O(x)}\) and \(\epsilon>0\) be given. As \((X,\mathbb{F})\) is equicontinuous, there exists \(\delta>0\) such that \(d(a,b)<\delta\) ensures \(d(\omega_{n}(a),\omega_{n}(b))<\epsilon\) for all \(n\in\mathbb{Z}\). Also, \(y\in\overline{O(x)}\) forces some \(n_{k}\in\mathbb{Z}\) such that \(d(\omega_{n_{k}}(x),y)<\delta\) and hence \(d(x,\omega_{-n_{k}}(y))<\epsilon\). As the argument holds for any \(\epsilon>0\), we have \(\overline{O(x)}=\overline{O(y)}\). Further, as \(x\in\overline{O(y)}\subset\overline{O_{H}(y)}\) implies \(\overline{O_{H}(x)}\subset\overline{O_{H}(y)}\) (the orbital hull of \(x\) being the smallest invariant set containing \(x\)), and similarly \(y\in\overline{O(x)}\subset\overline{O_{H}(x)}\) implies \(\overline{O_{H}(y)}\subset\overline{O_{H}(x)}\), the points \(x\) and \(y\) have identical closures of their orbital hulls; hence elements in the orbit closure generate orbital hulls identical to that of the original point, and the proof is complete. **Remark 1**.: The above result establishes that if the system \((X,\mathbb{F})\) is equicontinuous, \(\overline{O_{H}(y)}=\overline{O_{H}(x)}\) for any point \(y\) in \(\overline{O(x)}\). As the arguments can be repeated for elements of the orbit, \(\overline{O_{H}(y)}=\overline{O_{H}(x)}\) for any point \(y\) in \(\overline{O_{H}(x)}\). It may be noted that if \(\{(\omega_{k_{n}}\circ\omega_{k_{n-1}}\circ\ldots\circ\omega_{k_{1}}):k_{i}\in\mathbb{Z},n\in\mathbb{N}\}\) is an equicontinuous family, then similar arguments establish \(\overline{O_{H}(y)}=\overline{O_{H}(x)}\) for any point \(y\) in \(\overline{O_{H}(x)}\). Consequently, all elements in \(\overline{O_{H}(x)}\) generate the same set (equal to \(\overline{O_{H}(x)}\)) and hence the system is minimal. However, if the family under discussion fails to be equicontinuous, the non-autonomous system may fail to be minimal (even when the generating family is commutative). Thus we get the following results. **Corollary 1**.: _For any non-autonomous system \((X,\mathbb{F})\) generated by a commutative family of homeomorphisms, if \(\{(\omega_{k_{n}}\circ\omega_{k_{n-1}}\circ\ldots\circ\omega_{k_{1}}):k_{i}\in\mathbb{Z},n\in\mathbb{N}\}\) is equicontinuous then every point in \(X\) generates a minimal subsystem of \((X,\mathbb{F})\)._ Proof.: The proof follows from discussions in Remark 1. **Example 5**.: _Let \(X=[0,1]\) and define \(f_{n}:X\to X\) such that \(f_{2n-1}(x)=x^{2}\) and \(f_{2n}(x)=\sqrt{x}\) for \(n\in\mathbb{N}\). Then as every point is periodic (with finite orbit), the system \((X,\mathbb{F})\) is equicontinuous. 
However, as \(0\) is a fixed point such that \(0\in\overline{O_{H}(\frac{1}{2})}\), we have \(\overline{O_{H}(0)}\neq\overline{O_{H}(\frac{1}{2})}\) and the above result cannot be generalized to elements in \(\overline{O_{H}(x)}\)._ **Proposition 9**.: _For any non-autonomous dynamical system \((X,\mathbb{F})\) generated by a commutative family \(\mathbb{F}\), \((X,\mathbb{F})\) is transitive if and only if it has a point with dense orbit._ Proof.: Let \((X,\mathbb{F})\) be a transitive system such that no point in \(X\) has dense orbit in \((X,\mathbb{F})\). As \(X\) is compact, for each \(k\in\mathbb{N}\) there exists a finite subset \(F_{k}\) such that \(F_{k}\) is \(\frac{1}{k}\)-dense in \(X\). As no point in \(X\) has dense orbit, for each \(x\in X\) there exists \(r\in\mathbb{N}\) and \(x_{r}\in F_{r}\) such that \(\mathcal{O}(x)\cap S(x_{r},\frac{1}{r})=\emptyset\). However, as \((X,\mathbb{F})\) is transitive, \(O_{x,r}=\bigcup\limits_{n=0}^{\infty}\omega_{n}^{-1}(S(x_{r},\frac{1}{r}))\) is an open dense subset of \(X\). Thus \(C_{x,r}=O_{x,r}^{c}\) is a non-empty closed nowhere dense subset of \(X\). As choices of \(x_{r}\) are countable, \(X\) is countable union nowhere dense subsets of \(X\) which contradicts compactness of \(X\) and hence \(X\) must have a point with dense orbit. Conversely, let \(x\in X\) such that \(\mathcal{O}(x)\) is dense in \(X\) and let \(U\) and \(V\) be non-empty open subsets of \(X\). As orbit of \(x\) is dense, there exists \(k\in\mathbb{N}\) such that \(\omega_{k}(x)\in U\). Further, note that \(\mathcal{O}(\omega_{k}(x))=\{\omega_{n}(\omega_{k}(x)):n\in\mathbb{N}\}=\{ \omega_{k}(\omega_{n}(x)):n\in\mathbb{N}\}=\omega_{k}(\mathcal{O}(x))\) is dense in \(X\) (as \(\mathbb{F}\) is commutative and surjective). Thus there exists \(r\in\mathbb{N}\) such that \(\omega_{r}(\omega_{k}(x))\in V\) and hence \(\omega_{r}(U)\cap V\neq\emptyset\). Consequently, \((X,\mathbb{F})\) is transitive and the proof is complete. **Remark 2**.: The above proof establishes a necessary and sufficient criteria to establish transitivity of a non-autonomous dynamical system \((X,\mathbb{F})\). In particular, the result establishes existence of a dense orbit to be an equivalent criteria for a non-autonomous system to be transitivity. In addition, if the system is equicontinuous then the points in the space move in a synchronized manner (which may be visualized better via uniform almost periodicity) and hence denseness of an orbit (in \(X\)) forces denseness of all the orbits (in \(X\)). Hence we get the following result. **Proposition 10**.: _For any equicontinuous non-autonomous system \((X,\mathbb{F})\) generated by a commutative family of homeomorphisms, \((X,\mathbb{F})\) is transitive if and only if \(\mathcal{O}(x)\) is dense in \(X\) for all \(x\in X\) (and hence \((X,\mathbb{F})\) is minimal)._ Proof.: Let \((X,\mathbb{F})\) be an equicontinuous system generated by a commutative family of homeomorphisms and let \((X,\mathbb{F})\) be transitive. By Proposition 9, there exists \(x\in X\) such that \(\mathcal{O}(x)\) is dense in \(X\). Let \(y\in X\) be arbitrary and let \(U=S(z,\epsilon)\) be any non-empty open subset of \(X\). As \((X,\mathbb{F})\) is equicontinuous, there exists \(\delta>0\) such that \(d(a,b)<\delta\) ensures \(d(\omega_{k}(a),\omega_{k}(b))<\frac{\epsilon}{4}\) for all \(a,b\in X\),\(k\in\mathbb{Z}\). 
As \(\mathcal{O}(x)\) is dense in \(X\), there exists \(u\in\mathcal{O}(x)\) such that \(d(u,y)<\delta\) and hence \(d(\omega_{k}(u),\omega_{k}(y))<\frac{\epsilon}{4}\) for all \(k\in\mathbb{Z}\). As denseness of \(\mathcal{O}(x)\) (in \(X\)) forces \(\overline{\mathcal{O}(u)}=X\), we have \(d(z,\omega_{r}(u))<\frac{\epsilon}{4}\) for some \(r\in\mathbb{Z}\). Thus, \(d(z,\omega_{r}(y))\leq d(z,\omega_{r}(u))+d(\omega_{r}(u),\omega_{r}(y))<\epsilon\) and hence the orbit of \(y\) intersects \(S(z,\epsilon)\). As the argument holds for any \(y\in X\), the orbit of any point is dense in \(X\). As the proof of the converse is trivial, \(\mathcal{O}(x)\) is dense in \(X\) for some \(x\in X\) if and only if \(\mathcal{O}(x)\) is dense in \(X\) for all \(x\in X\). Finally, as denseness of the orbit of a point forces denseness of the orbital hull (of the same point), \((X,\mathbb{F})\) is minimal and the proof is complete. **Remark 3**.: The above proof establishes a necessary and sufficient criterion for an equicontinuous system to be transitive. It may be noted that as minimality of a system does not guarantee transitivity in the non-autonomous case, a minimal equicontinuous system may fail to be transitive (as shown in Example 4). Further, as the above arguments hold good when the element of the orbit is replaced by an element of the orbital hull, a similar set of arguments establishes the equivalence of denseness of orbital hulls (among elements of the space \(X\)). Also, for autonomous systems, it is known that a transitive system with a dense set of periodic points is necessarily sensitive. However, as the governing rule may vary significantly with time, an analogous version of the result fails to hold good in the non-autonomous setting. On the other hand, as total transitivity guarantees the visiting times to intersect the multiples of each integer, totally transitive systems with a dense set of periodic points are necessarily sensitive. We now establish our claims below. **Proposition 11**.: _For any equicontinuous non-autonomous system \((X,\mathbb{F})\) generated by a commutative family of homeomorphisms, if \(\mathcal{O}_{H}(x)\) is dense (in \(X\)) for some \(x\in X\) then \(\mathcal{O}_{H}(x)\) is dense (in \(X\)) for all \(x\in X\)._ Proof.: The proof follows from discussions in Remark 3 above. **Example 6**.: _Let \(\mathbb{S}^{1}\) be the unit circle and let the sequence \((f_{n})\) be defined as_ \[f_{n}(\theta)=\left\{\begin{array}{ll}\theta+2\pi\sum\limits_{i=1}^{k}\frac{1}{i}&:n=2k-1,\\ \theta-2\pi\sum\limits_{i=1}^{k}\frac{1}{i}&:n=2k.\end{array}\right.\] _Firstly, note that as \(\sum\limits_{i=1}^{\infty}\frac{1}{i}=\infty\), any point traverses the circle infinitely often (and hence passes across the origin infinitely often). Also, at the end of \(n=2k+1\) iterations, any point \(\theta\) rotates effectively by an angle \((2\pi\sum\limits_{i=1}^{k+1}\frac{1}{i})\ (\mathrm{mod}\ 2\pi)\). As the effective rotation gained between consecutive odd iterations is \(\frac{2\pi}{k+1}\), which is eventually smaller than \(\frac{1}{n}\), the orbit of any point is \(\frac{1}{n}\)-dense in \(X\) (for any \(n\in\mathbb{N}\)) and hence dense in \(X\); thus the system is transitive. Also, as \(\omega_{2n}(x)=x\) for all \(n\in\mathbb{Z}\), every point in \(\mathbb{S}^{1}\) is periodic (of period \(2\)). However, as the maps involved are isometries, the system is equicontinuous. Thus, a transitive system with a dense set of periodic points need not exhibit sensitive dependence on initial conditions._
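Example 6 can also be checked numerically. The sketch below (illustrative only; angles are measured in full turns so that reduction modulo \(1\) replaces reduction modulo \(2\pi\)) applies the maps \(f_{1},f_{2},\ldots\) step by step, confirms that every even time returns the point to itself, and measures how densely the odd-time states fill the circle.

```python
# Numerical sketch of Example 6 (illustrative): f_{2k-1} rotates by +H_k turns and
# f_{2k} by -H_k turns, where H_k = 1 + 1/2 + ... + 1/k.  Every point returns to
# itself at every even time, while the odd-time states theta + H_k fill the circle.

N = 5000                                   # number of blocks (f_{2k-1}, f_{2k}) applied
theta0 = 0.123                             # illustrative initial point (in turns)
theta, H, odd_states = theta0, 0.0, []

for k in range(1, N + 1):
    H += 1.0 / k                           # H_k
    theta = (theta + H) % 1.0              # f_{2k-1}: state at odd time 2k - 1
    odd_states.append(theta)
    theta = (theta - H) % 1.0              # f_{2k}:   state at even time 2k

print("drift from theta0 after", 2 * N, "steps:", abs(theta - theta0))   # ~0: period 2

odd_states.sort()
gaps = [b - a for a, b in zip(odd_states, odd_states[1:])]
gaps.append(odd_states[0] + 1.0 - odd_states[-1])                        # wrap-around arc
print("largest arc missed by the first", N, "odd-time states:", max(gaps))
```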
**Proposition 12**.: _For any non-autonomous system \((X,\mathbb{F})\) generated by a commutative family of homeomorphisms, if \((X,\mathbb{F})\) is totally transitive with a dense set of periodic points then \((X,\mathbb{F})\) is sensitive._ Proof.: Let \((X,\mathbb{F})\) be totally transitive with a dense set of periodic points, let \(x\in X\) and let \(U\) be any \(\frac{1}{n}\)-neighborhood of \(x\). As the set of periodic points is dense, \(U\) contains a periodic point \(p\) (say of period \(r\)). Also, if \(\mathrm{diam}(X)>k\), there exists \(y\in X\) such that \(d(x,y)>\frac{k}{2}\). As \((X,\mathbb{F})\) is \(r\)-transitive, there exists \(u\in U\) and \(m_{y}\in r\mathbb{Z}\) such that \(d(\omega_{m_{y}}(u),y)<\frac{1}{n}\). Also, as \(\omega_{m_{y}}(p)=p\), for a sufficiently large \(n\) we have \(d(\omega_{m_{y}}(u),\omega_{m_{y}}(p))>\frac{k}{4}\). Consequently, any neighborhood of any point in \(X\) expands to diameter greater than \(\frac{k}{4}\) and hence \((X,\mathbb{F})\) is sensitive. **Proposition 13**.: _For any minimal non-autonomous system \((X,\mathbb{F})\) generated by a commutative family of homeomorphisms, \((X,\mathbb{F})\) is either equicontinuous or exhibits a dense set of sensitive points._ Proof.: Let \((X,\mathbb{F})\) be a minimal system generated by a commutative family of homeomorphisms. If \((X,\mathbb{F})\) is equicontinuous then the result holds trivially. If not, let \(x\) be a point of sensitivity (with sensitivity constant \(\eta\)) and let \(k\in\mathbb{Z}\) be fixed. Let \(\epsilon>0\) be given and let \(U=S(\omega_{k}(x),\epsilon)\). As \(\omega_{k}\) is continuous, there exists \(\delta>0\) such that \(\omega_{k}(S(x,\delta))\subset S(\omega_{k}(x),\epsilon)\). As each \(\omega_{k}\) is a homeomorphism, there exists \(\eta^{\prime}>0\) such that \(d(a,b)\geq\eta\) implies \(d(\omega_{k}(a),\omega_{k}(b))\geq\eta^{\prime}\). Further, as \(x\) is a point of sensitivity, there exists \(y\in S(x,\delta)\) and \(r\in\mathbb{Z}\) such that \(d(\omega_{r}(x),\omega_{r}(y))\geq\eta\). Thus, we have \(d(\omega_{k}(\omega_{r}(x)),\omega_{k}(\omega_{r}(y)))\geq\eta^{\prime}\) and hence \(d(\omega_{r}(\omega_{k}(x)),\omega_{r}(\omega_{k}(y)))\geq\eta^{\prime}\) (as \(\mathbb{F}\) is commutative). As \(\omega_{k}(y)\in U\), \(\omega_{k}(x)\) is a point of sensitivity and hence sensitivity of a point ensures sensitivity at each element of the orbit. As the arguments can be repeated for elements of the orbit, sensitivity at a point \(x\) ensures sensitivity at the elements of the orbital hull. Finally, as \(X\) is minimal, the orbital hull of \(x\) is dense in \(X\) and the proof is complete. ## Acknowledgement The first author thanks MHRD for financial support. The second author thanks National Board for Higher Mathematics (NBHM) for financial support.
2302.06993
Colossal reversible barocaloric effects in a plastic crystal mediated by lattice vibrations and ion diffusion
Solid-state methods for cooling and heating promise a more sustainable alternative to current compression cycles of greenhouse gases and inefficient fuel-burning heaters. Barocaloric effects (BCE) driven by hydrostatic pressure ($p$) are especially encouraging in terms of large adiabatic temperature changes ($|\Delta T| \sim 10$ K) and colossal isothermal entropy changes ($|\Delta S| \sim 100$ JK$^{-1}$kg$^{-1}$). However, BCE typically require large pressure shifts due to irreversibility issues, and sizeable $|\Delta T|$ and $|\Delta S|$ seldom are realized in a same material. Here, we demonstrate the existence of colossal and reversible BCE in LiCB$_{11}$H$_{12}$, a well-known solid electrolyte, near its order-disorder phase transition at $\approx 380$ K. Specifically, for $\Delta p \approx 0.23$ $(0.10)$ GPa we measured $|\Delta S_{\rm rev}| = 280$ $(200)$ JK$^{-1}$kg$^{-1}$ and $|\Delta T_{\rm rev}| = 32$ $(10)$ K, which individually rival with state-of-the-art barocaloric shifts obtained under similar pressure conditions. Furthermore, over a wide temperature range, pressure shifts of the order of $0.1$ GPa yield huge reversible barocaloric strengths of $\approx 2$ JK$^{-1}$kg$^{-1}$MPa$^{-1}$. Molecular dynamics simulations were carried out to quantify the role of lattice vibrations, molecular reorientations and ion diffusion on the disclosed colossal BCE. Interestingly, lattice vibrations were found to contribute the most to $|\Delta S|$ while the diffusion of lithium ions, despite adding up only slightly to the accompanying entropy change, was crucial in enabling the molecular order-disorder phase transition. Our work expands the knowledge on plastic crystals and should motivate the investigation of BCE in a variety of solid electrolytes displaying ion diffusion and concomitant molecular orientational disorder.
Ming Zeng, Carlos Escorihuela-Sayalero, Tamio Ikeshoji, Shigeyuki Takagi, Sangryun Kim, Shin-ichi Orimo, María Barrio, Josep-Lluís Tamarit, Pol Lloveras, Claudio Cazorla, Kartik Sau
2023-02-14T11:56:06Z
http://arxiv.org/abs/2302.06993v1
Colossal reversible barocaloric effects in a plastic crystal mediated by lattice vibrations and ion diffusion ###### Abstract Solid-state methods for cooling and heating promise a more sustainable alternative to current compression cycles of greenhouse gases and inefficient fuel-burning heaters. Barocaloric effects (BCE) driven by hydrostatic pressure (\(p\)) are especially encouraging in terms of large adiabatic temperature changes (\(|\Delta T|\sim 10\) K) and colossal isothermal entropy changes (\(|\Delta S|\sim 100\) J K\({}^{-1}\) kg\({}^{-1}\)). However, BCE typically require large pressure shifts due to irreversibility issues, and sizeable \(|\Delta T|\) and \(|\Delta S|\) seldom are realized in a same material. Here, we demonstrate the existence of colossal and reversible BCE in LiCB\({}_{11}\)H\({}_{12}\), a well-known solid electrolyte, near its order-disorder phase transition at \(\approx 380\) K. Specifically, for \(\Delta p\approx 0.23\) (0.10) GPa we measured \(|\Delta S_{\rm rev}|=280\) (200) J K\({}^{-1}\) kg\({}^{-1}\) and \(|\Delta T_{\rm rev}|=32\) (10) K, which individually rival with state-of-the-art barocaloric shifts obtained under similar pressure conditions. Furthermore, over a wide temperature range, pressure shifts of the order of 0.1 GPa yield huge reversible barocaloric strengths of \(\approx 2\) J K\({}^{-1}\) kg\({}^{-1}\) MPa\({}^{-1}\). Molecular dynamics simulations were carried out to quantify the role of lattice vibrations, molecular reorientations and ion diffusion on the disclosed colossal BCE. Interestingly, lattice vibrations were found to contribute the most to \(|\Delta S|\) while the diffusion of lithium ions, despite adding up only slightly to the accompanying entropy change, was crucial in enabling the molecular order-disorder phase transition. Our work expands the knowledge on plastic crystals and should motivate the investigation of BCE in a variety of solid electrolytes displaying ion diffusion and concomitant molecular orientational disorder. ## I Introduction Solid-state methods for cooling and heating are energy efficient and ecologically friendly techniques with potential for solving the environmental problems posed by conventional refrigeration and heat pump technologies relying on compression cycles of greenhouse gases and inefficient traditional fuel-burning heaters [1]. Under moderate magnetic, electric or mechanical field variations, aspicious caloric materials experience large adiabatic temperature variations (\(|\Delta T|\sim\) 1-10 K) as a result of phase transformations entailing large isothermal entropy changes (\(|\Delta S|\sim\) 10-100 J K\({}^{-1}\) kg\({}^{-1}\)) [2; 3]. Solid-state cooling and heat pumping capitalize on such caloric effects for engineering refrigeration and heating cycles. From a practical point of view, large and reversible \(|\Delta T|\) and \(|\Delta S|\) are both necessary for achieving rapid and efficient devices under recursive application and removal of the driving fields. In terms of largest \(|\Delta T|\) and \(|\Delta S|\), mechanocaloric effects induced by uniaxial stress (elastocaloric effects) and hydrostatic pressure (barocaloric effects -BCE-) are among the most promising [4; 5; 6]. Recently, colossal and reversible BCE (\(|\Delta S_{\rm rev}|\geq\) 100 J K\({}^{-1}\) kg\({}^{-1}\)) have been measured in several families of materials displaying order-disorder phase transitions under pressure shifts of the order of 0.1 GPa [7; 8; 9; 10; 11; 12; 13; 14; 15]. 
On one hand, there are plastic crystals like neopentane derivatives [7; 8; 9], adamantane derivatives [10; 15] and exporanes [11] in which the underlying phase transitions involve molecular orientational disorder stabilized under increasing temperature. On the other hand, there are polymers (e.g., acetoxy silicone rubber) [12] and layered hybrid organic-inorganic perovskites (e.g., [C\({}_{10}\)H\({}_{21}\)NH\({}_{3}\)]\({}_{2}\)MnCl\({}_{4}\)) [13; 14] in which the accompanying phase transformations entail significant atomic rearrangements in the organic components. Another family of disordered materials presenting also great barocaloric promise are solid electrolytes (e.g., AgI, Li\({}_{3}\)N and Cu\({}_{2}\)Se) [16; 17; 18; 19], although in this latter case the experimentally reported \(|\Delta S_{\rm rev}|\) fall slightly below the colossal threshold value of 100 J K\({}^{-1}\) kg\({}^{-1}\)[16]. In spite of these recent developments, finding barocaloric materials with well-balanced and suitable features for developing thermal applications, e.g., 20 K and \(|\Delta S_{\rm rev}|\geq 100\) J K\({}^{-1}\) kg\({}^{-1}\) driven by \(\Delta p\lesssim 0.1\) GPa, is proving extremely difficult. From the hundred of barocaloric materials known to date [6], to the best of our knowledge only four fulfill the conditions specified above, namely, the spin-crossover complex Fe\({}_{3}\)(bntrz)\({}_{6}\)(tcnset)\({}_{6}\) (\(|\Delta T_{\rm rev}|=35\) K and \(|\Delta S_{\rm rev}|=120\) J K\({}^{-1}\) kg\({}^{-1}\) for \(\Delta p=0.26\) GPa) [21], the layered hybrid perovskite [C\({}_{10}\)H\({}_{21}\)NH\({}_{3}\)]\({}_{2}\)MnCl\({}_{4}\) (\(|\Delta T_{\rm rev}|=27\) K and \(|\Delta S_{\rm rev}|=250\) J K\({}^{-1}\) kg\({}^{-1}\) for \(\Delta p=0.19\) GPa) [13; 14], the plastic crystal 1-Br-adamantane (\(|\Delta T_{\rm rev}|=20\) K and \(|\Delta S_{\rm rev}|=120\) J K\({}^{-1}\) kg\({}^{-1}\) for \(\Delta p=0.10\) GPa) [10], and the elastomer acetoxy silicone (\(|\Delta T_{\rm rev}|=22\) K and \(|\Delta S_{\rm rev}|=182\) J K\({}^{-1}\) kg\({}^{-1}\) for \(\Delta p=0.17\) GPa) [12]. Moreover, studies addressing a fundamental and quantitative understanding of the atomistic mechanisms that bring on such colossal BCE are very scarce [22; 23; 24; 25], thus hindering the rational design of disordered materials with enhanced barocaloric performances. In this work, we experimentally and theoretically demonstrate the existence of colossal and reversible BCE in the monocarba-_closo_-dodecaborate LiCB\({}_{11}\)H\({}_{12}\) (LCBH) near its order-disorder phase transition occurring at \(T_{t}\approx 380\) K [20]. LCBH is a well-known solid electrolyte in which at temperatures above \(T_{t}\) the lithium cations are highly mobile and the molecular anions [CB\({}_{11}\)H\({}_{12}\)]\({}^{-}\) reorient disorderly [26; 27] (Fig. 1); thus, LCBH combines phase-transition features of both plastic crystals and superionic compounds, two families of materials for which colossal and giant BCE, respectively, have been previously reported [7; 8; 9; 16]. In par Figure 1: **Sketch of the order-disorder phase transition occurring in LCBH upon increasing temperature.** (a) Ball-stick representation of the low-\(T\) ordered (O) and high-\(T\) disordered (D) phases. Lithium, carbon, boron and hydrogen atoms are represented with red, brown, green and blue spheres, respectively. 
In the high-\(T\) phase, the Li\({}^{+}\) cations diffuse throughout the crystalline matrix while the [CB\({}_{11}\)H\({}_{12}\)]\({}^{-}\) anions reorient disorderly [20]; the volume increases significantly during the \(T\)-induced phase transition. (b) Outline of the order-disorder phase transition in terms of Gibbs free energies. The red dotted lines represent internal energies and the blue solid lines Gibbs free energies; \(T_{t}\) denotes the phase transition temperature. ticular, we measured colossal values of \(|\Delta T_{\rm rev}|=32\) K and \(|\Delta S_{\rm rev}|=280\) JK\({}^{-1}\)kg\({}^{-1}\) for a pressure shift of 0.23 GPa, and large and reversible barocaloric strengths of \(\approx 2\) J K\({}^{-1}\) kg\({}^{-1}\) MPa\({}^{-1}\) over a wide temperature interval of several tens of degrees. Likewise, for a smaller pressure shift of 0.10 GPa assuring values of \(|\Delta S_{\rm rev}|=200\) J K\({}^{-1}\) kg\({}^{-1}\) and \(|\Delta T_{\rm rev}|=10\) K were obtained. Atomistic molecular dynamics simulations were performed to reveal key phase transition mechanisms and quantify the role played by the vibrational, molecular orientational and ion diffusive degrees of freedom on the disclosed BCE. Very interestingly, the contribution of the lattice vibrations to \(\Delta S\) was found to be the dominant at all pressures, instead of the typically assumed one resulting from molecular reorientational motion [22; 23; 24]. Our results provide new valuable insights into the physical behavior and functionality of plastic crystals and suggest that colossal BCE similar to those reported here for LCBH could also exist in other akin _closo_-borate materials like NaCB\({}_{11}\)B\({}_{12}\)[20; 26], KCB\({}_{11}\)B\({}_{12}\)[28], and LiCB\({}_{9}\)H\({}_{10}\)[29; 30]. ## II Results and discussion ### LiCB\({}_{11}\)H\({}_{12}\) general properties In a recent X-ray powder diffraction study [20], it has been shown that at room temperature LiCB\({}_{11}\)H\({}_{12}\) (LCBH) presents an ordered orthorhombic structure (space group \(Pca2_{1}\)) in which the Li\({}^{+}\) cations reside near trigonal-planar sites surrounded by molecular \([\)CB\({}_{11}\)H\({}_{12}]^{-}\) anions arranged in a cubic sublattice. An order-disorder phase transition occurs at \(T_{t}\approx 380\) K that stabilizes a disordered phase in which the Li\({}^{+}\) cations are highly mobile and the molecular anions present fast reorientational motion (Fig. 1a). At normal pressure, the lithium ion conductivity measured just above \(T_{t}\) exceeds values of 0.1 S cm\({}^{-1}\)[20] and the reorientational motion of the molecular anions can reach frequencies of \(10^{11}\) s\({}^{-1}\)[20; 31]. Meanwhile, the \(T\)-induced order-disorder phase transition is accompanied by a huge volume increase of the order of \(\approx 10\%\)[31] that, based on the Clausius-Clapeyron (CC) equation \(\Delta S_{t}=\Delta V_{t}\frac{dp}{dT}\), suggests great barocaloric potential. The described order-disorder phase transition can be qualitatively understood in terms of the Gibbs free energy difference between the high-\(T\) disordered (D) and Figure 2: **Experimental phase diagram of bulk LCBH and corresponding phase transition entropy changes.** (a) Volume per formula unit measured as a function of temperature at normal pressure. (b) Isobaric heat flow data expressed as a function of applied pressure and temperature; data collected during heating (cooling) are represented in the positive (negative) y-axis. 
(c) Pressure and temperature phase diagram; transition temperatures are determined from the peaks in panel (b). (d) Phase transition entropy changes as a function of pressure and transition path. \(\Delta S_{t}\) remains practically constant from atmospheric pressure all the way up to the triple point. At \(p\simeq 0.13\) GPa, \(\Delta S_{\rm II\to I}\approx\Delta S_{\rm II\to III}+\Delta S_{\rm III\to I}\), while above the triple point \(\Delta S_{\rm II\to III}\approx\Delta S_{\rm III\to I}\). Straight lines at pressures above the triple point are linear fits to \(\Delta S_{\rm II\to III}+\Delta S_{\rm III\to I}\). low-\(T\) ordered (O) phases, \(\Delta G\equiv G^{D}-G^{O}\) (Fig. 1b). This free energy difference consists of internal energy (\(\Delta E\)), entropy (\(-T\Delta S\)) and volume (\(p\Delta V\)) terms. The internal energy remains more or less constant during the phase transition while the volume term disfavors the stabilization of the disordered phase since \(\Delta V>0\). Thus, the LCBH order-disorder phase transition appears to be governed by the change in entropy, \(\Delta S\), which in view of the ion conductivity and molecular reorientational frequency measured above \(T_{t}\) should be fairly large. ### Experimental barocaloric results Conventional X-ray powder diffraction experiments performed at normal pressure and under varying temperature confirmed the expected structures of the low-\(T\) and high-\(T\) phases (orthorhombic and cubic symmetry, respectively). Pattern matching analysis of the obtained data yielded the temperature-dependent volume of LCBH (see Fig. 2a), which shows a huge \(\approx 13\%\) relative volume increase at the endothermic transition corresponding to \(\Delta V\approx 12\cdot 10^{-5}\) m\({}^{3}\) kg\({}^{-1}\). High-pressure differential thermal analysis (HP-DTA) was carried out in the pressure interval \(0\leq p\leq 0.23\) GPa (Fig. 2b). At pressures below \(\approx 0.13\) GPa, a single peak in the heat flow was measured corresponding to the aforementioned orthorhombic (ordered phase, II) \(\leftrightarrow\) cubic (disordered phase, I) first-order phase transition. At pressures above \(\approx 0.13\) GPa, the HP-DTA signals exhibit two peaks, thus indicating the appearance of a new phase that we label here as III (high-pressure enantiotropy). To the best of our knowledge, phase III has not been previously reported in the literature and its specific crystalline structure remains unknown since we did not resolve it. Interestingly, a broad peak was previously detected in differential scanning calorimetry experiments [20] that hints at the stabilization of phase III. Transition temperatures were determined from the maximum of the HP-DTA peaks (Fig. 2c), which allowed us to estimate an upper threshold for the triple point at \(\approx\) (425 K, 0.13 GPa) given the width of the peaks obtained under the chosen scanning rate. Considering only the data measured near atmospheric pressure, the pressure dependence of the II\(\rightarrow\)I transition was determined to be \(\frac{dT}{dp}\approx 420\) K GPa\({}^{-1}\), which slightly decreases under increasing pressure due to the small convexity of the coexistence line. Figure 3: **Experimentally measured colossal barocaloric effects in bulk LCBH.** (a)–(c) Isothermal entropy change, \(\Delta S\), and (b)–(d) adiabatic temperature change, \(\Delta T\), obtained upon the application and removal of pressure, \(p\), considering (a)–(b) irreversible and (c)–(d) reversible processes. 
For the II\(\rightarrow\)III and III\(\rightarrow\)I transitions, linear fits to the obtained coexistence lines yielded \(\frac{dT}{dp}\approx 135\) K GPa\({}^{-1}\) and \(\frac{dT}{dp}\approx 310\) K GPa\({}^{-1}\), respectively. Phase transition entropy changes were calculated via integration of the \(\frac{1}{T}\frac{dQ}{dT}\) function after baseline subtraction. As it was already expected, the \(\Delta S_{\rm II\to I}\) values associated to the LCBH order-disorder phase transition are noticeably large, namely, \(\approx 208\) J K\({}^{-1}\) kg\({}^{-1}\) (Fig. 2d). By plugging the measured \(\frac{dT}{dp}\) and \(\Delta S_{\rm II\to I}\) values at atmospheric pressure in the CC equation we obtain \(\Delta V_{\rm CC}\approx 9\cdot 10^{-5}\) m\({}^{3}\) kg\({}^{-1}\), which is in reasonable agreement with the \(\Delta V\) determined directly in the experiments. Above \(p\approx 0.13\) GPa, due to the overlapping between the II\(\leftrightarrow\)III and III\(\leftrightarrow\)I peaks, the contribution associated to each phase transition was decided at the inflection point of the cumulative entropy change function \(\int_{T_{1}}^{T}\frac{1}{T^{\prime}}\frac{dQ}{dT^{\prime}}dT^{\prime}\). \(\Delta S_{\rm t}\) remains practically constant from atmospheric pressure all the way up to the triple point. At \(p\simeq 0.13\) GPa, we obtained \(\Delta S_{\rm II\to I}\approx\Delta S_{\rm II\to III}+\Delta S_{\rm III \to I}\), as it is required by the condition of thermodynamic equilibrium, while above the triple point \(\Delta S_{\rm II\to III}\approx\Delta S_{\rm III\to I}\). Splitting of the II\(\rightarrow\)I phase transition into II\(\rightarrow\)III and III\(\rightarrow\)I might be associated to the decoupling of the diffusive and orientational degrees of freedom right at the stabilization of the high-\(T\) phase, although further investigations are necessary for a more conclusive assessment of phase III. HP-DTA measurements along with experimental differential scanning calorimetry (Supplementary Fig. S1), heat capacity (Supplementary Fig. S2) and theoretical equations of state \(V(T,p)\) (i.e., obtained from molecular dynamics simulations, Sec. II.3) were used to determine the isobaric entropy curves \(S(T,p)\) (Supplementary Fig. S3), from which the BC effects can be directly calculated (Methods). Figures 3a,b show representative isothermal entropy changes, \(|\Delta S|\), and adiabatic temperature changes, \(|\Delta T|\), obtained upon the first application and removal of the driving pressure shift. It is worth noticing that a small \(\Delta p\approx 0.03\) GPa already produced colossal values of \(|\Delta S|=100\) J K\({}^{-1}\) kg\({}^{-1}\) and \(|\Delta T|=8\) K, and similarly \(\Delta p\approx 0.08\) GPa yielded \(|\Delta S|=250\) J K\({}^{-1}\) kg\({}^{-1}\) and \(|\Delta T|=16\) K. For the largest pressure shift considered in this study, namely, \(\Delta p\approx 0.23\) GPa, the resulting \(|\Delta S|\) and \(|\Delta T|\) amount to 300 J K\({}^{-1}\) kg\({}^{-1}\) and 40 K, respectively. Operation of solid-state cooling and heating devices requires cyclic application and removal of the driving fields, for which reversible caloric effects, \(|\Delta S_{\rm rev}|\) and Figure 4: **Compendium of experimentally measured reversible BCE.** The size of the symbols represents the reversible barocaloric strength defined as the ratio of \(|\Delta S_{\rm rev}|\) by the corresponding pressure change \(\Delta p\). Material names are indicated near each symbol or in the right side of the panel. 
NPG: neopentylglycol; PG: pentaglycerine; NPA: Neopentyl alcohol; o-carb: ortho-carb; m-carb: metacarborane; p-carb: paracarborane; 1-Br-ada: 1-Bromoadamantane; 1-Cl-ada: 1-Chloroadamantane; 1ada-ol: 1-adamantanol; 2ada-ol: 2-adamantanol; 2m2ada-ol: 2-methyl-2-adamantanol; ASR: Acetoxy Silicone Rubber. Numerical details and references can be found in the Supplementary Table S1. \(|\Delta T_{\rm rev}|\), must be considered. By reversible caloric effects we mean acquitted of phase transition hysteresis effects [9]. The obtained results are shown in Figs. 3c,d. Colossal \(|\Delta S_{\rm rev}|\) were already obtained for a minimum pressure shift of \(\approx 0.08\) GPa. For instance, under a moderate pressure change of \(\approx 0.10\) GPa LCBH renders \(|\Delta S_{\rm rev}|=200\) J K\({}^{-1}\) kg\({}^{-1}\) and \(|\Delta T_{\rm rev}|=10\) K. Meanwhile, for the largest pressure shift considered in this study we measured outstanding values of \(|\Delta S_{\rm rev}|=280\) J K\({}^{-1}\) kg\({}^{-1}\) and \(|\Delta T_{\rm rev}|=32\) K. Figure 4 compares most of the experimental \(|\Delta S_{\rm rev}|\) and \(|\Delta T_{\rm rev}|\) reported thus far in the literature for barocaloric materials. Additionally, the size of the symbols therein account for the materials BC strength, which is defined as the ratio of \(|\Delta S_{\rm rev}|\) by the corresponding pressure shift \(\Delta p\). The best performing barocaloric materials, therefore, should appear in the top right side of the panel and with the largest possible symbol area. Each material has been represented with one or two points that best illustrate their overall barocaloric performance, while for LCBH we have selected a set of barocaloric measurements. Although LCBH is not the best performing material in terms of a single quality, it displays an unprecedentedly well-balanced and accomplished barocaloric portfolio consisting of colossal \(|\Delta S_{\rm rev}|\), large \(|\Delta T_{\rm rev}|\) and large BC strength obtained under moderate pressure shifts of the order of 0.10 GPa. For instance, in terms of largest \(|\Delta S_{\rm rev}|\) the plastic crystal neopentylglycol (NPG) emerges as the clear winner since it holds a gigantic value of \(\approx 400\) J K\({}^{-1}\) kg\({}^{-1}\)[9]; however, as regards \(|\Delta T_{\rm rev}|\) the same material becomes a poor contestant in the presence of LCBH (that is, \(\approx 8\) K versus 32 K). Likewise, the \(|\Delta T_{\rm rev}|\) record holder, namely, the spin-crossover complex Fe\({}_{3}\)(brutz)\({}_{6}\)(tcnet)\({}_{6}\)[21], presents \(|\Delta S_{\rm rev}|\) and BC strength values that roughly are halves of the LCBH maxima (for instance, \(\approx 120\) J K\({}^{-1}\) kg\({}^{-1}\) versus 280 J K\({}^{-1}\) kg\({}^{-1}\)). Therefore, LCBH can be deemed as one of the most thorough and promising barocaloric materials reported to date owing to its unique parity between sizable \(|\Delta S_{\rm rev}|\) and \(|\Delta T_{\rm rev}|\) obtained under moderate pressure shifts. Figure 5: **Colossal BCE estimated for bulk LCBH with MD simulations.** (a) Volume change per formula unit across the phase transition expressed as a function of temperature and pressure. (b) Total entropy curves expressed as a function of pressure and temperature. _Inset_: theoretically calculated \(p\)–\(T\) phase diagram. (c) Isothermal entropy and (d) adiabatic temperature changes expressed as a function of temperature and pressure. Results were obtained from \(NpT\)-MD simulations. 
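Before turning to the atomistic simulations, the reversible barocaloric strengths entering the comparison of Fig. 4 can be recomputed directly from the \(|\Delta S_{\rm rev}|\) and \(\Delta p\) values quoted above; the short sketch below is only an arithmetic illustration using those quoted numbers.

```python
# Reversible barocaloric strength |dS_rev| / dp for the materials whose values are
# quoted in the text above (an arithmetic illustration only; dp converted to MPa).
data = {                                     # (|dS_rev| in J K^-1 kg^-1, dp in GPa)
    "LCBH, dp = 0.10 GPa":      (200, 0.10),
    "LCBH, dp = 0.23 GPa":      (280, 0.23),
    "Fe3(bntrz)6(tcnset)6":     (120, 0.26),
    "[C10H21NH3]2MnCl4":        (250, 0.19),
    "1-Br-adamantane":          (120, 0.10),
    "acetoxy silicone rubber":  (182, 0.17),
}

for name, (dS, dp_gpa) in data.items():
    strength = dS / (dp_gpa * 1000.0)        # J K^-1 kg^-1 MPa^-1
    print(f"{name:28s} {strength:4.1f} J K^-1 kg^-1 MPa^-1")
```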
### Atomistic simulation of barocaloric effects Figures 5a,b show the theoretical equation of state \(V(T,p)\) and \(p\)-\(T\) phase diagram of bulk LCBH obtained from molecular dynamics (MD) simulations (Methods). We determined the coexistence line of the high-\(T\) (disordered) and low-\(T\) (ordered) phases by conducting numerous MD simulations at small \(p\)-\(T\) shifts of 0.025 GPa and 12.5 K. Each phase coexistence point in Fig. 5b (_inset_) corresponds to sharp and simultaneous changes in the volume, Li\({}^{+}\) diffusion coefficient (\(D_{\rm Li}\)), and molecular [CB\({}_{11}\)H\({}_{12}\)]\({}^{-}\) orientational frequency (\(\lambda_{\rm CBH}\)), as identified in the MD simulations (Figs. 6a,b). At zero pressure, we estimated a huge volume increase of about 11% at the theoretical transition temperature \(T_{t}\approx 400\) K, along with the order parameter changes \(\Delta D_{\rm Li}=1.13\cdot 10^{-6}\) cm\({}^{2}\) s\({}^{-1}\) and \(\Delta\lambda_{\rm CBH}=0.33\cdot 10^{11}\) s\({}^{-1}\). It was found that the pressure dependence of the transition temperature could be precisely reproduced by the second-order polynomial curve \(T_{t}(p)=412+438p-610p^{2}\) (red line in the inset of Fig. 5b), in which the temperature and pressure are expressed in units of K and GPa, respectively. Figure 6: **Atomistic insights into the order-disorder phase transition in LCBH from MD simulations.** (a) Lithium ion diffusion coefficient, \(D_{\rm Li}\). (b) Anionic reorientational frequency, \(\lambda_{\rm CBH}\). Solid lines correspond to Arrhenius law fits. (c)–(d) Cumulative function of the vibrational entropy as a function of the phonon energy and atomic species, calculated for the ordered (\(T=400\) K) and disordered (\(T=412\) K) phases at zero pressure. Dashed lines indicate analogous asymptotic values reached in the ordered phase. (e)–(f) Angular probability density function estimated for the molecular (CB\({}_{11}\)H\({}_{12}\))\({}^{-}\) anions calculated in the ordered (\(T=350\) K) and disordered (\(T=550\) K) phases at zero pressure, expressed as a function of the polar (\(\theta\)) and azimuthal (\(\phi\)) angles. Dark and bright areas represent low and high probability regions, respectively. The slight \(\frac{dT}{dp}\) decrease under increasing compression is consistent with the \(p\)-induced reduction of the transition volume change since \(\Delta S_{t}\) is roughly independent of pressure, in agreement with our experiments. It is worth noting that phase-transition hysteresis effects cannot be reproduced by the equilibrium MD approach employed in this study [25]. The LCBH \(p\)-\(T\) phase diagram obtained from MD simulations (Fig. 5b) is in quantitative good agreement with the experiments performed below the triple point found at \(\approx 0.13\) GPa (Fig. 2b), although the transition temperatures are slightly overestimated by theory. For example, at zero pressure and \(p=0.10\) GPa the MD simulations yielded \(T_{t}=410\pm 15\) and \(440\pm 15\) K (Fig. 5), respectively, to be compared with the corresponding experimental values \(390\pm 10\) and \(410\pm 10\) K (Fig. 2b). The agreement between the predicted and measured volumes for the ordered and disordered phases at zero pressure is also notable, finding only small relative discrepancies of \(\sim 1\%\) for the low-\(T\) phase (Figs. 2a and 5a). Meanwhile, the triple point observed in the experiments was not reproduced by the MD simulations. 
It is worth noting, however, that under \(p\neq 0\) conditions and close to \(T_{t}\) we observed pre-transitional effects in our simulations consisting of few slowly diffusing Li ions in the ordered phase (Supplementary Fig. S4). Figures 5c,d show the theoretical barocaloric \(|\Delta S|\) and \(|\Delta T|\) deduced from the entropy curves \(S(p,T)\) enclosed in Fig. 5b, which were obtained from data generated in the MD simulations. The agreement between these theoretical results and the corresponding experimental values is remarkably good for pressures below the experimental triple point. For example, for a pressure shift of 0.10 GPa we estimated an isothermal entropy change of 227 J K\({}^{-1}\) kg\({}^{-1}\) and an adiabatic temperature change of 32 K from the MD simulations, to be compared with the corresponding experimental values 250 J K\({}^{-1}\) kg\({}^{-1}\) and 24 K (Fig. 3a,b). In view of such a notable agreement, we characterized with MD simulations the contributions to the phase transition entropy change stemming from the vibrational, molecular orientational and cation diffusive degrees of freedom, a highly valuable atomistic insight that in principle cannot be obtained from the experiments. Figures 6a,b reveal synchronized surges in \(D_{\rm Li}\) and \(\lambda_{\rm CBH}\) at the order-disorder phase transition points. Thus, both ion diffusion and molecular anion orientational disorder (Figs. 6e,f) contribute to the transition entropy change and barocaloric effects disclosed in LCBH. Nevertheless, there is a third possible source of entropy in the crystal which is related to the lattice vibrations, \(S_{\rm vib}\) (Supplementary Figs. S5-S6). Figures 6c,d show examples of the cumulative \(S_{\rm vib}\) function expressed as a function of the vibrational phonon energy, calculated for LCBH in the ordered and disordered phases at zero pressure and evaluated for each atomic species. Therein, it is appreciated that the largest contribution to the \(S_{\rm vib}\) difference between the order and disordered phases comes from the B atoms (followed by hydrogen). This outcome can be rationalized in terms of the relative great abundance of this species in LCBH (\(\approx 45\%\)) and its larger mass as compared to that of H atoms (10 times heavier): B ions have a predominant weight on the low-frequency vibrational modes (Fig. 6c-d) that most significantly contribute to \(S_{\rm vib}\) near ambient temperature. Figure 7 shows the relative contributions of the vibrational, molecular orientational and ion diffusion degrees of freedom to the phase transition entropy change estimated at different pressures with MD simulations. Interestingly, in all the analyzed cases the largest contribution stems from changes in the lattice vibrations, \(\Delta S_{\rm vib}\), followed by the molecular reorientations, \(\Delta S_{\rm ori}\), and finally ion diffusion, \(\Delta S_{\rm diff}\). For example, at zero pressure the vibrational, molecular orientational and ion diffusive degrees of freedom respectively contribute in \(\approx 48\), 32 and 20% to \(\Delta S_{t}\). The entropy preeminence of the lattice vibrations can be rationalized in terms of (1) the huge volume expansion accompanying the order-disorder phase transition (\(\sim 10\%\), Fig. 5a), which further curtails the frequency of the low-energy phonon bands in the disordered phase (Supplementary Fig. 
S5), and (2) the intensification and amplitude broadening of the molecular libration modes in the disordered phase (inferred from the angular probability density variations around the equilibrium positions in Figs. 6e-f). These outcomes are highly valuable and insightful since, thus far, molecular reorientations were thought to be the primary source of entropy variation in plastic crystals undergoing order-disorder phase transitions [22; 23; 24].

Figure 7: **Partial contributions to the entropy change accompanying the order-disorder phase transition in LCBH expressed as a function of pressure.** Entropy changes stem from the vibrational, \(\Delta S_{\rm vib}\), molecular orientational, \(\Delta S_{\rm ori}\), and cation diffusive, \(\Delta S_{\rm diff}\), degrees of freedom. Results were obtained from comprehensive molecular dynamics simulations and Gibbs free energy calculations (Methods).

The vibrational and orientational entropy changes remain more or less constant for pressures \(\leq 0.1\) GPa, whereas \(\Delta S_{\rm diff}\) significantly decreases under compression. For instance, at 0.1 GPa the diffusive degrees of freedom contribute less than 4% to \(\Delta S_{t}\). These outcomes can be understood in terms of the small fraction of diffusive ions in LCBH (i.e., one Li atom per formula unit) and the marked decline in \(D_{\rm Li}\) induced by pressure (Fig. 6a). The appearance of pre-transitional effects in our MD simulations, especially under \(p\neq 0\) conditions (Supplementary Fig. S4), also contributes to the noticeable \(\Delta S_{\rm diff}\) drop caused by compression. Nonetheless, it is worth noting that despite the relative minuteness of \(\Delta S_{\rm diff}\), cation disorder was found to play a critical role in triggering molecular orientational disorder, which by contrast contributes very significantly to \(\Delta S_{t}\). In particular, we conducted constrained MD runs in which we fixed the positions of the lithium ions so that they could not diffuse. It was found then that molecular orientational disorder only emerged at temperatures well above 550 K (Supplementary Fig. S7). Therefore, it can be concluded that cation disorder crucially assists in the realization of colossal BCE through the order-disorder phase transition, a characteristic trait that differentiates LCBH from other molecular plastic crystals that also bear great barocaloric promise.

## III Conclusions

Colossal barocaloric effects (BCE) driven by pressure shifts of the order of 0.10 GPa were experimentally and theoretically disclosed in bulk LiCB\({}_{11}\)H\({}_{12}\) (LCBH), a compound that at high temperatures presents disorder features characteristic of both plastic crystals and superionic materials, namely, molecular reorientational motion and ion diffusion. Reversible peaks of \(|\Delta S_{\rm rev}|=280\) J K\({}^{-1}\) kg\({}^{-1}\) and \(|\Delta T_{\rm rev}|=32\) K were experimentally measured around 400 K for a pressure shift of 0.23 GPa, yielding huge and reversible barocaloric strengths of \(\approx 2\) J K\({}^{-1}\) kg\({}^{-1}\) MPa\({}^{-1}\) over temperature intervals of tens of degrees. Likewise, for a smaller pressure shift of 0.10 GPa we obtained very promising values of \(|\Delta S_{\rm rev}|=200\) J K\({}^{-1}\) kg\({}^{-1}\) and \(|\Delta T_{\rm rev}|=10\) K. These results place LCBH among the best-known barocaloric materials in terms of huge and reversible isothermal entropy and adiabatic temperature changes, two quantities that are rarely found simultaneously in the same material.
Atomistic molecular dynamics simulations yielded theoretical \(|\Delta S|\) and \(|\Delta T|\) in very good agreement with the experimental values, and allowed us to quantify the importance of the vibrational, molecular orientational, and ion diffusive degrees of freedom for the disclosed colossal BCE. It was found that the contribution to the phase transition entropy change stemming from the lattice vibrations was the largest, followed by that of molecular reorientations, both being much larger than the entropy associated with lithium diffusion alone. Nevertheless, cationic disorder was found to have a critical influence on the stabilization of orientational disorder; thus, in spite of its small contribution to \(\Delta S_{t}\), lithium diffusion appears to be essential for the emergence of colossal BCE in bulk LCBH. These results are of high significance since they reveal the preeminence of the vibrational degrees of freedom in the phase transition entropy change of a plastic crystal, and demonstrate atomistic BCE mechanisms other than molecular reorientational disorder (i.e., lattice vibrations and ion diffusion). LCBH belongs to the family of _closo_-borate materials, a promising class of solid electrolytes for all-solid-state batteries. Examples of akin compounds that have already been synthesized in the laboratory and tested for electrochemical energy storage applications are NaCB\({}_{11}\)H\({}_{12}\)[20; 26], KCB\({}_{11}\)H\({}_{12}\)[28], and LiCB\({}_{9}\)H\({}_{10}\)[29; 30]. Colossal BCE could also exist in these materials and in other similar compounds harboring both ion diffusion and molecular orientational disorder at or near room temperature. Thus, the present combined experimental-theoretical study opens new horizons in solid-state cooling and heating and advances knowledge on the realization of colossal BCE in plastic crystals.

## Methods

### Experimental techniques

_Materials synthesis._ LiCB\({}_{11}\)H\({}_{12}\) was obtained by drying the hydrated compound LiCB\({}_{11}\)H\({}_{12}\)\(\cdot\)xH\({}_{2}\)O (Katchem, Ltd.) under vacuum (\(<5\times 10^{-4}\) Pa) at 160 \({}^{\circ}\)C for 12 h. _X-ray powder diffraction_. High-resolution X-ray powder diffraction measurements were performed using the Debye-Scherrer geometry and transmission mode with a horizontally mounted cylindrical position-sensitive INEL detector (CPS-120). Monochromatic Cu-K\(\alpha_{1}\) radiation was selected by means of a curved germanium monochromator. Temperature-dependent measurements were performed using a liquid nitrogen 700 series Oxford Cryostream Cooler. Powder samples were introduced into 0.5 mm diameter Lindemann capillaries. Volume was obtained by a pattern-matching procedure. _Quasi-direct barocaloric measurements._ A Q100 thermal analyzer (TA Instruments) was used to perform differential scanning calorimetry experiments at atmospheric pressure with \(\sim 10\) mg of sample hermetically encapsulated in aluminum pans (Supplementary Fig. S1). The standard mode (at 3, 5 and 10 K min\({}^{-1}\)) was used to determine the transition properties whereas the modulated mode (isothermal conditions, modulation amplitude 1 \({}^{\circ}\)C, modulation period 120 s) was used to measure the heat capacity in each phase (Supplementary Fig. S2). Pressure-dependent calorimetry was performed with a custom-built high-pressure differential thermal analyzer (from Irimo, Bellota Herramientas S.A.) that uses Bridgman thermocouples as thermal sensors.
The nominal operational pressure range is from atmospheric pressure to 0.3 GPa and the temperature range is from room temperature up to 473 K. Heating ramps were performed at 3 K min\({}^{-1}\) using a resistive heater whereas cooling runs were carried out at \(\sim-2\) K min\({}^{-1}\) using an air stream. A few hundred mg of LiCB\({}_{11}\)H\({}_{12}\) were mixed with an inert perfluorinated fluid (Galden, Bioblock Scientific) to remove air and sealed within tin capsules. The pressure-transmitting fluid was Therm240 (Lauda). Isobaric entropy functions \(S(T,p)\) were determined with respect to a reference temperature \(T_{0}\) below the transition using the method explained in Ref. [32] (Supplementary Fig. S3). The procedure is based on the following thermodynamic equation: \[S(T,p)=S(T_{0},p)+\int_{T_{0}}^{T}\frac{1}{T^{\prime}}\left(C_{p}+\frac{dQ}{dT^{\prime}}\right)dT^{\prime}\, \tag{1}\] where \(\frac{dQ}{dT}\) is the heat flow in temperature due to the first-order phase transition measured by pressure-dependent calorimetry. In each phase, \(C_{p}\) is the corresponding heat capacity and was considered independent of pressure, as indicated by the approximately linear behavior of volume with temperature obtained in the two phases from MD simulations (Fig. 5) along with the thermodynamic equation: \[\left(\frac{\partial C_{p}}{\partial p}\right)_{T}=-T\left(\frac{\partial^{2 }V}{\partial T^{2}}\right)_{p}. \tag{2}\] In the transition region \(C_{p}\) was calculated as an average weighted according to the fraction of each phase. To take into account the dependence of the transition region on pressure, the overall \(C_{p}\) function at atmospheric pressure obtained in each phase and across the transition was extrapolated to higher temperatures according to the experimental value of \(\frac{dT}{dp}\Delta p\), where \(\Delta p\) is the pressure change applied in each particular case. The experimental measurement of \(C_{p}\) at atmospheric pressure and the calculated curves at different pressures are shown in Supplementary Fig. S2. The pressure dependence of \(S(T,p)\) was evaluated using the thermodynamic equation: \[S(T,p)=S(T,p_{0})-\int_{p_{0}}^{p}\left(\frac{\partial V}{\partial T}\right)_{ T,p^{\prime}}dp^{\prime}\, \tag{3}\] where \(p_{0}\) was selected equal to \(p_{\rm atm}=1\) bar. Here, we make use of the approximation \(\left(\frac{\partial V}{\partial T}\right)_{T,p}\simeq\left(\frac{\partial V}{ \partial T}\right)_{T,p_{0}}\), which is reasonable based on the \(\left(\frac{\partial V}{\partial T}\right)_{T,p}\) data obtained from the MD simulations (Fig. 5). Once the entropy function \(S(T,p)\) was determined for both heating and cooling runs independently (Supplementary Fig. S3), BC effects obtained upon first application or removal of the field were calculated as: \[\Delta S(T,p_{0}\to p_{1})=S(T,p_{1})-S(T,p_{0})\ {\rm and} \tag{4}\] \[\Delta T(T_{s},p_{0}\to p_{1})=T(S(T_{s},p_{0}),p_{1})-T_{s}\, \tag{5}\] where \(T_{s}\) is the starting temperature of the heating/cooling process. Here, it must be considered that for materials with \(\frac{dT}{dp}>0\), BC effects on compression (\(p_{0}=p_{\rm atm}\), \(p_{1}>p_{\rm atm}\)) and decompression (\(p_{0}>p_{\rm atm}\), \(p_{1}=p_{\rm atm}\)) are calculated from \(S(T,p)\) functions obtained on cooling and heating, respectively [9]. In turn, BC effects obtained reversibly on cyclic compression-decompression processes were calculated from the \(S(T,p)\) curves obtained on heating at atmospheric pressure and cooling at high pressure.
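As a rough illustration of how Eqs. (4) and (5) are applied once the isobaric entropy curves are available, the sketch below computes \(\Delta S\) and \(\Delta T\) from two \(S(T,p)\) curves sampled on a common temperature grid. The synthetic curves are placeholders (not the measured data), and the inversion of \(S(T,p_{1})\) assumes monotonically increasing entropies.

```python
import numpy as np

def barocaloric_changes(T, S_p0, S_p1):
    """Quasi-direct BC effects from isobaric entropy curves S(T, p0) and S(T, p1)
    on a common temperature grid T. Returns Delta S(T) (Eq. 4) and Delta T(T_s)
    (Eq. 5), the latter via numerical inversion of S(T, p1)."""
    dS = S_p1 - S_p0                                  # isothermal entropy change
    T_of_S_p1 = lambda s: np.interp(s, S_p1, T)       # assumes S_p1 strictly increasing
    dT = np.array([T_of_S_p1(s) for s in S_p0]) - T   # adiabatic temperature change
    return dS, dT

# toy illustration with synthetic entropy curves (placeholders, not measured data)
T = np.linspace(360.0, 440.0, 801)                    # K
step = lambda Tt: 200.0 / (1.0 + np.exp(-(T - Tt)))   # smeared first-order transition
S_p0 = 2.0 * (T - 360.0) + step(395.0)                # J K^-1 kg^-1 at p0
S_p1 = 2.0 * (T - 360.0) + step(405.0)                # transition shifted up by pressure
dS, dT = barocaloric_changes(T, S_p0, S_p1)
print(f"peak |dS| ~ {np.abs(dS).max():.0f} J/K/kg, peak |dT| ~ {np.abs(dT).max():.1f} K")
```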
### Simulation techniques _Molecular dynamics simulations_. Force-field based molecular dynamics (MD) simulations were performed using a previously reported interatomic potential for LCBH [31]. This force field is a combination of Coulomb-Buckingham (CB), harmonic bond, and angle-type potentials, namely: \[U(r,\theta)=U_{\rm CB}(r)+U_{\rm bond}(r)+U_{\rm angle}(\theta)\, \tag{6a}\] \[U_{\rm CB}(r)=\frac{q_{i}q_{j}}{4\pi\epsilon_{0}r}+A_{ij}\exp(-r/ \rho)-\frac{C_{ij}}{r^{6}}\,\] (6b) \[U_{\rm bond}(r)=\frac{1}{2}k_{r}(r-r_{0})^{2}\ {\rm and}\] (6c) \[U_{\rm angle}(\theta)=\frac{1}{2}k_{\theta}(\theta-\theta_{0})^{2}\, \tag{6d}\] where \(q_{i}\) denotes the charge of the ion labeled \(i\), \(\epsilon_{0}\) the vacuum permittivity, \(A_{ij}\) and \(\rho\) the short-range repulsive energy and length scales for the pairs of atoms \(ij\), and \(C_{ij}\) the corresponding dispersion interaction coefficient. \(r_{0}\) and \(\theta_{0}\) are an equilibrium bond distance and angle, respectively, and \(k_{r}\) and \(k_{\theta}\) the spring constants of the harmonic bond and angle potentials. The numerical value of these potential parameters can be found in the Supplementary Table S2. We performed \(NpT\)-MD simulations in the temperature range \(325\leq T\leq 525\) K at intervals of 12.5 K, and pressure range \(0\leq p\leq 0.15\) GPa at intervals of 0.025 GPa. The temperature and pressure in the system were controlled with thermostating and barostating techniques, in which some dynamic variables are coupled with the particle velocities and simulation box dimensions. The simulation supercell comprised a total of 6400 atoms. A time step of 0.5 fs was employed for integration of the atomic forces along with the velocity Verlet algorithm. A typical \(NpT\)-MD run lasted for about 2 ns and the atomic trajectories were stored at intervals of 500 fs. Detailed analyses and statistical time averages were performed over the last 1 ns of such simulations. To guarantee proper convergence of the estimated thermodynamic properties, in few instances longer simulation times of 10 ns were carried out. Periodic boundary conditions were applied along the three Cartesian directions and the Ewald summation technique was used for evaluation of the long-range Coulomb interactions with a short-range cut-off distance of 13 A. All the \(NpT\)-MD simulations were carried out with the LAMMPS software package [33]. _Density functional theory and ab initio molecular dynamics simulations._ First-principles calculations based on density functional theory (DFT) were performed to analyze the energy, structural and vibrational properties of bulk LCBH. The DFT calculations were carried out with the VASP code [34] by following the generalized gradient approximation to the exchange-correlation energy due to Perdew _et al._ (PBE) [35]. The projector augmented-wave method was used to represent the ionic cores [36], and the electronic states \(1s\)-\(2s\) Li, \(2s\)-\(2p\) C, \(2s\)-\(2p\) B and \(1s\) H were considered as valence. Wave functions were represented in a plane-wave basis truncated at 650 eV. By using these parameters and dense k-point grids for Brillouin zone integration, the resulting energies were converged to within 1 meV per formula unit. In the geometry relaxations, a tolerance of 0.005 eV A\({}^{-1}\) was imposed in the atomic forces. 
_Ab initio_ molecular dynamics (AIMD) simulations based on DFT were carried out to assess the reliability of the interatomic potential model employed in the MD simulations on the description of the vibrational degrees of freedom of bulk LCBH (Supplementary Fig. S6). The AIMD simulations were performed in the canonical ensemble \((N,V,T)\) considering constant number of particles, volume and temperature. The constrained volumes were equal to the equilibrium volumes determined at zero temperature, an approximation that has been shown to be reasonable at moderate temperatures [37]. The temperature in the AIMD simulations was kept fluctuating around a set-point value by using Nose-Hoover thermostats. A large simulation box containing 800 atoms was employed in all the simulations, and periodic boundary conditions were applied along the three Cartesian directions. Newton's equations of motion were integrated by using the customary Verlet's algorithm and a time-step length of \(\delta t=10^{-3}\) ps. \(\Gamma\)-point sampling for integration within the first Brillouin zone was employed in all the AIMD simulations. The AIMD simulations comprised long simulation times of \(\approx 200\) ps and temperatures in the range \(200\leq T\leq 500\) K. _Estimation of key quantities with MD simulations._ The mean square displacement of the lithium ions was estimated with the formula [38]: \[\mathrm{MSD_{Li}}(\tau) = \frac{1}{N_{\mathrm{ion}}\left(N_{\mathrm{step}}-n_{\tau}\right)}\times\] \[\sum_{i=1}^{N_{\mathrm{ion}}}\sum_{j=1}^{N_{\mathrm{step}}-n_{\tau }}|\mathbf{r}_{i}(t_{j}+\tau)-\mathbf{r}_{i}(t_{j})|^{2}\,\] where \(\mathbf{r}_{i}(t_{j})\) is the position of the migrating ion \(i\) at time \(t_{j}\) (\(=j\cdot\delta t\)), \(\tau\) represents a lag time, \(n_{\tau}=\tau/\delta t\), \(N_{\mathrm{ion}}\) is the total number of mobile ions, and \(N_{\mathrm{step}}\) the total number of time steps. The maximum \(n_{\tau}\) was chosen equal to \(N_{\mathrm{step}}/2\), hence we could accumulate enough statistics to reduce significantly the fluctuations in \(\mathrm{MSD_{Li}}(\tau)\) at large \(\tau\)'s. The diffusion coefficient of lithium ions was calculated with the Einstein's relation: \[D_{\mathrm{Li}}=\lim_{\tau\rightarrow\infty}\frac{\mathrm{MSD_{Li}}(\tau)}{6 \tau}\, \tag{8}\] by performing linear fits to the averaged \(\mathrm{MSD_{Li}}\) values calculated at long \(\tau\). The angular autocorrelation function of the molecular \([\mathrm{CB_{11}H_{12}}]^{-}\) anions was estimated using the expression [25]: \[\phi_{\mathrm{CBH}}(\tau)=\langle\hat{\mathbf{r}}(t)\cdot\hat{\mathbf{r}}(t+ \tau)\rangle\, \tag{9}\] where \(\hat{\mathbf{r}}\) is a unitary vector connecting the center of mass of each closoborane unit with one of its edges and \(\langle\cdots\rangle\) denotes statistical average in the \((N,p,T)\) ensemble considering all the molecular anions. This autocorrelation function typically decays as \(\propto\exp{[-\lambda_{\mathrm{CBH}}\cdot\tau]}\), where the parameter \(\lambda_{\mathrm{CBH}}\) represents a characteristic reorientational frequency. For significant anion reorientational motion, that is, large \(\lambda_{\mathrm{CBH}}\), the \(\phi_{\mathrm{CBH}}\) function decreases rapidly to zero with time. 
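A minimal sketch of how \(\mathrm{MSD_{Li}}(\tau)\) and the Einstein relation (Eq. 8) translate into code is given below; the trajectory array, lag times and random-walk test data are placeholders, and a production analysis would operate on the stored \(NpT\)-MD Li trajectories instead.

```python
import numpy as np

def msd(positions, lag_steps):
    """Mean square displacement averaged over ions and time origins.
    positions: unwrapped coordinates of shape (n_steps, n_ions, 3)."""
    n_steps = positions.shape[0]
    out = np.empty(len(lag_steps), dtype=float)
    for k, n_tau in enumerate(lag_steps):
        disp = positions[n_tau:] - positions[:n_steps - n_tau]   # all time origins
        out[k] = np.mean(np.sum(disp**2, axis=-1))
    return out

def diffusion_coefficient(tau, msd_values):
    """Einstein relation (Eq. 8): slope of a linear fit MSD(tau) ~ 6 D tau."""
    slope, _ = np.polyfit(tau, msd_values, 1)
    return slope / 6.0

# sanity check on a synthetic 3D random walk (arbitrary units)
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(0.0, 0.1, size=(4000, 8, 3)), axis=0)
lags = np.arange(200, 2001, 200)
D = diffusion_coefficient(lags.astype(float), msd(traj, lags))
print(f"D ~ {D:.2e}  (expected ~ {3 * 0.1**2 / 6:.2e} per step)")
```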
The temperature dependence of the lithium diffusion coefficient was assumed to follow an Arrhenius law at any pressure of the form: \[D_{\mathrm{Li}}(T)=D_{0}\cdot e^{-(\frac{E_{a}}{k_{B}T})}\, \tag{10}\] where \(D_{0}\) and \(E_{a}\) are parameters that depend on \(p\) and \(k_{B}\) represents the Boltzmann constant. The reorientational frequency of closoborane units, \(\lambda_{\mathrm{CBH}}\), was assumed to follow a similar dependence on temperature. The entropy of each phase was calculated as a function of temperature and pressure, \(S(p,T)\), by fully considering the vibrational, molecular orientational and ion diffusive degrees of freedom: \[S(p,T)=S_{\mathrm{vib}}(p,T)+S_{\mathrm{ori}}(p,T)+S_{\mathrm{diff}}(p,T). \tag{11}\] In the low-\(T\) phase, \(S_{\mathrm{ori}}\) and \(S_{\mathrm{diff}}\) are null while in the high-\(T\) phase are finite and positive. The vibrational density of states (VDOS), \(g(\omega)\), was calculated via the Fourier transform of the velocity-velocity autocorrelation function obtained directly from the \(NpT\)-MD simulations, namely: \[g(\omega)=\frac{1}{N_{ion}}\sum_{i}^{N_{ion}}\int_{0}^{\infty}\langle\mathbf{v }_{i}(\tau)\cdot\mathbf{v}_{i}(0)\rangle e^{i\omega\tau}d\tau\, \tag{12}\] where \(\mathbf{v}_{i}(t)\) represents the velocity of the atom labeled \(i\) at time \(t\), and \(\langle\cdots\rangle\) denotes statistical average in the \((N,p,T)\) ensemble. The vibrational entropy was subsequently estimated with the formula [39]: \[S_{\mathrm{vib}}(p,T) = -\int_{0}^{\infty}k_{B}\ln{\left[2\sinh{\left(\frac{\hbar\omega}{ 2k_{B}T}\right)}\right]}\hat{g}(\omega)d\omega+ \tag{13}\] \[\int_{0}^{\infty}\frac{\hbar\omega}{2T}\tanh^{-1}{\left(\frac{ \hbar\omega}{2k_{B}T}\right)}\hat{g}(\omega)d\omega\,\] where \(\hat{g}(\omega)\) is the normalized vibrational density of states (\(\int_{0}^{\infty}\hat{g}(\omega)d\omega=3N_{ion}\)) and the dependence on pressure (and also temperature) is implicitly contained in \(\hat{g}(\omega)\). The orientational entropy of the molecular anions, \(S_{\rm ori}\), was directly calculated from the angular probability density, \(\rho(\theta,\phi)\), like [40]: \[S_{\rm ori}(p,T)=-k_{B}\int_{0}^{\pi}\int_{0}^{2\pi}\rho(\theta,\phi)\ln\rho( \theta,\phi)\ d\theta d\phi\, \tag{14}\] where \(\rho(\theta,\phi)\) was obtained from the \(NpT\)-MD simulation runs in the form of average histograms (Fig. 6). The ion diffusive entropy difference was estimated at the phase transition points via equalization of the Gibbs free energies of the low-\(T\) (O) and high-\(T\) (D) phases, namely, \(G^{D}(p,T_{t})=G^{O}(p,T_{t})\), thus leading to the expression: \[\Delta S_{\rm diff}(p,T_{t})=\frac{\langle\Delta E\rangle}{T_{t}}+p\frac{ \langle\Delta V\rangle}{T_{t}}-\Delta S_{\rm vib}-\Delta S_{\rm ori}\, \tag{15}\] where \(\Delta X\equiv X^{D}-X^{O}\) and \(E\) represents the internal energy of the system. For any pressure, \(\Delta S_{\rm diff}\) was assumed to be constant at temperatures \(T_{t}\leq T\).
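For completeness, a small numerical sketch of Eq. (13) is given below; it evaluates the harmonic vibrational entropy for a toy Einstein-like VDOS. The \(\tanh^{-1}\) in Eq. (13) is read here as \(\coth=1/\tanh\) (the standard harmonic-oscillator expression), and the Gaussian VDOS is a placeholder for the \(\hat{g}(\omega)\) extracted from the velocity autocorrelation function.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J K^-1

def s_vib(omega, g, T):
    """Vibrational entropy from a VDOS g(omega) normalized to 3*N_ion (Eq. 13),
    with tanh^-1 read as coth; omega in rad/s, result in J/K per simulation cell."""
    x = HBAR * omega / (2.0 * KB * T)
    integrand = (HBAR * omega / (2.0 * T)) / np.tanh(x) - KB * np.log(2.0 * np.sinh(x))
    return np.trapz(integrand * g, omega)

# toy check: a single Einstein mode at 10 meV carrying 3 oscillators (N_ion = 1)
omega0 = 0.010 * 1.602176634e-19 / HBAR
omega = np.linspace(0.2, 3.0, 4000) * omega0
sigma = 0.01 * omega0
g = 3.0 * np.exp(-(omega - omega0)**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
print(f"S_vib ~ {s_vib(omega, g, T=400.0) / KB:.2f} k_B")   # ~ 6.7 k_B for this toy mode
```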
2303.09989
Finding Competence Regions in Domain Generalization
We investigate a "learning to reject" framework to address the problem of silent failures in Domain Generalization (DG), where the test distribution differs from the training distribution. Assuming a mild distribution shift, we wish to accept out-of-distribution (OOD) data from a new domain whenever a model's estimated competence foresees trustworthy responses, instead of rejecting OOD data outright. Trustworthiness is then predicted via a proxy incompetence score that is tightly linked to the performance of a classifier. We present a comprehensive experimental evaluation of existing proxy scores as incompetence scores for classification and highlight the resulting trade-offs between rejection rate and accuracy gain. For comparability with prior work, we focus on standard DG benchmarks and consider the effect of measuring incompetence via different learned representations in a closed versus an open world setting. Our results suggest that increasing incompetence scores are indeed predictive of reduced accuracy, leading to significant improvements of the average accuracy below a suitable incompetence threshold. However, the scores are not yet good enough to allow for a favorable accuracy/rejection trade-off in all tested domains. Surprisingly, our results also indicate that classifiers optimized for DG robustness do not outperform a naive Empirical Risk Minimization (ERM) baseline in the competence region, that is, where test samples elicit low incompetence scores.
Jens Müller, Stefan T. Radev, Robert Schmier, Felix Draxler, Carsten Rother, Ullrich Köthe
2023-03-17T14:04:51Z
http://arxiv.org/abs/2303.09989v3
# Finding Competence Regions in Domain Generalization ###### Abstract We propose a "learning to reject" framework to address the problem of silent failures in Domain Generalization (DG), where the test distribution differs from the training distribution. Assuming a mild distribution shift, we wish to accept out-of-distribution (OOD) data whenever a model's estimated competence foresees trustworthy responses, instead of rejecting OOD data outright. Trustworthiness is then predicted via a proxy _incompetence score_ that is tightly linked to the performance of a classifier. We present a comprehensive experimental evaluation of incompetence scores for classification and highlight the resulting trade-offs between rejection rate and accuracy gain. For comparability with prior work, we focus on standard DG benchmarks and consider the effect of measuring incompetence via different learned representations in a closed versus an open world setting. Our results suggest that increasing incompetence scores are indeed predictive of reduced accuracy, leading to significant improvements of the average accuracy below a suitable incompetence threshold. However, the scores are not yet good enough to allow for a favorable accuracy/rejection trade-off in all tested domains. Surprisingly, our results also indicate that classifiers optimized for DG robustness do not outperform a naive Empirical Risk Minimization (ERM) baseline in the competence region, that is, where test samples elicit low incompetence scores. ## 1 Introduction Although modern deep learning exhibits excellent generalization, it is prone to silent failures when the actual data distribution differs from the distribution during training (Sanner et al., 2021; Yang et al., 2021). We address this problem in a "learning to reject" framework (Hendrickx et al., 2021; Zhang et al., 2023): _Given a pre-trained model and potentially problematic data instances, can we determine if the model's responses are still trustworthy?_ In answering this question, we consider the case where the distribution shift is mild (e.g., from one hospital to the next), so that the model remains competent on many instances in spite of the shift. Thus, we do not want to reject all out-of-distribution (OOD) instances outright, but only those for which the estimated model competence falls below some acceptance threshold. In line with previous research on _Domain Generalization_(DG; Gulrajani Lopez-Paz, 2020), we assume that we do not possess data beyond the training and validation sets. Therefore, we can neither determine out-of-domain competence directly, nor define the acceptance threshold in a Bayes-optimal way (Chow, 1970). Instead, we investigate proxy scores that are negatively correlated with competence: We call them _incompetence scores_ and they should monotonically decrease as the model accuracy increases (see Section3). For a simple example of such a score, we may consider the distance of a new data point to the nearest neighbor of the training data in the model's learned feature space. In this case, we expect the performance to drop with increasing distance. Interestingly, our experiments demonstrate that the monotonicity property typically holds for well-known choices of these scores. To define a plausible trade-off between accuracy and acceptance rate without out-of-domain data, we decided to place the acceptance threshold so that 5% of the training distribution are rejected (see Figure1). 
In comparison to a naive method not rejecting any data, this threshold considerably improves the accuracy on the accepted subset from the shifted domain. Ideally, the threshold would define a competence region where the accuracy is indistinguishable from in-distribution (ID) data, but even the best incompetence scores do not yet achieve this goal on all data sets and domains, calling for further research. Our notion of incompetence also differs from established OOD detection methods (Yang et al., 2021) in another crucial aspect: It is always relative to a specific _task_. The following example illustrates why this is important: A robust classifier should be able to ignore noise and recognize correct labels even for noisy data. In contrast, a system monitoring the health of the data acquisition hardware may find crucial clues in the noise and should not ignore it. Thus, _competence depends on context-specific data characteristics and cannot be defined in absolute terms_. In this work, we focus solely on classification tasks, but our method should likewise apply to other tasks such as regression. Furthermore, we present a comprehensive experimental evaluation of incompetence scores. For comparability with prior work, we focus on standard data sets from the domain generalization literature (Gulrajani and Lopez-Paz, 2020) and consider the closed vs. open world setting (i.e., new appearances of known classes vs. hitherto unknown classes) and the effect of measuring incompetence through different data representations. We investigate whether state-of-the-art classifiers (SOTA) that are optimized specifically for domain shift robustness exhibit more accurate competence regions than naively trained ones. Moreover, we investigate whether it is possible to estimate an incompetence threshold, such that a classifier can recover its ID accuracy in the corresponding competence region under domain shift. In summary, we make the following contributions: 1. We show experimentally that accuracy decreases as incompetence scores increase and highlight the resulting trade-offs between rejection rate and accuracy gain. 2. We find that both feature- and logit-based scores are competitive in the closed world, whereas feature-based approaches work best in the open world setting. 3. We propose an approach to determine an incompetence threshold from ID data and demonstrate its utility for most domain shifts considered in this work. 4. We observe that robust classifiers do not outperform a naive baseline in terms of generalization performance in the elicited competence regions. Figure 1: The main principle behind _incompetence scores_ for improved domain generalization: We reject instances above the incompetence threshold, which is located at the 95% quantile of the training distribution. ## 2 Related Work ### OOD Detection and Generalization Dealing with anomalous (i.e., out-of-distribution; OOD) instances that differ from the ones contained in the training set (i.e., our proxy for the in-distribution; ID) is a widely discussed and conceptually overloaded topic in the machine and statistical learning literature (Han et al., 2022; Shen et al., 2021; Yang et al., 2022; Yang et al., 2021). 
OOD detection addresses the problem of flagging unusual data points which could undermine the reliability of machine learning systems (Yang et al., 2021); OOD generalization addresses the need to make predictions even when the test distribution is completely unknown or known to be different from the training distribution (Shen et al., 2021). In this work, we are interested in analyzing established domain-robust classifiers. Thus, we focus on OOD detection methods that do not modify the classifier architecture or training. Such methods are called _post-hoc_ scores (Yang et al., 2021), as they do not intervene on the downstream classifier, but only "post-process" its feature or logit space. Post-hoc OOD scores have been shown to perform well across a variety of OOD detection benchmarks (Yang et al., 2022). Previous work analyzed post-hoc OOD detection scores to predict the accuracy of a classifier on novel inputs (Techapanurak and Okatani, 2021) or to detect ID failure cases (Xia and Bouganis, 2022). In addition, Techapanurak and Okatani (2021) compute an aggregated OOD score over an entire ID data set to predict the global accuracy of a classifier on OOD data. In contrast, we aim to predict the likelihood of error from individual incompetence score values and show that this approach provides us with a finer control over the trade-off between coverage and accuracy (see Section 4.5). Despite the large volume of literature focusing on OOD detection and generalization, there are no extensive studies applying OOD scores to domain generalization benchmarks. Thus, one of the main goals of this work was to provide such a comprehensive analysis of the utility of OOD scores for improving domain generalization.

### Domain Generalization

The goal of domain generalization (DG) is to train models that generalize well under covariate shifts (Zhou et al., 2022), such as adversarial attacks (Goodfellow et al., 2014) or style changes (Gatys et al., 2016), for which the label space remains unchanged during testing (Yang et al., 2021). In DG settings, we assume that we have access to different environments or data sets (e.g., art and sketch images) and the goal is to make good predictions in completely unknown environments (e.g., real world images).

Figure 2: An incompetence score is able to sort out-of-distribution (OOD) images from the PACS data set, so that higher incompetence scores result in lower classification accuracy. _(Left)_ Example images from the training domains. _(Right)_ Images from the test domains resulting in lowest and highest incompetence scores (using a Deep-KNN scoring function) in the feature space of a baseline ERM classifier. Green and red frames denote correctly and incorrectly classified images, respectively. Higher incompetence scores correlate with a decrease in the classifier’s accuracy.

Compared to Domain Adaptation (DA; Wang and Deng, 2018), where we have unlabeled data from the test domain, the DG problem assumes that we have no knowledge about the test domain(s). Moreover, it has been shown that classifiers can assign high likelihoods under domain shift even when they are plainly wrong, which makes it hard to detect failure cases (Nalisnick et al., 2019; Nguyen et al., 2015). Thus, proxy "incompetence" OOD scores appear to be good candidates for spotlighting such failures. However, to the best of our knowledge, there are no extensive studies which attempt to quantify the competence of domain-robust models.
Many benchmark data sets in DG have been established, on which researchers can study generalization performance beyond a single training environment (Gulrajani and Lopez-Paz, 2020; Koh et al., 2021). In this work, we consider the main data sets contained in the DomainBed benchmark (Gulrajani and Lopez-Paz, 2020). We additionally distinguish between a _closed world_ setting, where only instances of known classes are encountered in the test domain, and an _open world_ setting, where instances of unknown classes are also present in the test domain. We believe the open world setting to be of practical interest, even though typical DG problems are formulated under a closed world assumption (Zhou et al., 2022). Some DG methods explicitly exploit domain labels in order to train robust classifiers (Arjovsky et al., 2019; Muller et al., 2021; Shi et al., 2021). However, it is not clear which DG methods can achieve consistently robust performance across different data sets. On the one hand, it has been suggested that a strong standard classifier trained with empirical risk estimation (ERM) performs favorably across multiple DG data sets (Gulrajani and Lopez-Paz, 2020). On the other hand, some DG methods have been shown to outperform an ERM baseline on several benchmark data sets (Koh et al., 2021). Here, we complement the existing literature by examining whether the competence regions of different DG classifiers differ in terms of the achieved improvements in accuracy. ### Selective Classification Inference with a reject option (aka _selective classification_, El-Yaniv et al., 2010; Geifman and El-Yaniv, 2017) enables classifiers to refrain from making a prediction under ambiguous or novel conditions (Hendrickx et al., 2021). Moreover, Zhang et al. (2023) outline the three main reasons why a reject option could be a reasonable choice: 1) failure cases; 2) unknown cases; and 3) fake inputs. For instance, Kamath et al. (2020) train natural language processing (NLP) models for selective question answering under domain shift. Varshney et al. (2022) investigate the utility of MaxProb (a common OOD detection score) as a rejection criterion across several NLP data sets. Ren et al. (2022) use the Mahalanobis distance as OOD detection method to filter inputs to NLP models for conditional text generation and Mesquita et al. (2016) showcase the reject option for catching software defects. The main challenge selective classifiers face is how to reduce the error rate by "rejecting" instances for which no reliable prediction can be made, while keeping coverage (i.e., the number of "accepted" instances) as high as possible (Chow, 1970; Nadeem et al., 2009). And while the theoretical characteristics of the resulting trade-off have been systematically studied (El-Yaniv et al., 2010; Wiener and El-Yaniv, 2011), the empirical utility of OOD "rejection scores" for ensuring robust performance in the DG setting remains largely unclear. In this work, we perform an extensive evaluation of this trade-off across a wide variety of state-of-the-art OOD scores, domain-robust classifiers, DG data sets and environments. ## 3 Method ### Notation We denote with \(c_{\theta}\) an arbitrary classifier with a vector of trainable parameters \(\theta\) (e.g., neural network weights) which we typically suppress for readability. To evaluate a classifier, we consider its accuracy, which we denote as \(\mathrm{A_{dist}}\), based on queries from some reference distribution \(x\sim p_{\mathrm{dist}}(x)\). 
### Incompetence Scores The goal of an incompetence score \(s_{c}\colon\mathbb{R}^{D}\to\mathbb{R}\) is to indicate whether a classifier \(c\) is familiar with some input \(x\in\mathcal{X}\). We consider familiarity with the input to be equivalent to competence. The fundamental principle in this work is that instances eliciting a high incompetence score are intrinsically hard to predict and _vice versa_. Due to the close conceptual connection between competence and familiarity or incompetence and OOD, we employ OOD scores as proxy for incompetence. ### Admissible Incompetence Scores Detecting out-of-competence means checking whether some given incompetence score \(s_{c}(x)\in\mathbb{R}\) falls below some threshold \(\alpha\) (classified as in-competence) or above (classified as out-of-competence). We consider scores \(s_{c}(x)\) that depend on the classifier \(c\) and the input \(x\) at hand. The threshold \(\alpha\) trades off accuracy (how well does the classifier perform on accepted data) with coverage (how many samples does the score accept). In this section, we describe how a useful (ideal) incompetence score should affect downstream classification as a function of the threshold \(\alpha\). In particular, consider the subset of input space where the classifier is deemed competent given a fixed threshold \(\alpha\). It is the region of input space with incompetence score less than \(\alpha\): \[X_{c}(\alpha):=\{x:s_{c}(x)\leq\alpha\}. \tag{1}\] We use the ID data to determine a suitable threshold for the competence region, for instance we later pick \(\alpha=\alpha_{95}\) such that 95% of the ID data is in \(X_{c}(\alpha_{95})\). We consider the performance of the classifier \(c\) restricted to the competence region \(X_{c}(\alpha)\) as a function of \(\alpha\). Therefore, we filter out unseen OOD data \(p_{\text{OOD}}\) and keep samples with incompetence scores lower than \(\alpha\) to compute the resulting accuracy \(\text{A}_{\text{OOD}}(\alpha)\) of our classifier. For low \(\alpha\approx\min_{x\sim p_{\text{OOD}}}s_{c}(x)\), we expect the competence region to contain samples \(x\) which the classifier can recognize even under a distribution shift. As \(\alpha\) increases, the classifier faces samples beyond its competence. For \(\alpha>\max_{x\sim p_{\text{OOD}}}s_{c}(x)\), we evaluate the classifier on the entire OOD test data. We summarize the above description in the fundamental principle of this work: **An admissible incompetence score must assign low incompetence to those regions where the downstream accuracy is high.** In contrast, OOD detection is agnostic to the downstream task and focuses merely on flagging anomalous inputs. To show some fundamental properties of the downstream accuracy \(\text{A}_{\text{OOD}}(\alpha)\) as a function of \(\alpha\), we formalize this principle as follows: **Definition 3.1**.: _An incompetence score \(s_{c}(x)\) is called admissible if the downstream accuracy \(\text{A}_{\text{OOD}}(\alpha)\) decreases monotonically as \(\alpha\) is increased for any distribution of interest._ Figure 3: The accuracy of the ERM classifier on OOD data \(\text{A}_{\text{OOD}}(\alpha)\) as the competence region is enlarged by increasing the incompetence threshold \(\alpha\). As predicted in Proposition3.1, the accuracy starts off \(\text{A}_{\text{OOD}}(\alpha)\geq\text{A}_{\text{ID}}\) and then falls off monotonically with \(\alpha\). At same time, the fraction of data the classifier is applied to increases. 
The classifier accuracy and fraction of considered data can easily be traded off using this figure. This monotonic trend requires that the incompetence score \(s_{c}(x)\) is closely related to the performance of the classifier. Such a connection allows us to make strong predictions on the downstream accuracy as a function of \(\alpha\): **Proposition 3.1**.: _Given a classifier \(c_{\theta}(x)\) and its corresponding in-distribution \(p_{\mathrm{ID}}\). Then, for a test distribution of interest \(p_{\mathrm{OOD}}\) and a corresponding admissible score \(s_{c}(x)\) as in Definition3.1:_ 1. _If there is a threshold_ \(\alpha^{*}\in\mathbb{R}\) _such that for all_ \(\alpha\leq\alpha^{*}\) _ID and OOD have the same support and classification accuracy, that is_ \(X_{c}(\alpha)\cap\mathrm{supp}(p_{\mathrm{ID}})=X_{c}(\alpha)\cap\mathrm{supp }(p_{\mathrm{OOD}})\) _and if_ \(\mathrm{A}_{\mathrm{ID}}(\alpha)=\mathrm{A}_{\mathrm{OOD}}(\alpha)\)_, then,_ \(\mathrm{A}_{\mathrm{OOD}}(\alpha)\geq\mathrm{A}_{\mathrm{ID}}\) _for_ \(\alpha<\alpha^{*}\)_._ 2. _In the limit of_ \(\alpha\to\infty\)_, we find that_ \(\mathrm{A}_{\mathrm{OOD}}(\alpha)\to\mathrm{A}_{\mathrm{OOD}}\)_._ The first statement states that we expect a classifier's accuracy on OOD test data \(x\sim p_{\mathrm{OOD}}(x)\) restricted to the corresponding competence region \(X_{c}(\alpha)\) to be at least as good as its expected accuracy on ID data \(x\sim p_{\mathrm{ID}}(x)\) if \(\alpha\) is sufficiently small. The additional assumption in (a) requires that the classifier has the same competence for ID and OOD data within the high competence region \(X_{c}(\alpha^{*})\). In that case, we can even surpass the ID accuracy, which we observe in practice (see Section4.2). The second statement states that as \(\alpha\) increases, we attain the expected performance of the classifier on the OOD data. Between these two extremes, the accuracy decreases monotonically by Definition3.1. Thus, monotonicity describes the expected behavior of the OOD accuracy \(\mathrm{A}_{\mathrm{OOD}}(\alpha)\) as a function of the incompetence threshold \(\alpha\). We give the proof in AppendixA.6. ## 4 Experiments In our experiments, we analyze the effect of an incompetence threshold \(\alpha\) (see Section3.3) on DG performance. In the following, we first describe our experimental protocol. Then, we analyze the competence region in dependence on the threshold \(\alpha\) and show that the competence region behaves as predicted in Proposition3.1. Finally, we carry out an extensive investigation of the competence region for closed and open world settings, where we show the utility of the concept for various incompetence scores and point out current weaknesses. As an introductory example to the competence region, we consider Figure2 which depicts the experimental procedure on the PACS data set for a standard classifier trained with Empirical Risk Minimization (ERM; Vapnik, 1999). We train the classifier on the domains Art, Photo, and Sketch, and apply the trained classifier in the unknown Cartoon domain. The samples in the test domain are ordered by the predicted incompetence score \(s_{c}(x)\). As expected, the classifier still performs well on Cartoon samples with low incompetence scores (9 out of 9 classified correctly in the example), but the accuracy drops for high scores (only 2 out of 9 correct classification). Qualitatively, the score correctly notices that images with significantly different characteristics are much harder to classify. 
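Before moving to the full experimental protocol, the sketch below makes the threshold construction of Section 3.3 concrete: \(\alpha_{95}\) is the 95% quantile of the incompetence scores on ID validation data, and \(\mathrm{A_{OOD}}(\alpha_{95})\) is the accuracy on the OOD samples that fall inside \(X_{c}(\alpha_{95})\), together with the corresponding coverage. All arrays here are synthetic placeholders; in practice they come from a trained classifier and one of the post-hoc incompetence scores used in the experiments.

```python
import numpy as np

def accuracy_in_competence_region(scores_ood, correct_ood, alpha):
    """A_OOD(alpha) and coverage on X_c(alpha) = {x : s_c(x) <= alpha} (Eq. 1)."""
    inside = scores_ood <= alpha
    acc = correct_ood[inside].mean() if inside.any() else float("nan")
    return acc, inside.mean()

# synthetic placeholder scores and correctness labels
rng = np.random.default_rng(0)
scores_id = rng.normal(0.0, 1.0, size=2000)                         # ID validation scores
scores_ood = rng.normal(1.0, 1.5, size=1000)                        # OOD test scores
correct_ood = rng.random(1000) < 1.0 / (1.0 + np.exp(scores_ood))   # errors grow with score

alpha_95 = np.quantile(scores_id, 0.95)        # threshold rejecting 5% of ID data
acc, coverage = accuracy_in_competence_region(scores_ood, correct_ood, alpha_95)
print(f"alpha_95 = {alpha_95:.2f}, A_OOD(alpha_95) = {acc:.2f}, coverage = {coverage:.2f}")
```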
In the following sections, we quantify this behavior systematically for a number of different classifiers, incompetence scores, data sets, and domain. But first, we give details on our experimental setup. ### Experimental Setup We consider all combinations of nine pre-trained classifiers \(c_{\theta}(x)\), varying both in architecture and training, nine OOD post-hoc scores \(s_{c}(x)\) as incompetence scores on a total of 32 DG tasks from six different DG data sets. The pre-trained classifiers are obtained as follows. We train various state-of-the-art classifiers from DG literature, namely Fish (Shi et al., 2021), GroupDRO (Sagawa et al., 2019), SD (Pezeshki et al., 2021), SagNet (Nam et al., 2021), Mixup (Yan et al., 2020) and VREx (Krueger et al., 2021). Furthermore, we train three different neural network architectures with empirical-risk-minimization (Vapnik, 1999): A ResNet based architecture which we denote by ERM (He et al., 2016), a Vision Transformer (Dosovitskiy et al., 2020) and a Swin Transformer (Liu et al., 2021). Training details and hyperparameter settings are listed in AppendixA.5. These models are trained on six domain generalization data sets from the DomainBed repository (Gulrajani and Lopez-Paz, 2020): PACS (Li et al., 2017), OfficeHome (Venkateswara et al., 2017), VLCS (Fang et al., 2013), TerraIncognita (Beery et al., 2018), DomainNet (Peng et al., 2019) and SVIRO (Cruz et al., 2020). Each DG data set consists of four to ten different domains from which we construct different DG tasks: We train a classifier on all but one domain. The one left out during training is then the OOD test domain where the competence region is evaluated. As an example consider the DG task behind the earlier example in Figure 2: If we train a model on the domains Photos, Art images, and Sketches, the DG task asks for an accurate model on the domain Cartoons which constitute the OOD test domain (see Figure 2). Overall we consider 32 DG tasks which result in 288 trained networks. On each of these models we compute various incompetence scores via a number of _post-hoc_ methods. The _post-hoc_ methods used in this paper can be grouped into the following categories: * Feature-based: Virtual-logit Matching (ViM; Wang et al., 2022), Deep-KNN (Deep-KNN; Sun et al., 2022); * Density-based: Gaussian mixture models (GMM), minimum Mahalanobis distance between features and class-wise centroids (Mahalanobis; Lee et al., 2018); * Reconstruction-based: reconstruction error of PCA in feature space (Aggarwal, 2017); * Logit-based: energy score (Energy; Liu et al., 2020), maximum logit (Logit; Hendrycks et al., 2019), maximum softmax (Softmax; Hendrycks and Gimpel, 2016), and energy-react (Energy-React; Sun et al., 2021). Note, that we interpret higher scores as indicative of incompetence (e.g., we consider the negative of the maximum softmax and the maximum logit). For each DG task, we distinguish four data sets. For the ID distribution, we consider a training set, a validation set for hyperparameter optimization, and a test set that has no influence on the optimization process for the subsequent evaluation. The classifiers are trained on the ID training set. We compute the scores for the ID distribution on the ID validation set and the ID accuracy on the ID test set. The OOD test set is given by the DG task (e.g., as in Figure 2). After training, we apply all post-hoc methods to the penultimate feature layer or the ouput (logits) layer of the classifier, as is typical in the OOD detection literature. 
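As an illustration of one such post-hoc method, the following brute-force sketch implements a Deep-KNN-style incompetence score on penultimate-layer features (distance to the \(k\)-th nearest ID training neighbor after \(\ell_{2}\) normalization). The feature arrays and the choice \(k=50\) are placeholders, and the exact recipe of Sun et al. (2022) may differ in details.

```python
import numpy as np

def deep_knn_score(feats_train, feats_test, k=50):
    """Deep-KNN style incompetence score: distance to the k-th nearest ID training
    neighbor in L2-normalized penultimate feature space; larger = less competent."""
    a = feats_train / np.linalg.norm(feats_train, axis=1, keepdims=True)
    b = feats_test / np.linalg.norm(feats_test, axis=1, keepdims=True)
    d = np.sqrt(np.maximum(0.0, 2.0 - 2.0 * b @ a.T))   # pairwise Euclidean on the sphere
    return np.sort(d, axis=1)[:, k - 1]                 # k-th smallest distance per test point

# placeholder features; real inputs are penultimate-layer activations of a classifier
rng = np.random.default_rng(0)
feats_train = rng.normal(size=(5000, 512))
feats_test = rng.normal(loc=0.3, size=(10, 512))
print(deep_knn_score(feats_train, feats_test, k=50))
```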
If the post-hoc method needs to fit the data (as for instance with GMMs), we fit the score function on the ID training data. ### Competence Threshold In this section, we analyze the performance of the classifiers as a function of the threshold \(\alpha\) which determines their competence region (see Equation 1 in Section 3). To this end, we compute the incompetence scores on the ID validation data set and on all OOD data samples. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{In Percentages (\%)} & \multicolumn{3}{c|}{PACS} & \multicolumn{3}{c|}{OfficeHome} & \multicolumn{3}{c|}{VLCS} \\ \cline{2-13} & OOD-Gain \(\uparrow\) & ID-Gain & Coverage \(\uparrow\) & OOD-Gain \(\uparrow\) & ID-Gain \(\uparrow\) & Coverage \(\uparrow\) & OOD-Gain \(\uparrow\) & OOD-Gain \(\uparrow\) & Coverage \(\uparrow\) \\ \hline \hline Deep-KNN & 1-[1-18] & **0** [2-12] & 66 [9-59] & 8 [3-16] & 14 [3-21] & 8 [6-94] & **2** [0-5] & **-9** [2-17.4] & 8 [72-59] \\ ViM & 9 [1-19] & **0** [1-17] & 66 [9-59] & 5 [2-14] & 14 [3-21] & 8 [6-95] & **2** [0-5] & **-8** [2-14] & 85 [2-99] \\ Softmax & 7 [1-14] & -4 [1-15] & 8 [6-59] & 8 [3-15] & 12 [3-12] & 8 [4-95] & 2 [0-4] & -10 [2-73] & 93 [8-79] \\ Logit & 9 [1-12] & -3 [12-5] & 80 [9-16] & **9** [2-16] & -13 [13-30] & 81 [6-96] & **2** [0-5] & -10 [2-74] & 92 [3-89] \\ Energy & 9 [1-12] & -3 [1-25] & 79 [6-19] & 8 [2-16] & -14 [3-30] & 82 [6-96] & **2** [0-4] & -10 [2-74] & 93 [8-29] \\ Energy-React & 9 [1-12] & -3 [1-25] & 79 [6-96] & 8 [2-16] & -14 [3-30] & 82 [6-96] & 2 [0-4] & -10 [2-74] & 93 [8-29] \\ Mahalanobis & 1 [1-12] & -8 [2-4] & 80 [5-96] & 1 [0-97] & -17 [4-12] & 91 [7-95] & 0 [1-43] & -11 [-28] & 93 [7-39] \\ GMM & 2 [0-13] & -8 [2-21] & 76 [5-96] & 0 [0-7] & -18 [4-20] & 92 [76-95] & 0 [1-13] & -12 [2-84] & 85 [33-99] \\ PCA & 1 [1-110] & -12 [4-22] & 78 [7-97] & 0 [0-7] & 18 [1-42] & 93 [78-96] & 0 [1-12] & -17 [28-14] & 88 [6-49] \\ \hline \multicolumn{13}{|c|}{Terra Imagen} & \multicolumn{3}{c|}{DomainNet} & \multicolumn{3}{c|}{SVM} \\ \cline{2-13} & OOD-Gain \(\uparrow\) & ID-Gain \(\uparrow\) & Coverage \(\uparrow\) & OOD-Gain \(\uparrow\) & ID-Gain \(\uparrow\) & Coverage \(\uparrow\) & OOD-Gain \(\uparrow\) & ID-Gain \(\uparrow\) & Coverage \(\uparrow\) \\ \hline Deep-KNN & **32** [2-15] & -8 [3-46] & 37 [1-35] & 4 [0-6] & -6 [4-50] & 8 [5-96] & 4 [1-28] & 0 [1-40] & 28 [8-64] \\ ViM & **28** [2-15] & -13 [3-35] & 41 [-57] & 2 & 0 [-8] & -8 [5-80] & 90 [68] & 90 [68] & 4 [1-30] & **0** [0-6] & 19 [6-62] \\ Softmax & 4 [1-12] & -38 [5-24] & 85 [9-80] & 3 [0-5] & -8 [2-52] & 94 [77-98] & 4 [0-24] & **0** [1-10] & 60 [28-88] \\ Logit & 5 [1-18] & -34 [5-25] & 85 [9-80] & 2 [0-5] & -8 [5-14] & 53 [77] & 2 [0-21] & 9 [1-94] & 67 [40-57] \\ Energy & 5 [1-19] & -38 [5-25] & 85 [9-80] & 2 [0-5] & -9 [1-51] & 94 [78-98] & 2 [2-21] & **0** [2-24] & 67 [40-87] \\ Energy-React & 5 [1-19] & -38 [5-25] & 85 [9-80] & 2 [0-5] & -9 [1-51] & 94 [78-98] & 2 [2-21] & **0** [2-24] & 67 [41-98] \\ Mahalanobis & 5 [1-38] & -38 [5-66] & 16 [2-79] & -1 [3-35] & -11 [1-53] & 92 [79-97] & 2 [1-28] & **0** [1-19] & 20 [5-96] \\ GMM & 7 [1-38] & -28 [5-10] & 56 [7-84] & -1 [3-4] & -12 [53-3] & 93 [79-98] & 3 [1-28] & **0** [1-14] & 20 [5-67] \\ PCA & 1 [1-120] & -38 [5-324] & 87 [35-99] & -1 [3-2] & -12 [53-2] & 92 [84-98] & 0 [1-15] & -2 [36-40] & 87 [47-96] \\ \hline \end{tabular} \end{table} Table 1: Accuracy on competence region of OOD domain for different domain generalization data sets and incompetence scores. 
As the threshold for the competence regions, we choose the 95% percentile of the ID validation set. For all metrics, a higher value means better performance (\(\uparrow\)). All displayed values are medians over different domain roles and classifiers, brackets indicate 90% confidence interval. Figure 3 depicts the resulting score distributions and accuracy \(\mathrm{A_{OOD}}(\alpha)\) as a function of the threshold \(\alpha\) for a single classifier (ERM). Here, we consider four incompetence scores on one of the DG tasks provided by PACS and TerraIncognita, respectively. We find that the considered incompetence scores fulfill the requirement for a competence detector in Definition3.1 that the accuracy must decrease monotonically as the threshold \(\alpha\) increases. We then find the theoretical results in Section3 confirmed: For low \(\alpha\), the accuracy \(\mathrm{A_{OOD}}(\alpha)\) is high, and even exceeds the average accuracy on the ID data \(\mathrm{A_{ID}}\) (see Proposition3.1 (a)). It eventually decreases until \(\mathrm{A_{OOD}}(\alpha)\rightarrow\mathrm{A_{OOD}}\) for large \(\alpha\) (see Proposition3.1b). Figure 3 also depicts the fraction of ID and OOD data that is considered (i.e., not rejected) as we increase the incompetence threshold \(\alpha\): \[\mathrm{Coverage}_{\mathrm{dist}}(\alpha)=\frac{|\mathcal{D}_{\mathrm{dist}} \cap X_{c}(\alpha)|}{|\mathcal{D}_{\mathrm{dist}}|}. \tag{2}\] For instance, we can compare the methods at the \(\alpha_{95}\) which includes 95% of the ID data (vertical gray line in Figure 3). Here, Logit keeps a significantly larger fraction of test data compared to the other incompetence scores. However, this results in a lower accuracy in the competence region \(\mathrm{A_{OOD}}(\alpha_{95})\). Unfortunately, due to the nature of the DG problem, the accuracy curve \(\mathrm{A_{OOD}}(\alpha)\) in Figure 3 is not accessible during inference, which makes it difficult to choose a suitable threshold \(\alpha\). In Figure 3 we can observe that ViM and KNN achieve at the 95% percentile (w.r.t. the ID validation set) an accuracy that is comparable to the ID accuracy, rendering the predictions in this competence region very accurate and trustworthy. GMM and Logit obtain very high accuracies in the competence region \(X_{c}(\alpha)\cap\mathcal{D}_{\mathrm{OOD}}\) for small \(\alpha\) values, but exhibit a larger drop in accuracy at the 95% percentile (w.r.t. the ID distribution). We show the accuracies \(\mathrm{A_{OOD}}(\alpha)\) for different threshold values \(\alpha\) for all data sets and DG tasks in AppendixA.2. ### Extensive Survey In the following, we evaluate all nine incompetence scores on all six DG data sets using the nine classifiers. Since each data set features 32 different DG tasks, we perform a total of \(32\cdot 9\cdot 9=2592\) experiments. For each experiment, we obtain accuracy curves as in Figure2 as a function of \(\alpha\). To summarize and compare the performance of each score on each data set, we need to deal with the trade-off between accuracy and coverage. Thus, we measure accuracy \(\mathrm{A_{OOD}}(\alpha_{95})\) at the score \(\alpha_{95}\), such that 95% of ID validation data fall below this threshold, that is \(\mathrm{Frac}_{\mathrm{ID}}(\alpha_{95})=95\%\). As mentioned in Section4.2, choosing \(\alpha\) in DG is notoriously difficult, since we have no access to the test domain(s) during training. The following quantities provide useful summary statistics for comparing our results across all experiments: 1. 
OOD-Gain = \(\mathrm{A_{OOD}}(\alpha_{95})-\mathrm{A_{OOD}}\): The performance gain in the OOD domain by considering only the data in the competence region \(X_{c}(\alpha_{95})\). 2. ID-Gain = \(\mathrm{A_{OOD}}(\alpha_{95})-\mathrm{A_{ID}}\): Expresses the performance gap between the accuracy on OOD data in the competence region \(X_{c}(\alpha_{95})\) and the accuracy on the entire ID data \(\mathrm{A_{ID}}\). 3. Coverage = \(\mathrm{Coverage}_{\mathrm{OOD}}(\alpha_{95})\) as given by Equation 2: The proportion of OOD data that falls within the competence region. For each quantity, a higher value indicates better performance (\(\uparrow\)). Note that the coverage of the competence region alone is not informative: a naive approach that accepts all samples would achieve the largest competence region but would fall short in terms of OOD-Gain or ID-Gain. Table 1 summarizes the results from our extensive sweep over classifiers, data sets, and incompetence scores. The displayed values are the medians over different domain roles and classifiers. Overall, we observe that higher accuracy is achieved in the competence region than under naive application to all OOD data instances. This confirms that incompetence score and accuracy are indeed tightly linked. However, for most DG data sets and incompetence scores, we are not able to replicate the ID accuracy. This indicates that we cannot naively expect the classifier to attain the same accuracy as observed in the ID distribution at the 95% percentile \(\alpha_{95}\). Further important findings are: * In general, feature-based (Deep-KNN, ViM) as well as logit-based incompetence scores (Softmax, Logit, Energy, Energy-React) obtain significantly higher accuracy on OOD data (higher OOD-Gain) by filtering the data to the competence region \(X_{c}(\alpha_{95})\) than the density- and reconstruction-based approaches (Mahalanobis, GMM, PCA). * The feature-based scores achieve a significant performance boost on TerraIncognita. TerraIncognita contains DG tasks that suffer from a particularly large drop in accuracy from ID to OOD distribution (see Appendix A.5). * The proportion of OOD data that falls inside the competence region (i.e., coverage) is smallest for feature-based methods, but they also provide the highest accuracy across all DG data sets. ### Open World Performance In this section, we study how different incompetence scores shape the competence region when instances of unknown classes are present in the test distribution. Accordingly, for each domain in the PACS, VLCS, Office-Home, and TerraIncognita data sets, we create a matching "open world" domain containing only instances of unknown classes. In total, we create 16 open world domains. For example, if we evaluate a model on the PACS Sketch domain, we create an open world domain containing only sketches of classes that are not in the PACS data set. We describe the procedure for creating the open world domains in detail in Appendix A.4. In the following, we restrict our analysis to the 16 domains for which an open world twin exists. We enrich the existing test domains with 0%, 5%, 10%, 15%, 20%, and 25% instances of unknown classes. A good incompetence score should mark instances of unknown classes with a high value and therefore render them outside of the competence region \(X_{c}(\alpha_{95})\). In this case, the OOD-Gain would increase as more open world instances find their way into the test set.
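As a concrete reference for these three summary statistics, the following minimal sketch computes OOD-Gain, ID-Gain, and Coverage at \(\alpha_{95}\) from incompetence scores; it assumes that lower scores indicate higher competence, and the array names are illustrative placeholders rather than code from our experiments.

```python
import numpy as np

def summary_at_alpha95(s_id_val, s_ood, correct_ood, acc_id, q=95):
    """Hypothetical helper: OOD-Gain, ID-Gain and Coverage at the alpha_95 threshold."""
    alpha_95 = np.percentile(s_id_val, q)            # threshold covering 95% of ID validation scores
    in_region = s_ood <= alpha_95                     # competence region within the OOD test domain
    acc_ood = correct_ood.mean()                      # naive accuracy on all OOD data
    acc_ood_region = correct_ood[in_region].mean()    # accuracy restricted to the competence region
    return {
        "OOD-Gain": acc_ood_region - acc_ood,         # gain from rejecting incompetent predictions
        "ID-Gain": acc_ood_region - acc_id,           # gap to the accuracy on the entire ID data
        "Coverage": in_region.mean(),                 # fraction of OOD data that is kept
    }
```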
In Figure 4 we observe that this behavior (an increasing OOD-Gain as open world instances are added to the test domain) is achieved particularly well for the ViM score. The Logit and Softmax scores are less successful in delineating unknown class instances from the competence region and therefore the OOD-Gain is less pronounced. Indeed, to test whether this observation holds statistically across all classifiers, we fit a hierarchical linear regression (Stephen & Anthony, 2002) on OOD-Gain with Classifier, Percentage Open World, and Incompetence Score as well as their interactions as fixed factors, together with Data Set and Test Domain as random factors (to account for the fact that the same classifier is evaluated in multiple data sets and test domains). The statistical results confirm the general trends visible in Figure 4. First, we find significant main effects of Percentage Open World (i.e., overall OOD-Gain increases with an increasing number of open world instances) and Incompetence Score (i.e., ViM and Deep-KNN achieve a higher overall OOD-Gain). Importantly, the only significant interaction revealed by the hierarchical regression model suggests that ViM is able to achieve the largest OOD-Gain as the fraction of open world samples increases. Note that the same trend is present for Deep-KNN, but it fails to achieve statistical significance due to its high variability (see Figure 4). Moreover, none of the effects involving the factor Classifier turn out to be significant predictors of OOD-Gain, suggesting that the results are largely classifier-independent. In the closed world setting, the differences between logit- and feature-based scores are small for most DG data sets (see e.g. Table 1). However, we have shown that this difference becomes very relevant in the setting where instances of unknown classes occur. In Appendix A.4 we show the open world behavior for all incompetence scores. Figure 4: Performance of Logit and Softmax scores (logit-based) against Deep-KNN and ViM (feature-based) for an increasing fraction of open world data (unknown classes) in the test domain. The performance gain on the OOD data (_OOD-Gain_, higher is better) for the logit-based methods is less pronounced compared to ViM and Deep-KNN. ### Estimating the Incompetence Threshold Choosing the 95% percentile of the ID distribution as the incompetence threshold can be considered as weighting the trade-off between accuracy and coverage towards coverage - only 5% of ID data are rejected. We now seek a slightly different incompetence threshold which puts more weight on the accuracy. The question we want to address is: _can we set a threshold such that a certain accuracy is achieved in the competence region?_ This question is of high practical relevance, but also particularly challenging for two reasons. First, many scores used so far have no out-of-the-box connection to the accuracy, and second, we deal with a domain shift that might result in a completely new score-accuracy relationship. Thus, as a potential remedy, we suggest learning \(\widetilde{s}_{c}(x)=p_{\text{ID}}(c(x)\neq y\mid s_{c}(x))\) and using this conditional probability as a _transformed_ score. This score represents the probability of an incorrect prediction given the original score. If we define a competence region with an incompetence threshold of \(1-\text{A}_{\text{ID}}\), we can expect an accuracy of at least \(\text{A}_{\text{ID}}\) on ID data. We hope that this relation also holds under domain shift.
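As a rough sketch of this idea (the actual model we use, described next, is a monotonically constrained network), any monotonic estimator of the ID error probability given the score can serve as the transformed score; below, isotonic regression stands in for that estimator, and the array names `s_id_val`, `err_id_val`, and `s_ood` are assumptions made for the example rather than part of our implementation.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Fit s~_c(s) ~ p_ID(c(x) != y | s_c(x) = s) on ID validation data.
# err_id_val[i] is 1.0 where the classifier errs on ID validation sample i, else 0.0.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True, out_of_bounds="clip")
iso.fit(s_id_val, err_id_val)

acc_id = 1.0 - err_id_val.mean()
threshold = 1.0 - acc_id                       # accuracy-oriented incompetence threshold
competent = iso.predict(s_ood) <= threshold    # accept only predictions in this region
```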
To predict \(\widetilde{s}_{c}(x)=p_{\text{ID}}(c(x)\neq y\mid s_{c}(x))\) we rely on an architecture that is constrained to be monotonic as proposed in (Muller et al., 2021). Therefore, we do not change the order of the scores and equip the transformed score with an inductive bias that is consistent with Definition 3.1. The transformed score also has a predictable extrapolation behavior, which is helpful when the distribution shifts. Note that since the transformation is monotonic, a threshold for the original score is also a valid threshold for the transformed score and _vice versa_. Therefore, we can also interpret this approach as estimating an incompetence threshold such that a certain accuracy is achieved. Accordingly, Figure 5 depicts the ID-Gain and Coverage for ViM and Logit (transformed), if we select \(1-\mathrm{A_{ID}}\) as the incompetence threshold. The transformed ViM score suggests that we achieve in most cases at least the ID accuracy, but at the cost of small coverage. The transformed Logit score has higher coverage, but it often fails to reproduce the ID accuracy (e.g., in the TerraIncognita data set). However, while we attain the ID accuracy in most cases, we still observe some failure cases, which makes the approach only tentative. Note that these results also suggest that the information contained in the logits is not sufficient to give suitable competence regions in the sense of our question. Figure 5: ID-Gain and Coverage for Logit and ViM if transformed as described in Section 4.5. The threshold is set such that the ID-Gain should be at least 0, i.e., it corresponds to the ID accuracy. _Above:_ ID-Gain for Logit and ViM for the different data sets. _Below:_ Coverage for Logit and ViM for the different data sets. Medians and quantiles in both figures are over different domain roles and classifiers. ## 5 Conclusion Accepting only predictions from the competence region of a classifier increases its accuracy dramatically under domain shift. Determining the fraction of samples where the classifier could be considered competent is a question of how to weigh the trade-off between accuracy and coverage. Choosing this trade-off via the incompetence threshold is application dependent and particularly challenging in the domain generalization (DG) setting where the test distribution is different from the training distribution. We showed that even in DG, it is possible to achieve higher than ID accuracy under domain shift - at the price of potentially little coverage (see Figure 2 or Appendix A.2). We investigated a coverage-oriented threshold that would reject only 5% of all instances from the training distribution. In this case, we achieved a considerable improvement under distribution shift compared to a naive application where no samples are rejected. However, at this particular threshold, we could recover the ID accuracy only in some settings. Thus, we also studied whether we can learn an accuracy-oriented threshold where some predefined ID accuracy is guaranteed in the competence region. This approach was able to replicate the ID accuracy in the competence region for most investigated domain shifts. However, for a few domains, OOD accuracy drops significantly below the expected ID accuracy, calling for a more detailed understanding of the behavior of incompetence scores in DG. Nonetheless, we showed that the accuracy of the competence region behaves monotonically with the threshold \(\alpha\) (see Proposition 3.1 and Section 4.2).
Furthermore, we investigated differences between the closed and open world settings. We found that in the open world setting, feature-based methods, such as Deep-KNN (Sun et al., 2022) and ViM (Wang et al., 2022), elicit a particularly useful competence region. In a closed world DG setting, a clear winner does not emerge, but ViM and Deep-KNN seem to be competitive to logit-based approaches. We also analyzed whether we could find differences in the accuracy of the competence region with respect to different classifiers. We could not find statistically significant effects on the accuracy in the competence region, leaving the benefit of robust algorithms for DG and different architectures for enlarged competence regions questionable. All methods are comparably fast to evaluate and therefore easily accessible for practitioners. However, the resolution of the trade-off between accuracy and coverage is not yet satisfactory in all cases, calling for more research on better competence scores.
2305.12941
On the Correspondence between Compositionality and Imitation in Emergent Neural Communication
Compositionality is a hallmark of human language that not only enables linguistic generalization, but also potentially facilitates acquisition. When simulating language emergence with neural networks, compositionality has been shown to improve communication performance; however, its impact on imitation learning has yet to be investigated. Our work explores the link between compositionality and imitation in a Lewis game played by deep neural agents. Our contributions are twofold: first, we show that the learning algorithm used to imitate is crucial: supervised learning tends to produce more average languages, while reinforcement learning introduces a selection pressure toward more compositional languages. Second, our study reveals that compositional languages are easier to imitate, which may induce the pressure toward compositional languages in RL imitation settings.
Emily Cheng, Mathieu Rita, Thierry Poibeau
2023-05-22T11:41:29Z
http://arxiv.org/abs/2305.12941v1
# On the Correspondence between Compositionality and Imitation in Emergent Neural Communication ###### Abstract Compositionality is a hallmark of human language that not only enables linguistic generalization, but also potentially facilitates acquisition. When simulating language emergence with neural networks, compositionality has been shown to improve communication performance; however, its impact on imitation learning has yet to be investigated. Our work explores the link between compositionality and imitation in a Lewis game played by deep neural agents. Our contributions are twofold: first, we show that the learning algorithm used to imitate is crucial: supervised learning tends to produce more average languages, while reinforcement learning introduces a selection pressure toward more compositional languages. Second, our study reveals that compositional languages are easier to imitate, which may induce the pressure toward compositional languages in RL imitation settings. ## 1 Introduction Compositionality, a key feature of human language, makes it possible to derive the meaning of a complex expression from the combination of its constituents Szabo (2020). It has been suggested that more compositional languages are easier to acquire for both humans and artificial agents Ren et al. (2020); Li and Bowling (2019); Ren et al. (2020); Chaabouni et al. (2020). Therefore, to better understand the factors underlying language transmission, it is crucial to understand the relationship between ease-of-acquisition and compositionality. We study the link between compositionality and ease-of-acquisition in the context of emergent communication. In this setting, two deep artificial agents with asymmetric information, a Sender and a Receiver, must develop communication from scratch in order to succeed at a cooperative game Havrylov and Titov (2017); Lazaridou et al. (2017); Lazaridou and Baroni (2020). We will refer to this mode of language learning, in which agents develop language via feedback from mutual interaction, as _communication-based learning_. Several studies have linked compositionality to ease-of-acquisition in communication-based learning. Chaabouni et al. (2020) show compositionality predicts efficient linguistic transmission from Senders to new Receivers. Conversely, Li and Bowling (2019) re-pair a Sender periodically with new Receivers, and show this ease-of-teaching pressure improves compositionality. Communication-based learning is not the only possibility for language learning, however. Humans also crucially acquire language through _imitation-based learning_, in which they learn by observing other humans' language use Kymissis and Poulson (1990). Ren et al. (2020) and Chaabouni et al. (2020) employ imitation learning, where in the first study, agents undergo a supervised imitation stage before communication-based learning, and where in the second, agents alternate between communication-based learning and imitating the best Sender. However, the dynamics of imitation are not the focus in either study. For such an important vehicle of language acquisition, imitation-based learning thus remains under-explored in the emergent communication literature. 
We extend these lines of inquiry to systematically investigate compositionality in imitation-based learning.1 Our contributions are as follows: (1) We show that imitation can automatically select for compositional languages under a reinforcement learning objective; and (2) that this is likely due to ease-of-learning of compositional languages. Footnote 1:...for artificial agents. We do not test theories of human imitation learning. Setup We study imitation in the context of referential communication games Lewis (1969). In this setting, a Sender agent observes an object \(x\) and transmits a message \(m\) to a second Receiver agent. Using this message, the Receiver performs an action for which both agents receive a reward. Over the course of the game, agents converge to a referential system \((x,m)\), which we refer to as an emergent language. Measuring CompositionalityEvaluating compositionality in emergent languages is not straightforward given their grammars are a-priori unknown. Therefore, we quantify compositionality using topographic similarity (topsim) Kirby and Brighton (2006), a grammar-agnostic metric widely applied to emergent languages in the literature. Topsim is defined as the Spearman correlation \(\rho\) between Euclidean distances in the input space and Levenstein distances in the message space- that is, it captures the intuition that nearby inputs should be described with similar messages. While we consider other compositionality metrics such as positional disentanglement Chaabouni et al. (2020), we focus on topsim due to its high correlation with generalization accuracy (\(\rho=0.83\)) Rita et al. (2022). See appendix A.3 for extended experiments on compositionality metrics and generalization. ### Imitation Task To investigate whether compositional languages are selected for in imitation, we posit an imitation task where one new _Imitator_ Sender or Receiver simultaneously imitates several _Expert_ Senders or Receivers with varying topsims. Both Sender and Receiver agents are parameterized by single-layer GRUs Cho et al. (2014) that are deterministic after training (see appendix B for implementation).2 While we explore imitation for both agents, we focus on Sender imitation in the main text, and extend to Receiver imitation in appendix E. A minimal example of imitation learning with only one Expert Sender-Receiver pair is shown in fig. 1. Footnote 2: Experiments are implemented using EGG Kharitonov et al. (2021). Code may be found at [https://github.com/chengemily/EGG/tree/imitation](https://github.com/chengemily/EGG/tree/imitation). The Sender imitation task is as follows: given a set of \(k\) Expert Senders, we train an identical, newly initialized Sender on the Experts' inputs and outputs \((x,m)\). That is, for each round of training, all \(k\) Experts as well as the Imitator Sender receive input \(x\) and output \(m^{(1)}\cdots m^{(k)}\) and \(m^{I}\), respectively. The Imitator is then tasked to minimize the difference between their output and a uniform mixture of the \(k\) Expert outputs. DatasetData in the imitation task consists of inputs and outputs of pairs of Expert agents trained to convergence on a communication game- in our case, the two-agent reconstruction task of Kottur et al. (2017). To generate the Experts, we pre-train \(N=30\) Sender-Receiver pairs on this reconstruction task to high validation accuracy (\(0.99\pm 0.01\)) (task and training details in appendix A). 
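For reference, topographic similarity as defined above reduces to a few lines of NumPy/SciPy; the sketch below is illustrative and is not the EGG-based implementation used in our experiments (`inputs` is assumed to be an array of input vectors and `messages` a list of symbol sequences).

```python
import itertools
import numpy as np
from scipy.spatial.distance import euclidean
from scipy.stats import spearmanr

def levenshtein(a, b):
    # dynamic-programming edit distance between two message sequences
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def topographic_similarity(inputs, messages):
    """Spearman correlation between pairwise Euclidean distances in input space
    and pairwise Levenshtein distances in message space."""
    pairs = list(itertools.combinations(range(len(inputs)), 2))
    d_in = [euclidean(inputs[i], inputs[j]) for i, j in pairs]
    d_msg = [levenshtein(messages[i], messages[j]) for i, j in pairs]
    return spearmanr(d_in, d_msg).correlation
```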
Expert training produces the following data: 1) inputs \(x\); 2) messages \(m\) corresponding to Expert Senders' encodings of \(x\); and 3) outputs \(\hat{x}\), the Expert Receivers' reconstructions of \(x\) given \(m\). Each input \(x\) denotes an object in an "attribute-value world", where the object has \(n_{att}\) attributes, and each attribute takes \(n_{val}\) possible values. We represent \(x\) by a concatenation of \(n_{att}\) one-hot vectors, each of dimension \(n_{val}\). On the other hand, messages \(m\) are discrete sequences of fixed length \(L\), consisting of symbols taken from a vocabulary \(V\). We set \(n_{att}=6\), \(n_{val}=10\), \(|V|=10\), and \(L=10\), corresponding to a relatively large attribute-value setting in the literature (Table 1 of Galle et al. (2022)). We split the input data (\(n=10^{6}\)) into a training set and two holdout sets. Similar to Chaabouni et al. (2020), we define two types of holdouts: a zero-shot generalization set (\(n=354294\)), where Figure 1: Imitation setup for one Expert pair of agents. A newly initialized Imitator Sender (lower left) and Imitator Receiver (lower right) imitate an Expert Sender (top left) and an Expert Receiver (top right). The Expert Sender has been trained on a communication game with the Expert Receiver (bold arrows), so that when the Sender encodes input \(x\) into message \(m\) (e.g., “abcdcbd”), the Receiver decodes \(m\) into \(\hat{x}_{E}\), reconstructing \(x\). Imitation (dotted arrows) consists of training the Imitator on inputs/outputs of the respective Expert: (\(x\), \(m\)) for the Sender and (\(m\), \(\hat{x}_{E}\)) for the Receiver. one value is held-out during training, and an in-distribution generalization set (\(n=531441\)). The training set, on which we both train and validate, represents 1% of in-distribution data (\(n=5315\)). These data splits are used in Expert training and are inherited by the imitation task (see appendix B.2 for details on generating the data split). Imitation learning algorithmsWhile imitation is classically implemented as supervised learning, we test two imitation learning procedures: 1) supervised learning (SV) with respect to the cross-entropy loss between Imitator and Expert outputs; and 2) reinforcement learning with the REINFORCE algorithm (RF) [23], using per-symbol accuracy as a reward. When using REINFORCE, we additionally include an entropy regularization term weighted by \(\lambda\) to encourage exploration, and subtract a running mean baseline from the reward to improve training stability [23]. See appendix D for loss functions and B.2 for detailed hyperparameter settings. ### Evaluation To evaluate properties of imitation learning, we identify three descriptors of interest: validation accuracy, ease-of-imitation, and selection of compositional languages. AccuracyWe evaluate imitation performance between an Imitator and Expert by the average per-symbol accuracy between their messages given an input. When using REINFORCE, training accuracy is computed using the Imitators' sampled output while validation accuracy is computed using the Imitators' argmax-ed output. Ease-of-imitationWe evaluate ease-of-imitation of a language two ways. First, imitation sample complexity (\(T\)): the number of epochs needed to reach 99% validation accuracy, and second, imitation speed-of-learning (\(\mathsf{SOL}^{I}\)): the area under the validation accuracy curve, cut-off at \(t\) epochs chosen by visual inspection of convergence. 
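The REINFORCE variant of the imitation objective described above (per-symbol accuracy reward, running-mean baseline, entropy bonus weighted by \(\lambda\)) can be sketched as follows in PyTorch; tensor names and shapes are assumptions made for illustration, and the actual training code is the EGG-based implementation referenced earlier.

```python
import torch

def reinforce_imitation_loss(log_probs, samples, expert_msgs, entropy, baseline, lam=0.1):
    """log_probs: (batch, L) log-probabilities of the symbols sampled by the Imitator;
    samples / expert_msgs: (batch, L) integer symbols; entropy: (batch,) policy entropy;
    baseline: running mean of past rewards. Returns the loss to minimize."""
    reward = (samples == expert_msgs).float().mean(dim=1)       # per-symbol accuracy reward
    advantage = (reward - baseline).detach()                    # baseline reduces variance
    policy_loss = -(advantage.unsqueeze(1) * log_probs).mean()  # REINFORCE estimator
    return policy_loss - lam * entropy.mean()                   # entropy regularization
```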
Selection of compositional languagesSender imitation consists of learning one-to-one input-to-message mappings from a sea of one-to-many Expert mappings. Then, the Imitator's language will consist of a mixture of Expert languages, where the mixture weights reveal the extent of selection. In this mixture, we proxy the Imitator's learned weight for an Expert as the proportion of messages in the training set for which Imitator accuracy on the Expert message is the highest. Note that the coefficients may not add to one: if the highest Expert accuracy for a message does not exceed chance (10%), we consider the message unmatched. To quantify selection, we use the intuition that selection corresponds jointly to peakedness and asymmetry in the learned distribution over Expert languages sorted by topsim. We evaluate peakedness using the Shannon entropy and asymmetry using Fisher's moment coefficient of skew of Expert weights. Formally, let there be \(k\) Experts, where Experts are sorted in ascending order of top-sim (Expert \(i\)=1 is the least and \(i\)=\(k\) is the most compositional, respectively). The Imitator learns a mixture of the Expert languages with weights \(W:=(w_{i})_{1\leq i\leq k}\) (normalized). Given \(W\), we evaluate peakedness with: \[\mathcal{H}(W)=-\sum_{i=1}^{k}w_{i}\log(w_{i}). \tag{1}\] To quantify asymmetry of expert weights, we estimate the Fisher's moment coefficient of skew: \[\tilde{\mu}(W)=\frac{1}{k}\sum_{i=1}^{k}\left(\frac{w_{i}-\mu}{\sigma}\right) ^{3}, \tag{2}\] where \(\mu\) is the mean and \(\sigma\) is the standard deviation of \(W\). A skew of 0 implies perfect symmetry, positive skew corresponds to a right-tailed distribution, and negative skew corresponds to a left-tailed distribution. Intuitively, the more negative the skew of the Expert weights, the more weight lies on the right side of the distribution, hence the greater "compositional selection effect". We proxy selection, then, by a negative skew (more weight assigned to high-topsim Experts) and low entropy (peakedness) in the Expert weight distribution. ## 3 Imitation and Selection of Compositionality We present results for imitation on mixtures of \(k=2\)-\(5\) Expert Senders. First, we generate 30 Expert languages from the referential task, initially considering Expert distributions corresponding to evenly-spaced percentiles of topsim, including the minimum and maximum \((0.26,0.43)\). For example, when \(k=3\), we take the lowest, \(50^{th}\) percentile, and highest-topsim languages. All results are aggregated over \(5\) random seeds after \(2000\) training epochs. We find that (1) whether Imitators prefer compositional Experts depends crucially on the learning algorithm: imitation by reinforcement results in marked compositional selection compared to supervision; and (2) compositional selection also depends on variance of expert topsims, \(\lambda\) entropy regularization coefficient, and number of Experts. The distribution of learned Expert weights in fig. 2, as well as imitation validation accuracy curves in fig. 2, evidence that in imitation by supervision, the empirical mixture is closer to uniform than when imitating by reinforcement. Otherwise, when optimizing using reinforcement, the Imitator selects more compositional languages. The shape of the Expert weight distribution is tempered by the entropy regularization coefficient \(\lambda\): smaller \(\lambda\) results in greater compositional selection (that is, lower entropy and more negative skew) of the weight distribution (fig. 3). 
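For completeness, the two selection statistics of Equations (1) and (2) amount to the following few lines; the function below is a plain restatement of those formulas, applied to the normalized Expert-weight vector with Experts sorted by ascending topsim, rather than the analysis code behind the reported numbers.

```python
import numpy as np

def selection_stats(w):
    """Entropy (Eq. 1) and Fisher's moment coefficient of skew (Eq. 2) of Expert weights."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                                   # normalize the learned weights
    entropy = -np.sum(w[w > 0] * np.log(w[w > 0]))    # Eq. (1): low entropy = peaked mixture
    z = (w - w.mean()) / w.std()
    skew = np.mean(z ** 3)                            # Eq. (2): sample skew of the weights
    return entropy, skew
```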
At the limit, imitation by supervision results in the highest entropy and the skew that is closest to zero. We then test the effect of Expert topsim distribution _asymmetry_ on the learned weights. To do so, for each \(k>2\), we generate \(10\) Expert topsim distributions with varying skew, following the procedure outlined in appendix D.2 (when \(k=2\), skew is mechanically \(0\)). We find that for both REINFORCE and supervision, holding \(k\) equal, the skew and entropy of the learned Expert weight distribution are robust (i.e., not correlated) to the skew of the underlying Expert topsim distribution (fig. D.2). This is desirable when imitating by reinforcement and undesirable when imitating by supervision: for example, consider Expert topsim distributions [low high high] (skew\(<0\)) and [low low high] (skew\(>0\)). In both cases, REINFORCE will select a high-topsim Expert, whereas supervision will weight all Experts equally; that is, supervision is unable to de-select poor topsims. Using all Expert topsim distributions generated so far (those where topsim ranks are evenly spaced, and those exhibiting varying skews), we investigate the effect of topsim distribution _spread_, quantified by standard deviation, on the learned weights. In fig. 4, we note a significant negative effect of Expert topsim standard deviation on the degree of compositional selection. That is, the more dispersed the Expert topsims, the more the Imitator can differentiate between and select compositional Experts (shown by a more negative skew in learned Expert weights). Though this correlation is highly statistically significant for both REINFORCE and supervision, the effect is \(\sim 8\)x greater for REINFORCE, demonstrating that the spread between expert compositionalities plays a more important role in the degree of selection by reinforcement. Finally, selection is less salient as the number of Experts increases, seen by the increasing entropies and skews of Expert weights (figs. 3 and D.3). Results for \(k>3\) may be found in appendix D. Figure 4: The skew of learned Expert Sender weights vs. the standard deviation of the Expert topsim (\(\pm 1\) std.) for RF (left) and SV (right) for \(k=2\)-\(5\) Experts. Expert weight skew and Expert topsim standard deviation are highly and significantly correlated (\(\alpha=1\)e-\(6\)), and the linear effect \(m\) is much (6x) higher for RF than for SV. Figure 3: Entropy (left) and skew (right) (\(\pm 1\) std.) of learned Expert weights by a Sender Imitator for \(k=2\) and \(3\) Experts. Expert languages' topsims range evenly from 0.26 to 0.43. Both entropy and skew increase toward the entropy of a uniform distribution and the skew of a symmetric distribution (\(=0\)), respectively, as exploration (\(\lambda\)) increases, attaining their maxima in supervision (SV). Figure 2: Sender Imitator's learned weights (\(\pm 1\) std.) on \(k=2\) (top) and \(k=3\) (bottom) Expert languages whose topsims range evenly from 0.26 to 0.43. The left two columns correspond to imitation by reinforcement (RF). As the entropy coefficient \(\lambda\) increases (left to middle), the weights are more uniform, and are most uniform in the supervised setting (right). Refer to fig. 3 for skews and entropies of the distributions. Understanding why REINFORCE selects for compositional languagesThe different results between the optimization algorithms correspond to inherent differences in learning objective.
Successful imitation minimizes the Kullback-Leibler divergence between the Imitator \(\pi^{I}\) and the Expert policies \(\pi^{E}\); supervision is classically known to minimize the _forward_ KL divergence \(D_{KL}(\pi^{E}||\pi^{I})\), while reinforcement minimizes the _reverse_ KL divergence \(D_{KL}(\pi^{I}||\pi^{E})\) with respect to \(\pi^{I}\). That is, imitation by supervision is mean-fitting while imitation by reinforcement is mode-fitting- the former learns a uniform mixture of Expert languages (see appendix D.4 for proof), and the latter selects the best Expert language. ## 4 Speed-of-Imitation May Explain Compositional Selection Thus far, we have seen that imitation by reinforcement selects compositional languages. This is likely because higher topsim languages are _easier to imitate_. We establish a positive and statistically significant relationship between topsim and ease-of-imitation, expanding the explorations in Ren et al. (2020); Li and Bowling (2019); Chaabouni et al. (2020) (see appendix C for experimental details). We evaluate ease-of-imitation using \(k=1\), after \(t=500\) (SV) and \(2000\) epochs (RF), where \(t\) is chosen based on validation accuracy convergence. Correlations between topsims of \(30\) Expert languages and Imitator performance (averaged over three random seeds) are shown in table 1. We find that for both imitation by supervision and reinforcement, topsim is (1) significantly negatively correlated to imitation sample complexity \(T\); (2) significantly positively correlated to speed-of-imitation SOL. Moreover, correlations between topsim and ease-of-imitation are stronger than those between Expert validation accuracy and ease-of-imitation (table C.1). This suggests that the positive relationship between compositionality and ease-of-imitation is not due to a confound of high validation accuracy. ## 5 Discussion Having (1) demonstrated a selection of compositional languages in imitation by reinforcement; (2) established a significant correlation between topsim and ease-of-imitation; we offer the following explanation for compositional selection: _mode-seeking behavior in reinforcement learning exploits ease-of-learning of compositional languages, resulting in a selection of compositionality._ While both imitation and ease-of-learning of compositional languages have been instrumentalized in population training, they are engineered in a top-down way: in Chaabouni et al. (2022), agents imitate the best-accuracy agent, who is algorithmically designated as the teacher; in Ren et al. (2020), imitation is stopped early to temporally select compositional features.3 Our work, using basic RL principles, proposes an alternative mechanism that selects compositional languages while requiring minimal engineering and assumptions. Footnote 3: We did not succeed in replicating results in Ren et al. (2020) (see appendix C). Selection by RL imitation, using the same ease-of-learning argument, applies to not only compositionality but also potentially to other traits, e.g., language entropy or message length. That is, RL imitation _naturally promotes any learnability advantage_ among candidate languages without manual intervention, while _agnostic to the signaling system_. This may then be leveraged alongside communication-based learning in population-based emergent communication, where imitation would persist easy-to-learn linguistic features. 
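This mode-fitting versus mean-fitting asymmetry can be made tangible with a small numerical illustration that is not taken from the paper: a bimodal "mixture of Experts" target is approximated by a single Gaussian, once under the forward KL (as in supervision) and once under the reverse KL (as in reinforcement); all densities and grid choices below are toy assumptions.

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def gauss(mu, sigma):
    g = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return g / (g.sum() * dx)                  # numerically normalized density on the grid

def kl(a, b):
    return float(np.sum(a * np.log((a + 1e-12) / (b + 1e-12))) * dx)

p = 0.5 * gauss(-3, 0.7) + 0.5 * gauss(3, 0.7)  # two Expert "modes"
candidates = [(mu, s) for mu in np.arange(-4.0, 4.5, 0.5) for s in (0.7, 1.5, 3.0)]

best_forward = min(candidates, key=lambda c: kl(p, gauss(*c)))   # supervision: KL(p || q)
best_reverse = min(candidates, key=lambda c: kl(gauss(*c), p))   # reinforcement: KL(q || p)
print(best_forward, best_reverse)  # forward KL spreads over both modes; reverse KL commits to one
```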
\begin{table} \begin{tabular}{l l|l l l l} & & \(T_{S}\) & \(T_{R}\) & \(\texttt{SO}_{S}^{I}\) & \(\texttt{SO}_{R}^{I}\) \\ \hline **SV** & \(\rho\) & -0.65 & -0.80 & 0.65 & 0.75 \\ & \(R^{2}\) & -0.66 & -0.80 & 0.65 & 0.76 \\ \hline **RF** & \(\rho\) & -0.66 & -0.60 & 0.45 & 0.59 \\ & \(R^{2}\) & -0.66 & -0.68 & 0.41* & 0.63 \\ \hline \end{tabular} \end{table} Table 1: Spearman \(\rho\) and Pearson’s \(R\) between Expert topsim and Imitator learning speed (Sender=_S_, Receiver=\(R\)). Unless otherwise stated, all correlations are significant using \(\alpha=1\)e-2. *(\(\alpha=0.05\)) ### Limitations There are several limitations to our work. First, although we choose the attribute-value dataset due to its high degree of interpretability and control, we acknowledge that its simplicity limits the impact of our findings. Though imitation by reinforcement is a data-agnostic mechanism, we have yet to explore how it behaves in more complex settings, such as using naturalistic image inputs or embodied communication. We defer to Chaabouni et al. (2022); Galke et al. (2022) for further discussion on scaling up communication settings. A second limitation of our results is that we do not explore how imitation-based learning scales to \(k>5\) Experts. In particular, our hyperparameter regime handles up to around \(k=5\) Experts- very preliminary analyses on \(k\geq 10\) Experts suggest a need to also scale up hyperparameters such as agent size and communication channel capacity. When training agents to imitate, one must therefore consider feasibility of the learning problem- for example, as a function of the imitation network topology, communication channel size, agent size, etc- in order for training to converge. Finally, although our work is inspired by imitation learning in humans, the extent to which simulations explain human linguistic phenomena is not clear. We intend for our work to only serve as a testbed to understand communication from a theoretical perspective. ## Ethics Statement Because our work uses synthetic data, it has little immediate ethical impact. However, our work may enable large populations of communicating agents down the line, which could have a range of civilian or military purposes. ## Acknowledgements We would like to greatly thank Marco Baroni for feedback on experiments and manuscript; Paul Michel and Rahma Chaabouni for early feedback on research direction; the three anonymous reviewers, Jeanne Bruneau-Bongard, Roberto Dessi, Victor Chomel, Lucas Weber and members of COLT UPF for comments on the manuscript. M.R. also would like to thank Olivier Pietquin, Emmanuel Dupoux and Florian Strub. This work was funded in part by the French government under management of Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute), and by the ALiEN (Autonomous Linguistic Emergence in Neural Networks) European Research Council project no. 101019291. Experiments were conducted using HPC resources from TGCC-GENCI (grant 2022-AD011013547). M.R. was supported by the MSR-Inria joint lab and granted access to the HPC resources of IDRIS under the allocation 2021-AD11012278 made by GENCI.
2301.11767
CAPoW: Context-Aware AI-Assisted Proof of Work based DDoS Defense
Critical servers can be secured against distributed denial of service (DDoS) attacks using proof of work (PoW) systems assisted by an Artificial Intelligence (AI) that learns contextual network request patterns. In this work, we introduce CAPoW, a context-aware anti-DDoS framework that injects latency adaptively during communication by utilizing context-aware PoW puzzles. In CAPoW, a security professional can define relevant request context attributes which can be learned by the AI system. These contextual attributes can include information about the user request, such as IP address, time, flow-level information, etc., and are utilized to generate a contextual score for incoming requests that influence the hardness of a PoW puzzle. These puzzles need to be solved by a user before the server begins to process their request. Solving puzzles slow down the volume of incoming adversarial requests. Additionally, the framework compels the adversary to incur a cost per request, hence making it expensive for an adversary to prolong a DDoS attack. We include the theoretical foundations of the CAPoW framework along with a description of its implementation and evaluation.
Trisha Chakraborty, Shaswata Mitra, Sudip Mittal
2023-01-27T15:06:41Z
http://arxiv.org/abs/2301.11767v1
# CAPoW: Context-Aware AI-Assisted Proof of Work based DDoS Defense ###### Abstract Critical servers can be secured against distributed denial of service (DDoS) attacks using proof of work (PoW) systems assisted by an Artificial Intelligence (AI) that learns _contextual_ network request patterns. In this work, we introduce CAPoW, a _context-aware_ anti-DDoS framework that injects latency _adaptively_ during communication by utilizing context-aware PoW puzzles. In CAPoW, a security professional can define relevant request context attributes which can be learned by the AI system. These contextual attributes can include information about the user request, such as IP address, time, flow-level information, etc., and are utilized to generate a contextual score for incoming requests that influence the hardness of a PoW puzzle. These puzzles need to be solved by a user before the server begins to process their request. Solving puzzles slow down the volume of incoming adversarial requests. Additionally, the framework compels the adversary to incur a cost per request, hence making it expensive for an adversary to prolong a DDoS attack. We include the theoretical foundations of the CAPoW framework along with a description of its implementation and evaluation. ## I Introduction An organization protects its critical servers from distributed denial of service (DDoS), which may contain valuable information, such as intellectual property, trade secrets, employee personally identifiable information (PII), etc. To launch a DDoS attack, the malicious users send a flood of requests to these servers. As a result, requests from legitimate users either experience delays or their requests are dropped. For more than two decades, DDoS attacks have been a prominent issue and even today it is far from being solved as these attacks are cheaper to launch than to defend, especially with the rise of DoS-as-a-Service [25]. PoW system works by requiring incoming requests to expend resources solving an _computational puzzles_ to prove ones legitimacy. The general system consists of two parts: _prover_ and _verifier_. The prover finds the solution to the computational puzzles, when solved, sends the solution to the verifier. In a simple networked client-server environment, the user-side contains the prover component, and the server-side contains the verifier components. Researchers have proposed PoW-based solutions for DDoS which makes the attack expensive to launch [4, 21, 34]. Although, these solutions suffer from a lack of intuition on how to set puzzle difficulty and adaptability in different settings. In this paper, we develop a defensive tool that emphasizes on learning the normal activity patterns of legitimate users. The idea behind the tool is to penalize the users that _deviates_ from normal activity patterns by issuing them _hard_ puzzles and at the same time issuing _easy_ puzzles to users who follow the pattern. We leverage a _context-aware AI model_ that can learn these normal activity patterns by contextual information. The term _context_ within the scope of legitimate activity patterns can be defined as request attributes, such as, IP address, time, flow-level information, etc. When the context is _IP address_, network activity is considered deviated if the source IP address is part of a known blocked IP list. Whereas, when the context is _time_, network activity is considered deviated if it arrives at an unusual time compared to the normal activity pattern. 
Security professionals can select relevant request context attributes which can be learned by the AI models. The concept of _context-aware AI models_ is derived from context-aware computing introduced by Dey et. al [9]. We introduce CAPoW tool, a _context-aware_ AI-assisted PoW system that helps to secure critical servers against DDoS attacks. Our framework utilizes context-aware AI models that learn the expected context pattern from server-side activity-logs. The activity-logs are stored and managed by the server which contains user activity (IP address, timestamp, flow-level data, etc). The deviation from the learned pattern is then leveraged to generate a _contextual score_ for incoming requests which tunes the difficulty level of the PoW puzzle to be solved. _The underlying defensive strategy curtails the ability of a malicious user to prolong the attack by adaptively introducing latency through PoW puzzles and compelling malicious users to expend more resources to complete an attack._ The main contributions of this paper are as follows. **Contribution 1:** We introduce CAPoW, an anti-DDoS framework that injects latency adaptively, i.e., the framework ensures that malicious users incur higher latency than legitimate users based on the deviation in context pattern. We discuss the process of context score calculation from deviation in Section III-B. **Contribution 2:** We propose a policy component that is created by security personnel to incorporate server-specific security demands. We provide intuition for policy construction in Section III-C. **Contribution 3:** We discuss an instance of CAPoW implementation and perform evaluation to illustrate the effectiveness of CAPoW. The implementation details are discussed Section IV. The code is released on GitHub [3]. The rest of the paper is structured as follows. In Section II we discuss the threat model and attack definitions. We discuss the theoretical foundation of CAPoW in Section III and CAPoW implementation in Section IV. We discuss related works of the PoW system and DoS defense in Section V, followed by the conclusion in Section VI. ## II Threat Model In this section, we present a series of assumptions associated with the adversary's abilities. An adversary \(\mathbb{A}\) initiates a DDoS attack by sending a flood of requests to the server. The adversary's intention is to overwhelm the server's computational resources and disrupt legitimate user communication with the server. Although the attack described is a variant of DDoS, the usefulness of CAPoW can be extended to other variants. These assumptions described below are similar to previous literature on DDoS defense using proof of work [17] and in some sense, we consider a stronger adversary. **Assumption 1**. Adversary \(\mathbb{A}\) can eavesdrop on the communication channel of the server. \(\mathbb{A}\) cannot modify any user request and cannot read any request payload data. Assume a secure network communication channel is used by the user to send request packets to the server. The user performs encryption on the payload data, including the puzzle solution, and sends the packet to the server. When an adversary eavesdrops on the channel, they can read the source and destination IP of the packet, but they cannot read the encrypted payload consisting of the puzzle parameters. Additionally, the adversary cannot flip bits of the packet and pollute the puzzle solution included in the payload. 
Hence, we assume that the adversary has no knowledge of the puzzle parameters solved by a user nor can it deny service to a user who has correctly solved the puzzle. In Section IV, we utilize assumption 1 to claim that the adversary cannot reverse engineer the base AI models to receive easier PoW puzzles. **Assumption 2** Adversary \(\mathbb{A}\) can spoof user identifiers, such as IP addresses, and deceive a subset of underlying AI models. CAPoW uses AI models to learn legitimate network activity patterns and the deviation from the pattern is directly proportional to the difficulty of PoW puzzles to be solved by the user. \(\mathbb{A}\) can spoof a legitimate user IP address and send requests to the server. An intelligent adversary would send probe packets to the server using a set of spoofed IP addresses and only utilize IPs that require puzzles to be solved. This way, the adversary is able to deceive the AI model and reduce the latency introduced. In Section IV, we discuss that sending probe packets becomes costly for an adversary to deceive multiple base AI models. **Assumption 3** Adversary \(\mathbb{A}\) cannot pollute the training data of the AI models. The AI model used by CAPoW learns normal activity patterns and calculates a deviation which directly influences the hardness of the puzzle. Hence, it is essential that the AI learns normal activity patterns from an unpolluted activity-log to maximize the effectiveness of CAPoW. In Section IV-B, we describe the training process of a context-aware AI model where a security professional is deployed to select secure data to train the base AI models. ## III CAPoW Architectural Design and Theoretical Foundations In this section, we describe the high-level architecture of the core components and their inner workings that molds the CAPoW framework. As shown in Figure 1, CAPoW consists of four core components: _request context extractor, context-aware AI models, policy_, and _proof-of-work_. The AI models learn the normal activity pattern from previous activity-logs. When an incoming request packet is seen, first the context attributes are extracted from the new request packet (see Section III-A). Then, the deviation between the learned normal context pattern and new request contexts is computed to calculate _context score_. We elaborate on AI model training and score calculation in Section III-B. The policy component of CAPoW provides security professionals with certain abilities that strengthen the effectiveness of CAPoW in various security settings (see Section III-C). The context score influences the difficulty of PoW puzzle. In Section III-D, we discuss the proof-of-work component and how the PoW puzzles can curtails the ability of a malicious user to prolong the attack by adaptively introducing latency. **Data Flow.** From Figure 1, the flow of data between different components of CAPoW is described below. (1) When a new incoming packet is seen, the request packet is forwarded to the request context extractor. (2) The extracted request context attributes are passed to context-aware AI models which learned expected context patterns from activity logs. The context score generated by individual AI models is combined using a function \(f\) to produce the final context score (\(\Phi\)). (3) The context score is forwarded to the policy component which sets certain parameters, such as, it maps the context score to a puzzle difficulty level. 
(4) The difficulty level is passed to the puzzle solver which solves a puzzle of the defined difficulty level using a function _func_. (5) The computed solution is sent to the verifier. (6) When the solution is correct, the request packet is placed on the server queue for processing. ### _Context Extraction from Request Packet_ The concept of context-aware computing was introduced by Dey et. al [9], where the proposed mechanism improved human-to-computer interaction by delivering contextually relevant data. In the paper, the author proposed an abstract definition of _context_, which is a piece of information that clarifies the characteristics of an entity. When a system contains contextual data about a situation or entity, the system can take context-aware decisions which improve the overall quality of any general decision-making. In a security setting, a certain request is deemed suspicious if the associated request attributes deviate from the usual network activity pattern. For instance, a request packet of payload size \(65500\) bytes is considered suspicious due to deviation when the expected normal payload size pattern is in the order of a few hundred bytes. To this end, we define _context_ of a request packet as request attributes, such as source IP address, time of arrival, port address, time to live (TTL), and other flow-level attributes. The contexts attributes to be extracted are selected by security personnel via policy component. The list of selected context attributes are reformed periodically to update the defensive posture of the organization deployed. When a new request packet is seen, the request context extractor component extracts the selected context attributes from the request packet and feeds it to the context-aware AI models. ### _Context-Aware AI Model_ The framework component consumes activity-logs supplied by security personnel as input to generate a context-aware AI model. The model is generated by considering a set of request packets from the activity-log \(\lambda=\{\lambda_{0},\lambda_{1},\lambda_{2},...,\lambda_{i}\}\). Each request packet \(\lambda_{i}\) consists of a set of request context attributes, \[\mathbb{C}_{\lambda_{i}}=\{\mathbb{C}_{0\lambda_{i}},\mathbb{C}_{1\lambda_{i} },\mathbb{C}_{2\lambda_{i}},...,\mathbb{C}_{k\lambda_{i}}\} \tag{1}\] where \(k\) is the number of request context attributes. \(\mathbb{C}_{k}\) is represented as \(n\)-dimensional vector. When an \(n\)-dimensional vector of a single context for \(\lambda\) requests is projected in Euclidean space, such relative positioning produces a cluster. For \(k\) context attributes, \(k\) clusters are generated. The clusters represent the normal activity pattern. To evaluate a new incoming request, request context extractor from Section III-A, feeds the context attributes which are then projected in Euclidean space. The deviation \(\Delta(p,q)\) of context \(\mathbb{C}_{k}\) is calculated as the Euclidean distance between the corresponding normal activity cluster and the new request projection, \[\Delta(p,q)=\sqrt{\sum_{j=1}^{n}(q_{j}-p_{j})^{2}} \tag{2}\] where \(p\) is projected single context attribute of the new request and \(q\) is center of a normal cluster of the same context. Consequently, the context score \(\Phi\) for \(\mathbb{C}_{k}\) is calculated as, \[\Phi(\mathbb{C}_{k})=\left(\frac{\Delta(p,q)}{\Delta_{max}}\right)\times I \tag{3}\] where \(\Delta_{max}\) is the maximum possible deviation for \(\mathbb{C}_{k}\). 
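For reference, Equations (2) and (3) amount to the following few lines; the cluster center \(q\), the maximum deviation \(\Delta_{max}\), and the choice \(I=10\) are assumed to be provided by the trained model and the policy, and the snippet is illustrative rather than the released implementation.

```python
import numpy as np

def context_score(p, q, delta_max, I=10):
    """Eq. (2)-(3): Euclidean deviation of the new request's context vector p from the
    normal-activity cluster center q, rescaled to a context score in [0, I]."""
    delta = np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))  # Eq. (2)
    return (delta / delta_max) * I                                                    # Eq. (3)
```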
The score is in the range of \([0,I]\), where \(I\in\mathbb{Z}^{+}\). In Section IV-B, we discuss the implementation of context-aware AI models. ### _Policy_ The policy component is a rule-based strategy that facilitates the adaptive security guarantees of CAPoW. The rules are set in policy files that determine certain CAPoW characteristics. These characteristics include context-aware AI model specifications, such as which activity-logs are supplied to train the AI models, which context attributes hold more significance over the others, etc. Additionally, these parameters include proof-of-work component specifications, such as the rule used to translate the context score to puzzle difficulty, the variant of PoW puzzle to be used, etc. Hence, it is evident that policy construction is a non-trivial task and requires consideration of various facets of the deployed server to bolster the effectiveness of CAPoW in different security settings. To perform the complex task of policy design, _security professionals_ are deployed to design server-specific policies. **Intuition for AI model parameters.** From Section III-A, a request packet consists of several context attributes. Some contexts hold more importance than others depending on the type of attack being defended against. For instance, payload size is an important context attribute to protect against large payload DDoS attacks [37], but less important for defending against volumetric DDoS attacks. The policy includes the weight associated with each context attribute to provide an attack-specific defense. Additionally, a policy includes the source of data used to train the AI models, to avoid model data pollution attacks (Assumption 3). Fig. 1: The figure illustrates the architecture of CAPoW framework. CAPoW consists of four core components: request context extractor, context-aware AI model, policy, and proof of work. The AI model learns context patterns from previous activity-logs selected by security personnel and calculates a context score based on the deviation of the incoming packet. The calculated score is mapped to the PoW puzzle difficulty level as defined by the security professional in policy files. The proof of work component performs evaluations to find the constrained solution. The request with a correct solution is placed on the server queue for processing. **Intuition for proof-of-work parameters.** The context score produced by the context-aware AI model is translated to the PoW difficulty level. The policy includes the rules to translate context scores to puzzle difficulty. In Section IV-C, we implement three rules to show that this translation leads to adaptively injected latency. As stated by Green et al. [13], amongst groups of users, the CPU capacity of devices can vary by \(10\)x, whereas memory capacity may only vary by \(4\)x. Hence, when a memory-bound PoW puzzle is used, the adversary is less likely to have an edge over a legitimate user, since the discrepancy in memory capacity is smaller than the discrepancy in CPU power exploited by CPU-bound puzzles. The policy includes the means to set the variant of puzzle depending on the expected user base. ### _Proof of Work_ Classical proof of work systems [4, 10, 34] consist of two main components - _prover_ and _verifier_. The prover provides verifiable evidence of expending computational resources by solving puzzles as assigned by the server. On the other hand, the verifier validates whether the solved puzzle yielded the desired solution.
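As a concrete illustration of this prover/verifier split, the sketch below uses a hash-preimage puzzle whose difficulty is the number of leading zero bits required of the digest; the construction and its parameters are illustrative assumptions in the spirit of the hash-based instance implemented in Section IV, not the released code.

```python
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    value = int.from_bytes(digest, "big")
    return 256 - value.bit_length() if value else 256

def solve(challenge: bytes, difficulty: int) -> int:
    """Prover: search for a nonce so that SHA-256(challenge || nonce) has at least
    `difficulty` leading zero bits (expected work grows as 2**difficulty hashes)."""
    for nonce in count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Verifier: a single hash suffices to check the prover's claimed solution."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty
```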
When PoW systems are used as DoS defense [4, 26, 35], a user commits some computation resources (CPU cycle, bandwidth, etc.) and _burns_ one of these resources for solving the PoW puzzle to prove their legitimacy. In CAPoW, when a user deviates from a normal activity pattern, the PoW component issues a PoW puzzle to request proof of legitimacy. The difficulty level of PoW puzzle is a function of context score. The rule to translate to context score to difficulty level is defined under policy component (Section III-C). PoW solver uses a function _func_ to solve the assigned difficulty puzzle (see Figure 1). In general terms, this function injects two types of cost: (1) direct cost of _resource burning_[14], and (2) indirect cost of _latency_. The notion of resource burning cost represents the resource consumption of a user, where the resource could be computational power, memory, network bandwidth, or human capital [14]. This cost directly impacts the ability of the adversary to conduct a DDoS attack as every request requires the adversary to spend real-life resources. The notion of _latency_ cost captures the delay in time introduced in communication due to the act of puzzle solving. This cost indirectly impacts the adversarial intent by throttling the rate of adversarial requests reaching the server queue. Both costs ultimately cripple the adversarial capability to prolong an ongoing DDoS attack. ## IV CAPoW Implementation, Tool Instance Deployment, and Evaluation In this section, we present a deployment of CAPoW framework by implementing a single instance of each core component: context extractor, context-aware AI models, policy, and proof-of-work. First, the context extractor instance extracts selected request context attributes. Second, the extracted contexts are relayed to context-aware AI model instances where each base AI model is generated using server-side activity-logs. Then, the trained AI models calculate the deviation of selected contexts to produce a context score. Third, we provide three policy designs that maps context score to difficulty of PoW puzzle. Finally, we implemented a hash-based PoW puzzle instance which, over repeated trials, finds the constrained solution of assigned difficulty level. The costs inflicted due to the our puzzle instance are CPU-cycles (resource burning) and time spent (latency). For the purposes of validating our contribution via evaluation, we consider that the main cost injected is latency which, when injected, throttles the rate of adversarial requests. Now, we will describe our evaluation setup. We split the CIC-IDS2017 dataset [24] into test and train files where day \(1\) to day \(5\) (Monday - Thursday) is used to train the models and day \(6\) (Friday) is used to evaluate CAPoW. From day \(1\) to day \(5\), we deleted the attack traffic to learn normal activity pattern. Consider five users sending requests to the server \(\mathcal{U}_{1},\mathcal{U}_{2},\mathcal{U}_{3},\mathcal{U}_{4},\) and \(\mathcal{U}_{5}\). We fixed four user identifiers from day \(5\) to map the four above-mentioned users. Let the fifth user \(\mathcal{U}_{5}\), be mapped to the user identifier that performs DoS on day \(6\). Since, the user identifier in CIC-IDS2017 is IP address, let the mapped IP of user \(\mathcal{U}_{1},\mathcal{U}_{2},\mathcal{U}_{3},\mathcal{U}_{4},\) and \(\mathcal{U}_{5}\) is represented by 104.20.30.120, 83.66.160.22, 37.59.195.0, 104.16.84.55, and 205.174.165.73 respectively. 
Through our evaluation scenario, we provided evidence that CAPoW injects latency adaptively based on the calculated context score of user \(\mathcal{U}_{5}\), which throttles the adversarial requests and makes it expensive for an adversary to prolong a DDoS attack. ### _Context Extraction Instance_ The context extraction instance consumes the request packet and extracts context attributes from it. For our implementation, we select three context attributes: (1) IP address, (2) temporal activity, and (3) flow-level data. For evaluation, we used feature attributes of the CIC-IDS2017 dataset to serve as context attributes. The source IP feature becomes the IP address context, the timestamp feature becomes the temporal activity context, and the remaining features become the flow-level context. ### _Context-Aware AI Model Instance_ We propose an ensemble learner that consists of dedicated base AI models to learn individual contextual patterns. Each base AI model receives the context attributes from the context extractor as inputs. The model that (1) learns the IP address pattern is called dynamic attribute-based reputation (DAbR), (2) learns the temporal activity pattern is called the temporal activity model (TAM), and (3) learns the flow-level data pattern is called the flow-level model (Flow). Each model computes a context score in the range \([0,10]\). Context scores from the three AI models are combined using the argmax function. Next, we discuss the three base models, where the subsections are divided into model generation, context score calculation, and evaluation. **Dynamic Attribute-based Reputation (DAbR)**: We utilize DAbR [29] as the base AI model that learns context patterns for IP attributes. The AI model is generated by projecting malicious IP attributes from the Cisco Talos dataset [31] into Euclidean space. The dataset contains a list of malicious IP addresses and IP-related attributes [29]. The red dots in Figure 3(A) represent the projected malicious IP attributes, which form a cluster in Euclidean space. When a new request is evaluated, the IP attributes of the new request are projected in Euclidean space and a deviation is calculated as the Euclidean distance to the malicious cluster center. The calculated distance produces the context score for DAbR (\(\alpha\)). The multi-colored stars represent \(\mathcal{U}_{1},\mathcal{U}_{2},\mathcal{U}_{3},\mathcal{U}_{4}\), and \(\mathcal{U}_{5}\). Users \(\mathcal{U}_{1},\mathcal{U}_{2},\mathcal{U}_{3},\mathcal{U}_{4}\), and \(\mathcal{U}_{5}\) receive reputation scores of \(2.87\), \(1.16\), \(3.15\), \(2.18\), and \(2.98\) respectively. **Temporal Activity Model (TAM)**: We propose a temporal activity model (TAM) that learns the pattern of user request activity based on the time of arrival recorded in activity-logs. The model is generated using the previous \(t\) days of server activity-logs. The selected activity-logs can be either the previous \(t\) consecutive days or \(t\) specific days (as defined in the policy). The temporal model can be updated by _aging_ the older activity models (see Figure 2). The red rectangular blocks in Figure 3(B) represent an activity cluster per user. The term _active_ in practice can represent a user session or concurrent requests. When a request from user \(\mathcal{U}\) arrives at the server, the server finds the corresponding user activity cluster (\(\mathcal{U}_{CLS}\)) formed by the temporal activity model. The user activity cluster (\(\mathcal{U}_{CLS}\)) is a list of time intervals that represents the user's historical activity times.
The deviation in time is calculated as the distance between the two nearest clusters. From the CIC-IDS2017 dataset, the cluster formed for user \(\mathcal{U}_{1}\) shows that the user was active between \(130-140\) minutes, \(160-170\) minutes, \(600-670\) minutes, and \(720-760\) minutes. When user \(\mathcal{U}_{1}\) arrived at time \(700\) minutes on day \(6\), the two nearest clusters were \(600-670\) and \(720-760\) (see Figure 3(B)). This deviation is called \(\Delta_{local}\), the distance between the two nearest clusters. Finally, the context score for TAM is calculated as \[\beta=\frac{\Delta_{local}}{\Delta_{max}}\times 10 \tag{4}\] where \(\Delta_{max}\) represents the maximum possible deviation, which in our implementation is \(720\) minutes. Note that no cluster is found for \(\mathcal{U}_{5}\), hence the calculated context score is the highest in the range. **Flow-level Model (Flow)**: The flow-level model (Flow) learns network flow context patterns from activity-logs. The network flow attributes of a request packet are flow-related data, such as TTL, flow duration, payload size, protocol, etc. To generate the model, the \(n\)-dimensional flow attribute vectors are projected in Euclidean space. In Figure 3(C), the green dots represent projected network flow attributes of legitimate requests, and the red dots represent projected network flow attributes of malicious requests. When a new request is seen, its flow-level attributes are projected and the Euclidean distances to the malicious and legitimate clusters are computed. The context score is calculated as \[\gamma=\frac{\Delta_{l,m}}{\Delta_{max}}\times 10 \tag{5}\] where \(\Delta_{l,m}\) is the deviation from the malicious and legitimate clusters and \(\Delta_{max}\) is the maximum possible deviation in the flow-level context. ### _Policy Component Instance_ We constructed three policy instances, _policy \(1\)_, _policy \(2\)_, and _policy \(3\)_. These policies only set the mapping function from context scores to the PoW puzzle difficulty level. The context score is directly proportional to the difficulty of the PoW puzzle, i.e., an increase in contextual deviation leads to a higher-difficulty puzzle and more injected latency. **Policies \(1\) and \(2\): Linear mapping**. Assume a linear map function. Policy \(1\) maps \(f(\Phi)\to d\), where \(\Phi\in[0,10]\) is the range of the context score and \(d\in[0,10]\) is the range of difficulty levels of the PoW puzzle. Similar to policy \(1\), policy \(2\) maps \(f(\Phi)\to d\), where \(\Phi\in[0,10]\) and \(d\in[10,20]\). Note that the error bars in Figure 4 show the discrepancy in the time to solve a \(d\)-level PoW puzzle. As discussed in Section III-C, this discrepancy in solving time can be avoided by using memory-bound PoW puzzles. **Policy 3: Error range mapping.** For policy \(3\), we incorporated the error \(\epsilon\) of the context-aware AI model. Assume a linear map function. Policy \(3\) maps \(f(\Phi)\to d\), where \(\Phi\in[0,10]\) and \(d\in[0,10]\). The final difficulty level assigned is a difficulty value chosen at random in the interval \([\lceil d_{i}-\epsilon\rceil,\lceil d_{i}+\epsilon\rceil]\), where \(\epsilon=0.2\). Figure 4 shows that as contextual deviation increases, the amount of injected latency increases. Fig. 2: The figure shows that selected activity-logs (left) are used to generate a temporal activity model (TAM) (right). The illustration shows that out of four activity logs, currently only two activity logs are used to form the model (blue box).
The remaining activity-logs are aged in an attempt to keep the model up-to-date. ### _PoW Instance - Hash Function_ We discuss two sub-components of CAPoW that mimic a proof-of-work system: the _puzzle solver_ and the _puzzle verifier_. **Puzzle Solver.** The puzzle solver takes user identifiers as input, such as the arrival timestamp of the request packet (\(t\)) and the user IP address (\(u\)). Additionally, the solver takes a server seed value (\(\rho\)) to protect against pre-computation attacks. To this, an \(n\)-bit string is added, which the client modifies upon each hash function evaluation. We call this string the _nonce_, denoted by \(\eta\). The user repeatedly evaluates this input until it finds an output string \(Y=H(u||t||\rho||\eta)\) with \(d\) leading zeroes, where \(d\) is the difficulty level assigned to the request packet. The puzzle solver is a user-end component that is installed either in the browser [19] or at the kernel level. After solving, the user sends the nonce back to the server for verification. **Puzzle Verifier.** Puzzle verification is a server-side component that performs straightforward verification of the puzzle solution by performing one hash evaluation, i.e., \(Y^{\prime}=H(u||t||\rho||\eta)\). If the sent \(\eta\) value leads to the desired number of leading zeroes, then the solution is verified. **Summary of CAPoW implementation evaluation.** The context scores produced by the DAbR, TAM, and Flow models are combined to produce the final context score (\(\Phi\)). As discussed in Section III-C, some contexts might be more relevant than others to provide an attack-specific defense. We denote by \(w\) the weight capturing the significance of each context in the final context score. The weights for each AI model are fixed through the policy instance, as discussed in Section IV-C. \[\Phi=\arg\max(w_{1}\alpha,w_{2}\beta,w_{3}\gamma) \tag{6}\] where \(w_{1},w_{2},\) and \(w_{3}\) represent the weights associated with DAbR, TAM, and Flow respectively. Figure 3(D) illustrates the combined context score where \(w_{1},w_{2},\) and \(w_{3}\) are set to \(1\). For users \(\mathcal{U}_{1}\) and \(\mathcal{U}_{2}\), the final context score is decided by the Flow model. Similarly, for \(\mathcal{U}_{3},\mathcal{U}_{4},\) and \(\mathcal{U}_{5}\), the final score is decided by the TAM model. Using policy \(2\), user \(\mathcal{U}_{5}\) incurs \(\approx 300\)ms latency for a context score of \(8\), which is the highest latency introduced by CAPoW amongst all users. Notably, the evaluation performed using a simulated dataset might not reflect the worst-case efficiency of CAPoW since, in practice, user \(\mathcal{U}_{5}\) might not deviate in the temporal activity context. We now argue that deceiving multiple AI models is expensive for the adversary. In our implementation, user \(\mathcal{U}_{5}\) has to deceive three AI models to receive an easy PoW puzzle by obtaining lower context scores. User \(\mathcal{U}_{5}\) can receive a lower context score for DAbR by trivially spoofing the IP address (Assumption 2). To deceive TAM, the user can engineer the requests to arrive around the same times as observed during eavesdropping (Assumption 1). As reading or tracking flow-level data embedded in the request payload while eavesdropping is not possible (Assumption 1), the only way to deceive Flow is by sending multiple probe packets to land on a low context score. This is a costly approach, as security personnel may periodically select new contexts to improve the defensive posture of the organization.
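As an implementation aside (before returning to the cost argument below), the hash puzzle described in this subsection and the score combination of Eq. (6) can be sketched in a few lines. The choice of SHA-256, the separator, counting zero bits rather than hex digits, and the interpretation of Eq. (6) as keeping the largest weighted score are our illustrative assumptions, not CAPoW's exact implementation.

```python
import hashlib

def _leading_zero_bits(digest: bytes) -> int:
    """Number of leading zero bits in a hash digest."""
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return len(bits) - len(bits.lstrip("0"))

def solve_puzzle(u: str, t: str, rho: str, d: int) -> int:
    """Client side: search for a nonce eta such that H(u||t||rho||eta) has d leading zero bits."""
    eta = 0
    while True:
        digest = hashlib.sha256(f"{u}|{t}|{rho}|{eta}".encode()).digest()
        if _leading_zero_bits(digest) >= d:
            return eta
        eta += 1

def verify_puzzle(u: str, t: str, rho: str, eta: int, d: int) -> bool:
    """Server side: a single hash evaluation to check the submitted nonce."""
    digest = hashlib.sha256(f"{u}|{t}|{rho}|{eta}".encode()).digest()
    return _leading_zero_bits(digest) >= d

def combine_scores(alpha: float, beta: float, gamma: float, w=(1.0, 1.0, 1.0)) -> float:
    """Combine the per-model scores as in Eq. (6), keeping the largest weighted score."""
    return max(w[0] * alpha, w[1] * beta, w[2] * gamma)

# Example: a user whose combined context score is 8 is assigned difficulty d = 8 under policy 1.
eta = solve_puzzle(u="205.174.165.73", t="t0", rho="server_seed", d=8)
assert verify_puzzle(u="205.174.165.73", t="t0", rho="server_seed", eta=eta, d=8)
```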
Therefore, deceiving all AI models becomes expensive for the adversary. To validate contribution \(3\), we designed and evaluated an implementation instance of CAPoW, and provided policy designs to validate contribution \(2\). Finally, CAPoW ensures that malicious users incur higher latency than legitimate users based on the deviation in their context patterns, which prevents DDoS. Hence, we validate contribution 1 (see Section I). ## V Related Works In this section, we give an overview of the proof-of-work (PoW) literature on DDoS. Relevant to our work, we also discuss current advances in AI-assisted cybersecurity. Fig. 4: An evaluation of our three implemented policies. The median of 30 trials is reported for each reputation score. Fig. 3: The figure contains four sub-figures. (A) Representation of the trained DAbR in the 2-D plot. The red dot cluster represents malicious IP attributes. (B) Representation of the trained TAM. The stars represent the current time of arrival. (C) Representation of Flow. The green cluster represents legitimate flow-level attributes and the red cluster represents malicious ones. (D) The calculated context score after combining scores, where Model A is DAbR, Model B is TAM, and Model C is Flow. ### _Classical Proof-of-Work_ Dwork et al. [10] coined the term proof-of-work (PoW) when they proposed the use of cryptographic hash functions (also known as client puzzles) to combat unsolicited bulk emails (junk emails). Following that, Franklin et al. [11] proposed a lightweight website metering scheme in 1997 to prevent fraudulent web server owners from inflating their website's popularity. In 1999, Jakobsson et al. [16] proposed MicroMinting (originally proposed by Rivest et al. [30] as a digital payment scheme) as a candidate problem that can reuse the computational effort of solving the PoW puzzle. Later that year, Laurie et al. [18] argued that proof of work does not work in a spam setting. ### _Proof-of-Work as DoS defense_ Similar to spam emails, in DDoS it is significantly cheaper for the attacking party to launch a DDoS attack than for the defending party to defend an infrastructure. According to Arbor Networks, launching a DoS attack costs an average of $66 per attack and can cause damage to the victim of around $500 per minute [20]. Aura et al. [4] proposed the first client puzzle authentication protocol for a DoS-resilient system. Mankins et al. [21] investigated methods for tuning the amount of resource consumption required to access server resources based on client behavior, where the costs imposed can be either monetary or computational. In a similar vein, Wang and Reiter [33] investigate how clients can bid on puzzles through auctions. Nibibule et al. [22] proposed web traffic authentication as a replacement for CAPTCHA-based defenses. Wu et al. [36] proposed a software puzzle framework that prevents the adversary from gaining an advantage by using a GPU to solve puzzles. A framework was put forth by Dean et al. [8] to reduce DoS in TLS servers. A DoS variant was introduced by Wood et al. [35]. Certain PoW defenses against DoS are layer-specific. At the network layer, the proof-of-work system used by Parno et al. [26] prioritizes users who use more CPU time to solve puzzles. The Heimdall architecture, which can detect any change in network flow in routers, was introduced by Chen et al. [7]. When a change in network flow is identified for any new connection, a puzzle is generated and sent to the new user.
The difficulty of the computational challenges used in the context of DoS attacks on the transport layer was recently assessed using game theory by Noureddine et al. [23]. Walfish et al. [32] propose an alternative resource, communication capacity, as a defense against application-layer flood attacks. Other research has concentrated on incorporating PoW puzzles into practical browsing experiences [5, 6, 19]. ### _Automated DoS defense_ In this section, we revisit the literature on ensemble learning techniques for network traffic classification problems. Ensemble learning is a branch of supervised machine learning that aggregates the learning of multiple base learners to improve overall prediction accuracy [28]. As in network traffic classification problems, each base learner is trained to become an expert in a local area of the total feature space. Gaikwad et al. [12] proposed a bagging ensemble approach using REPTree base learners to improve classification over weaker AI models. Gupta et al. [2] suggested an IDS that uses ensemble learning to address the class imbalance problem. The ensemble learner uses three base learners. First, a deep neural network classifies normal and suspicious traffic. Second, eXtreme Gradient Boosting is used to identify major attacks. Third, a random forest is used to classify minor attacks. Zhou et al. [1] proposed a feature selection process using ensemble learning in two stages. The first stage involves feature reduction using the heuristic method CFS and the Bat Algorithm (BA). The second stage involves aggregating the C4.5 and Random Forest (RF) algorithms. Jabbar et al. [15] suggested an ensemble classifier that uses the Alternating Decision Tree (ADTree) and the k-Nearest Neighbor algorithm (kNN) as base AI models. Paulauskas and Auskalnis [27] proposed an ensemble learner that employs four base classifiers: J48, C5.0, Naive Bayes, and Partial Decision List (PART), to improve classification results over individual AI models. ## VI Conclusion and Future Work In this paper, we design and evaluate CAPoW, a context-aware AI-assisted PoW framework that protects critical servers against DDoS. The underlying defensive strategy involves adaptively imposing latency on malicious users. To achieve this functionality, our framework employs an AI model that takes the context attributes from the incoming user request packet as input. The AI model computes the deviation from normal activity patterns to output a context score. This score influences the difficulty level of a PoW puzzle that injects latency adaptively during communication. CAPoW ensures that the ability of a malicious user to prolong an attack is constrained, by adaptively introducing latency through PoW puzzles and compelling malicious users to expend more resources to complete an attack. For future work, different design variants of CAPoW can be configured to combat different DDoS attack types. PoW systems suffer from the inherent pitfall of resource wastage, which could be circumvented by replacing the PoW component with a proof-of-stake (PoS) component. Additionally, an alternate design could include an enhanced human-in-the-loop strategy that gives control of the framework to the security personnel deploying it.
2305.03694
A Solvable Model of Quantum Darwinism-Encoding Transitions
We propose a solvable model of Quantum Darwinism to encoding transitions -- abrupt changes in how quantum information spreads in a many-body system under unitary dynamics. We consider a random Clifford circuit on an expanding tree, whose input qubit is entangled with a reference. The model has a Quantum Darwinism phase, where one classical bit of information about the reference can be retrieved from an arbitrarily small fraction of the output qubits, and an encoding phase where such retrieval is impossible. The two phases are separated by a mixed phase and two continuous transitions. We compare the exact result to a two-replica calculation. The latter yields a similar ``annealed'' phase diagram, which applies also to a model with Haar random unitaries. We relate our approach to measurement induced phase transitions (MIPTs), by solving a modified model where an environment eavesdrops on an encoding system. It has a sharp MIPT only with full access to the environment.
Benoît Ferté, Xiangyu Cao
2023-05-05T17:14:57Z
http://arxiv.org/abs/2305.03694v3
# A Solvable Model of Quantum Darwinism-Encoding Transitions ###### Abstract We propose a solvable model of Quantum Darwinism to encoding transitions--abrupt changes in how quantum information spreads in a many-body system under unitary dynamics. We consider a random Clifford circuit on an expanding tree, whose input qubit is entangled with a reference. The model has a Quantum Darwinism phase, where one classical bit of information about the reference can be retrieved from an arbitrarily small fraction of the output qubits, and an encoding phase where such retrieval is impossible. The two phases are separated by a mixed phase and two continuous transitions. We compare the exact result to a two-replica calculation. The latter yields a similar "annealed" phase diagram, which applies also to a model with Haar random unitaries. We relate our approach to measurement induced phase transitions (MIPTs), by solving a modified model where an environment eavesdrops on an encoding system. It has a sharp MIPT only with full access to the environment. **Introduction** A pillar of modern quantum statistical mechanics [1; 2; 3] is the idea that unitary dynamics in a many-body system generically scrambles local quantum information. Eventually, it becomes highly nonlocal and impossible to retrieve, unless the observer has access to more than half of the system: the information has been encoded [4; 5; 6; 7]. Information scrambling and encoding have far-reaching consequences, for example on the quantum physics of black holes [8; 9; 10; 11; 12; 13]. Meanwhile, a basic premise of Quantum Darwinism (QD) [14; 15; 16; 17; 18; 19] is that a macroscopic environment, e.g., a measurement apparatus, _duplicates_ some classical information. Hence, the latter becomes retrievable in multiple small fractions of the environment. It is important to view the environment itself as a many-body quantum system. Indeed, the theory of QD aims to deduce the properties of the classical world from the core principles of quantum physics. According to QD, the duplication of information underlies the emergence of classical _objectivity_[20; 21; 22; 23]: being objective is being known to many. Quantum Darwinism and encoding are distinct ways of many-body quantum information spreading. Both behaviors emerge from the microscopic laws of quantum mechanics, just like both ferro- and para-magnetism can emerge from the Ising model. Ferro- and para-magnetism are distinct phases of matter, separated by a continuous phase transition. Can we view QD and encoding as stable phases of quantum information, and are they separated by some transition [24; 25]? In this Letter, we propose a solvable model of sharp phase transitions from QD to encoding. Our model is a random Clifford unitary circuit on an expanding tree, whose root forms a maximally entangled pair with a reference qubit [Fig. 1-(a)]. It has one parameter, analogue of the temperature in the Ising model. We then ask whether it is possible to retrieve information about the reference bit from a small fraction \(f<1/2\) of the tree's leaves (output qubits). We determine exactly the model's phase diagram [Fig. 1-(b)]. It has a stable QD (encoding, resp.) phase, where one may (may not, resp.) extract a classical bit of information about the reference bit. Unlike the Ising model, the encoding and QD phases are separated by an intermediate mixed phase and two continuous transitions. 
Another inspiration for this work is the measurement-induced phase transitions (MIPT) [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37], which are also "quantum information transitions". Figure 1: (a) Model for Quantum Darwinism-encoding transitions on an expanding tree with \(t=3\) generations. (b) Information on \(R\) is accessible to a small subsystem (squares) in the Quantum Darwinism (QD) phase, and inaccessible in the encoding phase. In the mixed phase, the information is accessible in a fraction of random realizations. (c) A tree model of an environment eavesdropping on an encoding dynamics. (d) A transition is only possible with full access to the environment \(f=1\). In the standard setup, a generic many-body unitary evolution is continually interrupted by local measurements. By tuning the measurement rate, one obtains a transition between a phase with volume-law entanglement entropy and one with area law. The MIPTs concern entanglement properties of random states drawn from the Born rule, and are delicate to study and observe [38; 39; 40]. Here, we consider a "Darwinian" MIPT setup, see Fig. 1-(c,d). We amend our model in the encoding phase with eavesdropping qubits [41], and ask whether they can extract a classical bit of information about the reference [42; 43; 44; 45; 34]. We show that a sharp transition occurs at a critical rate of eavesdropping, if and only if one has access to all the eavesdropping bits. **Model for QD-encoding transition** Consider a maximally entangled pair \((|0\rangle_{R}|0\rangle_{A}+|1\rangle_{R}|1\rangle_{A})/\sqrt{2}\) between a reference qubit \(R\) that will be kept intact, and the qubit \(A\) that will be the root of an expanding binary tree unitary circuit, see Fig. 1. The edges of the tree represent the world lines of the qubits constituting a growing system [46; 47]. At each branching, we recruit a new qubit with state \(|0\rangle\), and apply a CNOT gate to it and the input qubit: \[|i\rangle\otimes|0\rangle\;\xrightarrow{\mathrm{CNOT}}\;|i\rangle\otimes|i\rangle\,,\qquad i=0,1\,. \tag{1}\] Equivalently, the branching acts on the input qubit as an isometry \(\sum_{i=0,1}|ii\rangle\langle i|\). In addition, we apply a random one-body Clifford unitary (drawn uniformly) to each edge of the tree with probability \(p\), which is the parameter that interpolates between the QD (\(p=0\)) and encoding limits (\(p=1\)). After \(t\) time steps, there are \(N=2^{t}\) output qubits, from which we draw the subsystem \(F\) randomly: each output qubit belongs to \(F\) with probability \(f\). We denote by \(U\) the resulting unitary from \(A\) and the \(N-1\) recruits to the \(N\) output qubits. By construction, \(U\) is a Clifford unitary, which can be efficiently simulated [48; 49]. Here, we can analyze the knowledge of \(F\) on \(R\) analytically. For this, we recall the defining property of a Clifford unitary: it transforms any Pauli operator into a _single_ product of Paulis, known as a Pauli string. For example, a one-body Clifford unitary permutes \(X,Y\) and \(Z\), and choosing a random one-body Clifford amounts to picking one among the 6 permutations (here and below, a Pauli string will always be considered modulo a phase \(\pm 1,\pm i\)). Now, let us fix a realization of our model, and consider a Pauli string \(P\) acting on the subsystem \(F\). By definition, our Clifford unitary \(U\) will pull it back to \(Q=U^{\dagger}PU\), a Pauli string acting on \(A\) and the \(N-1\) recruits.
We then contract it with the recruit states \((|0\rangle\langle 0|)^{\otimes N-1}\) to obtain a Pauli operator \(O_{A}\) acting on \(A\). There are two possibilities: (1) if \(Q\) contains an \(X\) or \(Y\) acting on some recruit bit, \(O_{A}\) vanishes. (2) Otherwise, \(O_{A}\in\{I,Z,X,Y\}\) is identity or a Pauli. Repeating this for all Pauli strings acting on \(F\), we construct a set \(\mathbf{s}\subset\{I,X,Y,Z\}\) of all the nonzero operators \(O_{A}\) thus obtained. It is not hard to see that \(\mathbf{s}\) is a subgroup of \(\{I,X,Y,Z\}\) (modulo phase), i.e., \(\mathbf{s}\) must equal one of these: \[\mathbf{n} =\{I\},\mathbf{z}=\{I,Z\},\mathbf{x}=\{I,X\},\mathbf{y}=\{I,Y\},\] \[\mathbf{a} =\{I,Z,X,Y\}. \tag{2}\] Since \(RA\) is initially a maximally entangled pair, \(\mathbf{s}\) tells us exactly what information about \(R\) is accessible from \(F\). If \(\mathbf{s}=\mathbf{n}\), \(F\) is uncorrelated with \(R\). If \(\mathbf{s}=\mathbf{z},\mathbf{x}\) or \(\mathbf{y}\), \(F\) contains one classical bit of information on \(R\), i.e., exactly one Pauli string operator on \(F\) is perfectly correlated with \(Z\), \(X\) or \(Y\) on \(R\). If \(\mathbf{s}=\mathbf{a}\), one may distill from \(F\) a qubit maximally entangled with \(R\). **Phase diagram** The "order parameter" of our model is thus the probability distribution of \(\mathbf{s}\): \[\pi:=\left(\pi_{\mathbf{n}},\pi_{\mathbf{z}},\pi_{\mathbf{x}},\pi_{\mathbf{y }},\pi_{\mathbf{a}}\right), \tag{3}\] where \(\pi_{\mathbf{n}}\) is the probability that \(\mathbf{s}=\mathbf{n}\), and so on. We can compute \(\pi\) of a tree with \(t\) generations from one with \((t-1)\) using a "backward recursion" relation. The phase diagram of the model is determined by iterating this relation and analyzing the \(t\to\infty\) limit of \(\pi\) as a function of \(p\) (and \(f\)) [50]. As a result, we find three phases, see Fig. 1-(b) for a sketch and Fig. 2 for plots. When \(p<3/5\), we have a Quantum Darwinism (QD) phase, where for any \(f\in(0,1)\), we have \(\pi_{\mathbf{a}}\to 0,\pi_{\mathbf{n}}\to 0\), and \[\pi_{\mathbf{z}}\to\frac{3-6p+\sqrt{24(p-1)p+9}}{6-6p}\,,\pi_{x,y}\to\frac{1- \pi_{\mathbf{z}}}{2}\,. \tag{4}\] (\(\pi_{\mathbf{z}}\to 1\) as \(p\to 0\).) When \(p>3/4\), we have a _encoding_ phase, where \(\pi_{\mathbf{n}}\to 1\) if \(f<1/2\) and \(\pi_{\mathbf{a}}\to 1\) if \(f>1/2\). Finally, when \(3/5<p<3/4\), we have a mixed phase. For any \(f<1/2\), we have \(\pi_{\mathbf{a}}\to 0\) while \[\left(\pi_{\mathbf{n}},\pi_{\mathbf{z}},\pi_{\mathbf{x}},\pi_{\mathbf{y}} \right)\stackrel{{ f<\frac{1}{2}}}{{\to}}(1-u,\frac{u}{2},\frac{u}{4}, \frac{u}{4}),\,u=\frac{6-8p}{3-3p}\,. \tag{5}\] Here \(u\) is probability that we can retrieve one classical bit from the subsystem \(F\), and it decreases from \(1\) to \(0\) as \(p\) varies from \(3/5\) to \(3/4\). The solution for \(f>1/2\) is obtained from (5) by swapping \(\pi_{\mathbf{n}}\) and \(\pi_{\mathbf{a}}\). The existence of two transitions can be associated to the breaking/restoration of two symmetries of the model. First, a \(\mathbb{Z}_{2}\) symmetry acts by exchanging \(\pi_{\mathbf{n}}\leftrightarrow\pi_{\mathbf{a}}\), or swapping the subsystem \(F\) and its complement (without \(R\)) [51]. This symmetry is preserved by the circuit dynamics, weakly broken by the "boundary condition" (the choice of \(F\)), and restored only in the QD phase. 
Second, a \(\mathcal{S}_{3}\) symmetry acts by permuting \(\mathbf{x},\mathbf{y},\mathbf{z}\) (while leaving \(\mathbf{n}\) and \(\mathbf{a}\) invariant). This symmetry is preserved by the random one-body Clifford unitary, broken by the branching (1), and restored only in the encoding phase. The mixed phase breaks both symmetries. We numerically explored a few other Clifford variants of our model, and found the above two-stage scenario to be rather general [52]. **Mutual information and discord** It is useful to consider the mutual information between \(F\) and \(R\), defined as \(I(R,F)=H(R)+H(F)-H(RF)\), where \(H(X)=-\mathrm{Tr}[\rho_{X}\log_{2}\rho_{X}]\) is the von Neumann entropy. In our model, it is not hard to see that \(I(R,F)=\log_{2}|\mathbf{s}|\) is the dimension of \(\mathbf{s}\) as a vector space over \(\mathbb{Z}_{2}\). So, in the QD phase, \[I(R,F)\to 1\quad(0<f<1)\quad(\mathrm{QD})\,, \tag{6}\] with probability one [Fig. 2-(c)]. The independence of \(I\) on the fraction size \(f\), sometimes called the "objectivity plateau", is a hallmark of QD [16]. Meanwhile, in the encoding phase, \[I(R,F)\rightarrow\begin{cases}0&f<1/2\\ 2&f>1/2\end{cases}\quad\text{(encoding)} \tag{7}\] with probability one [Fig. 2-(d)], as expected from the Page curve [53]. In the mixed phase, we may wonder how the \(I\)-\(f\) curve looks like in a _single_ realization (with large \(t\)), where we increase \(f\) by gradually adding random qubits into \(F\). To address this question, we computed the joint distribution of \((\mathbf{s},\mathbf{t})\) corresponding to two random subsystems \(F\subset G\), and a same unitary \(U\)[50]. As a result, we found that a single-realization \(I\)-\(f\) curve is exactly the QD one (6) with probability \(u\) defined in (5), and exactly the encoding curve (7) with probability \(1-u\). In other words, the intermediate-phase ensemble is a mixture of QD and encoding realizations, both occurring with nonzero probability in the \(t\rightarrow\infty\) limit. In general, the mutual information between \(F\) and \(R\) does not correspond exactly to the amount of information that one can learn about \(R\) by observing \(F\)[54; 55]. The discrepancy is known as "quantum discord". Here, the discord vanishes whenever \(I(R,F)=1\), given the knowledge of the unitary circuit: we can construct the observable on \(F\) which reveals the classical bit of information on \(F\). Moreover, we can show that in the QD phase, one may still retrieve a bit of information from \(R\) even with access to only the \(Z\) operators on \(F\). **Two-replica analysis** A valuable tool to compute quantum information quantities is the "replica trick" [56; 57; 58; 59; 60; 61]. Yet, results of replica calculations can be subtle to interpret, especially if one is not able to take the appropriate replica number limit. Here, we perform a two-replica analysis of our model, and compare the result with the exact phase diagram. In the replica approach, the accessible quantity is the "annealed" mutual information \[I^{(2)}(F,R):=\log_{2}\mathrm{Tr}\left[\overline{\rho_{FR}^{2}}\right]-\log_ {2}\mathrm{Tr}\left[\overline{\rho_{F}^{2}}\right]+1\,, \tag{8}\] where \(\overline{[\dots]}\) denotes an average over \(U\) and \(F\). Note that \(I^{(2)}\) would equal to the average von Neumann mutual information if \(\mathrm{Tr}[\overline{\rho_{X}^{2}}]\) were equal to \(2^{-\overline{H(X)}}\) (which is wrong!). 
The annealed mutual information can be computed by random unitary circuit techniques [62; 63; 64; 51; 65; 37]; indeed, since the Clifford group is a 2-design [65], \(I^{(2)}(F,R)\) will not change if we replace a random one-body Clifford unitary with a Haar-random one in \(U(2)\). We find [50]: \[I^{(2)}(F,R)\rightarrow\begin{cases}0&f<1/2,p>p_{c}(f)\\ 2&f>1/2,p>p_{c}(f)\\ 1&p<p_{c}(f)\end{cases}\,. \tag{9}\] Figure 2: (a,b) \(p\)-dependence of the order parameters \(1-\pi_{\mathbf{n}}\) (identical to Fig. 1-b) and \(\pi_{\mathbf{z}}\). (c,d) Average mutual information as a function of the size fraction \(f\) in the QD and encoding phase (resp.). The finite \(t\) data are from numerical iteration of the backward recursion, and the \(t=\infty\) curves are the exact prediction [50]. Figure 3: Comparing the annealed mutual information \(I^{(2)}(F,R)\) (9) with the genuine one \(I(F,R)\). They disagree in the mixed phase (\(3/5<p<3/4\)) and in part of the encoding phase where \(3/4<p<p_{c}(f)\). \(p_{c}(f)\) (solid curves) is determined numerically using the recursion relation for \(I^{(2)}\)[50]. Here \(p_{c}(f)=p_{c}(1-f)\) is a threshold function that increases from \(p_{c}(0)=3/4\) to \(p_{c}(1/2)=\frac{3}{7}\left(2\sqrt{2}-1\right)=0.783\dots\), see Fig. 3. The "annealed phase diagram" of \(I^{(2)}\) is similar to the exact one, with however differences: \(I^{(2)}(F,R)=1\) in both QD and mixed phases, as well as a small part of the encoding phase. So, the annealed phase diagram is biased towards QD, which we qualitatively explain as follows. Both purity averages in (8) are dominated by realizations with small entanglement entropy in \(F\). Now, QD states tend to have low entanglement; indeed, the "perfect" QD-state (produced at \(p=1\)) is the GHZ state [66], \[\left|\text{GHZ}\right\rangle=\frac{1}{\sqrt{2}}(\left|0_{R}\underbrace{0 \dots 0}_{F}0\dots 0\right\rangle+\left|1_{R}\underbrace{1\dots,1}_{F}1 \dots 1\right\rangle)\,.\] It has one bit of entanglement entropy for any bipartition. In comparison, an encoding state has a volume law entropy. Hence, in both QD and mixed phases, QD realizations will dominate \(I^{(2)}\), which fails to distinguish them. In the encoding phase, a QD realization occurs with an exponentially small (in \(t\)) probability, yet its \(\text{Tr}[\rho_{F}^{2}]\) and \(\text{Tr}[\rho_{FR}^{2}]\) can be exponentially large compared to the typical encoding states. Hence, rare QD states in the encoding phase can dominate the annealed mutual information. **Relating to MIPT** The model studied so far departs from the standard paradigm of open quantum systems, where the "world" is split into a "system" and its "environment" [67]. Hence, the QD-encoding transitions differ fundamentally from phase transitions in open quantum systems, including the MIPTs. To relate our approach to MIPTs, we consider a variant of our model that complies with the standard paradigm. We take the above model at \(p=1\) (in the encoding phase), and let every qubit in the tree be subject to an eavesdropping event with probability \(r\). The eavesdropping consists again as a branching (1), of which one output bit is then emitted to the "environment", see Fig. 1-(c). After \(t\) generations, we have a system with \(N=2^{t}\) bits and an environment \(E\) of average size \(|E|=(2N-1)r\). 
Then we ask: can we retrieve information on \(R\) from a fraction \(F\) of the _environment_, with \(|F|/|E|=f^{2}\) Moreover, we only allow access to \(Z\) operators on \(F\) (allowing access to all operators results in an entirely different phase diagram [52]). Then, the order parameter (3) obeys a modified recursion relation [50]. In particular, \(\pi_{\mathbf{a}}=0\), and the probability of retrieving one classical bit equals \(1-\pi_{\mathbf{n}}\). We find that, when \(f=1\), there is a transition: \[\pi_{\mathbf{n}}\stackrel{{ f=1}}{{\rightarrow}}\begin{cases} \frac{4r^{2}-8r+1}{1-r}&r<r_{c}\\ 0&r>r_{c}\,,\end{cases} \tag{10}\] where \(r_{c}=\frac{1}{2}\left(2-\sqrt{3}\right)\approx 0.134\). This transition is equivalent to the standard MIPT. Indeed, consider projectively measuring \(Z\) on all the qubits of \(F\). If \(\mathbf{s}=\mathbf{n}\), the measurements reveal nothing about \(R\), which remains entangled with unmeasured bits. Otherwise, if say \(\mathbf{s}=\mathbf{x}\), the measurements will project the qubit \(R\) to an eigenstate of \(X\), disentangling it. Therefore, \(r>r_{c}\) is the area-law (purified) phase and \(r<r_{c}\) the volume-law (encoded) phase [44; 45; 34; 42]. Note that the transition exists _only_ at \(f=1\), where almost all the environment is accessible. For any \(f<1\), \(\pi_{\mathbf{n}}(t\rightarrow\infty)\) depends smoothly on \(r\) and never vanishes. This is after all reasonable from the MIPT point of view: we need all the measurement outcomes to construct the quantum trajectory state. In contrast, the Quantum Darwinism-encoding transition is precisely about the information available in small samples. **Outlook** We introduced a solvable model for Quantum Darwinism-encoding transitions (QDETs). They are a new type of quantum information phase transitions under unitary evolution, whose order parameter is the knowledge of a small subsystem on a reference bit. It will be interesting to identify QDETs in finite-dimensional (\(d<\infty\)) systems and characterize their universality classes; our tree model is equivalent to an all-to-all (\(d=\infty\)) circuit, and has simple mean-field critical exponents [68]. In particular, it may be nontrivial to establish a QD phase in a \(d<\infty\) geometry, which hinders the fast spread of information [69; 70; 71]; an expanding (de Sitter) geometry could be necessary. Another important question concern QDETs in non-Clifford models [41; 72; 24], in particular, whether the mixed phase is generic. Indeed, the knowledge of \(F\) on \(R\) is in general not "quantized" as in a Clifford model. This will affect the nature of the order parameter, and make even the mean-field theory more involved [46; 47; 52]. Finally, encoding is proper to the quantum realm, and Quantum Darwinism is a theory of the emergence of the classical. Thus, we hope to shed light on the quantum-classical transition through the lens of dynamical critical phenomena. We thank Andrea De Luca for helpful comments on the manuscript. X.C. acknowledges support from CNRS and ENS, and thanks LPTMS for hospitality.
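As a purely numerical footnote added here for convenience (it is not part of the paper), the closed-form \(t\to\infty\) expressions quoted in Eqs. (4), (5) and (10) are straightforward to evaluate:

```python
# Convenience evaluation of the asymptotic (t -> infinity) formulas quoted above.
import math

def pi_z_qd(p: float) -> float:
    """Eq. (4): pi_z in the QD phase, valid for 0 <= p < 3/5."""
    return (3 - 6 * p + math.sqrt(24 * p * (p - 1) + 9)) / (6 - 6 * p)

def u_mixed(p: float) -> float:
    """Eq. (5): probability u of retrieving one classical bit, for 3/5 < p < 3/4."""
    return (6 - 8 * p) / (3 - 3 * p)

def pi_n_eavesdrop(r: float) -> float:
    """Eq. (10): pi_n for the eavesdropping model at f = 1."""
    r_c = 0.5 * (2 - math.sqrt(3))
    return (4 * r**2 - 8 * r + 1) / (1 - r) if r < r_c else 0.0

# Sanity checks matching the limits discussed in the text.
assert abs(pi_z_qd(0.0) - 1.0) < 1e-12    # pi_z -> 1 as p -> 0
assert abs(u_mixed(3 / 5) - 1.0) < 1e-12  # u -> 1 at the first transition
assert abs(u_mixed(3 / 4)) < 1e-12        # u -> 0 at the second transition
```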
2302.08766
A Lower Bound and a Near-Optimal Algorithm for Bilevel Empirical Risk Minimization
Bilevel optimization problems, which are problems where two optimization problems are nested, have more and more applications in machine learning. In many practical cases, the upper and the lower objectives correspond to empirical risk minimization problems and therefore have a sum structure. In this context, we propose a bilevel extension of the celebrated SARAH algorithm. We demonstrate that the algorithm requires $\mathcal{O}((n+m)^{\frac12}\varepsilon^{-1})$ oracle calls to achieve $\varepsilon$-stationarity with $n+m$ the total number of samples, which improves over all previous bilevel algorithms. Moreover, we provide a lower bound on the number of oracle calls required to get an approximate stationary point of the objective function of the bilevel problem. This lower bound is attained by our algorithm, making it optimal in terms of sample complexity.
Mathieu Dagréou, Thomas Moreau, Samuel Vaiter, Pierre Ablin
2023-02-17T09:04:18Z
http://arxiv.org/abs/2302.08766v4
# A Lower Bound and a Near-Optimal Algorithm ###### Abstract Bilevel optimization problems, which are problems where two optimization problems are nested, have more and more applications in machine learning. In many practical cases, the upper and the lower objectives correspond to empirical risk minimization problems and therefore have a sum structure. In this context, we propose a bilevel extension of the celebrated SARAH algorithm. We demonstrate that the algorithm requires \(\mathcal{O}((n+m)^{\frac{1}{2}}\varepsilon^{-1})\) gradient computations to achieve \(\varepsilon\)-stationarity with \(n+m\) the total number of samples, which improves over all previous bilevel algorithms. Moreover, we provide a lower bound on the number of oracle calls required to get an approximate stationary point of the objective function of the bilevel problem. This lower bound is attained by our algorithm, which is therefore optimal in terms of sample complexity. ## 1 Introduction In the last few years, bilevel optimization has become an essential tool for the machine learning community thanks to its numerous applications. Among them, we can cite hyperparameter selection (Bengio, 2000; Pedregosa, 2016; Franceschi et al., 2017; Lorraine et al., 2020), implicit deep learning (Bai et al., 2019), neural architecture search (Liu et al., 2019; Zhang et al., 2021), data augmentation (Li et al., 2020; Rommel et al., 2022) and meta-learning (Franceschi et al., 2018; Rajeswaran et al., 2019) to name a few. In bilevel optimization, we are interested in minimizing a function under the constraint that one variable minimizes another function. This can be formalized as follows \[\min_{x\in\mathbb{R}^{d}}h(x)=F(z^{*}(x),x),\quad\text{subject to }z^{*}(x)\in \operatorname*{arg\,min}_{z\in\mathbb{R}^{p}}G(z,x)\enspace. \tag{1}\] The function \(F\) is called the outer function and the function \(G\) is the inner function. Likewise, we refer to \(z\) as the inner variable and \(x\) as the outer variable. A strategy to solve bilevel problems consists in using implicit differentiation that provides the following expression for the gradient of \(h\) \[\nabla h(x)=\nabla_{2}F(z^{*}(x),x)+\nabla_{21}^{2}G(z^{*}(x),x)v^{*}(x) \tag{2}\] where \(v^{*}(x)\) is the solution of a linear system \[v^{*}(x)=-\left[\nabla_{11}^{2}G(z^{*}(x),x)\right]^{-1}\nabla_{1}F(z^{*}(x), x)\enspace. \tag{3}\] When we have exact access to \(z^{*}(x)\), solving (1) boils down to a smooth nonconvex optimization problem which can be solved using solvers for single-level problems. However, computing exactly \(z^{*}(x)\) and \(v^{*}(x)\) is often too costly, and implicit differentiation based algorithms rely on approximations of \(z^{*}(x)\) and \(v^{*}(x)\) rather than their exact value. Depending on the precision of the different approximations, we are not ensured that the approximate gradient used is a descent direction. Results by Pedregosa (2016) characterized the approximation quality for \(z^{*}(x)\) and \(v^{*}(x)\) required to ensure convergence, opening the door to various algorithms to solve bilevel optimization problems (Lorraine et al., 2020; Ramzi et al., 2022). 
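To make Equations (2) and (3) concrete, the following is a small numerical sketch on a toy problem of our own choosing (a quadratic inner objective, so that \(z^{*}(x)\), \(v^{*}(x)\) and all derivatives are available in closed form); it only illustrates how the implicit gradient is assembled, not any particular algorithm from the paper.

```python
# Toy bilevel problem (our example):
#   G(z, x) = 0.5 * ||A z - x||^2 + 0.5 * lam * ||z||^2   (strongly convex in z)
#   F(z, x) = 0.5 * ||z - b||^2                            (no direct dependence on x)
import numpy as np

rng = np.random.default_rng(0)
p, d, lam = 5, 3, 0.1
A = rng.standard_normal((d, p))
b = rng.standard_normal(p)

def value_function(x):
    """h(x) = F(z*(x), x), computed directly for the finite-difference check."""
    z_star = np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ x)
    return 0.5 * np.sum((z_star - b) ** 2)

def hypergradient(x):
    """Implicit gradient of h, assembled from Eqs. (2) and (3)."""
    H = A.T @ A + lam * np.eye(p)              # nabla_{11}^2 G (constant for this toy G)
    z_star = np.linalg.solve(H, A.T @ x)       # inner minimizer z*(x)
    v_star = -np.linalg.solve(H, z_star - b)   # Eq. (3), with nabla_1 F(z, x) = z - b
    # Eq. (2): nabla h = nabla_2 F + nabla_{21}^2 G v*. Here nabla_2 F = 0, and the
    # mixed second derivative nabla_{21}^2 G equals -A (a d x p matrix) for this G.
    return -A @ v_star

x0 = rng.standard_normal(d)
eps = 1e-6
fd = np.array([(value_function(x0 + eps * e) - value_function(x0 - eps * e)) / (2 * eps)
               for e in np.eye(d)])
assert np.allclose(fd, hypergradient(x0), atol=1e-5)  # matches a finite-difference check
```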
In many applications of interest, the functions \(F\) and \(G\) correspond to Empirical Risk Minimization (ERM), and as a consequence have a finite sum structure \[F(z,x)=\frac{1}{m}\sum_{j=1}^{m}F_{j}(z,x),\quad G(z,x)=\frac{1}{n}\sum_{i=1}^ {n}G_{i}(z,x)\enspace.\] For instance, in hyperparameter selection, \(F\) is the validation loss which is an average on the validation set and \(G\) is the training loss which is an average on the training set. In single-level optimization, the finite sum structure has been widely leveraged to produce fast first-order algorithms that provably converge faster than gradient descent. These methods are the cornerstone of many successful machine learning applications. Among these algorithms, we can cite stochastic methods such as stochastic gradient descent (Robbins and Monro, 1951; Bottou, 2010) and its variance-reduced variants such as SAGA (Defazio et al., 2014), STORM (Cutkosky and Orabona, 2019) or SPIDER/SARAH (Fang et al., 2018; Nguyen et al., 2017) that use only a handful of samples at a time to make progress. In order to get faster methods than full-batch approaches, it is natural to extend these methods to the bi-level setting. The main obstacle comes from the difficulty of obtaining stochastic approximations of \(\nabla h(x)\) because of its structure (2). In the literature, several strategies have been proposed to overcome this obstacle, and some works demonstrate that stochastic implicit differentiation based algorithms for solving (1) have the same complexity as single-level analogous algorithms. For instance, ALSET from (Chen et al., 2021) and SOBA from Dagreou et al. (2022) have the same convergence rate as stochastic gradient descent for smooth nonconvex single-level problems Ghadimi and Lan (2013); Bottou et al. (2018). Furthermore, Dagreou et al. (2022) show that SABA, an adaptation of SAGA algorithm (Defazio et al., 2014), has a sample complexity in \(\mathcal{O}((n+m)^{\frac{3}{5}}\varepsilon^{-1})\) which is analogous to the sample complexity of SAGA for nonconvex single-level problems (Reddi et al., 2016). However, in classical single-level optimization, it is known that neither of these algorithms is optimal: the SARAH algorithm (Nguyen et al., 2017) achieves a better sample complexity of \(\mathcal{O}(m^{\frac{1}{2}}\varepsilon^{-1})\) with \(m\) the number of samples. Furthermore, this algorithm is _near-optimal_ (i.e. optimal up to constant factors), because the lower bound for single-level non-convex optimization is also \(\mathcal{O}(m^{\frac{1}{2}}\varepsilon^{-1})\) as proved by Zhou and Gu (2019). It is natural to ask if we can extend these results to bilevel optimization: _Are the optimal complexity bounds for solving bilevel optimization the same as in single-level optimization?_ ContributionsIn Section 2, we introduce SRBA, an adaptation of the SARAH algorithm to the bilevel setting. We then demonstrate in Section 3 that, similarly to the single-level setting, it requires \(\mathcal{O}\left((n+m)^{\frac{1}{2}}\varepsilon^{-1}\vee(n+m)\right)\) calls to oracles to reach an \(\varepsilon\)-stationary point. This is therefore an upper bound on the complexity of solving bilevel empirical risk minimization (ERM) problems. As shown in Table 1, it achieves the best-known complexity in the regime \(n+m\lesssim\mathcal{O}(\varepsilon^{-2})\). In Section 4, we analyze the lower bounds for such problems. 
We demonstrate that the number of iterations required to reach an \(\varepsilon\)-stationary point (see Definition 3.1) is at least \(\Omega(m^{\frac{1}{2}}\varepsilon^{-1})\), thereby matching the previous upper bound in the case where \(n\asymp m\) and \(\varepsilon\leq m^{-\frac{1}{2}}\). SRBA is therefore near-optimal in that regime. Even though our main contribution is theoretical, we illustrate the numerical performances of the algorithm in Section 5. **Related work** There are several strategies to solve (1) in a stochastic fashion. They can be separated into two groups: iterative differentiation algorithms (ITD) and approximate implicit differentiation algorithms (AID). On the one hand, in ITD algorithms, the Jacobian of \(z^{*}\) is estimated by differentiating the different steps used to compute an approximation of \(z^{*}\). On the other hand, AID algorithms leverage the implicit gradient given by (2), replacing \(z^{*}\) and \(v^{*}\) by some approximations \(z\) and \(v\). In the class of ITD algorithms, Maclaurin et al. (2015) propose to approximate the Jacobian of the solution of the inner problem by differentiating through the iterations of SGD with momentum. The complexity of the hypergradient computation in ITD solvers is studied in Franceschi et al. (2017); Grazzi et al. (2020); Ablin et al. (2020). For AID algorithms, Ghadimi and Wang (2018); Chen et al. (2021); Ji et al. (2021) propose to perform several SGD steps in the inner problem and then use Neumann approximations to approximate \(v^{*}(x)\) defined in (3). A method consisting of alternating steps in the inner and outer variables was proposed in Hong et al. (2021). These methods can be improved by using a warm start strategy for the inner problem Ji et al. (2021); Chen et al. (2021) and for the linear system Arbel and Mairal (2022). Some works elaborate on these ideas by adapting variance reduction methods like STORM Khanduri et al. (2021); Yang et al. (2021) or SAGA Dagreou et al. (2022). We take a similar approach and extend the SARAH variance reduction method to the bilevel setting. Finally, recent works propose to approximate the Jacobian of \(z^{*}\) by stochastic finite differences Sow et al. (2022) or Bregman divergence-based methods Huang et al. (2022). In single-level optimization, the problem of finding complexity lower bounds for optimization problems has been widely studied since the seminal work of Nemirovsky and Yudin (1983). On the one hand, Agarwal and Bottou (2015) provided a lower bound for minimizing strongly convex and smooth finite sums with deterministic algorithms that have access to individual gradients. These results were extended to randomized algorithms for (strongly) convex and possibly nonsmooth finite sum objectives by Woodworth and Srebro (2016). On the other hand, Carmon et al. (2017) provided a lower bound for minimizing nonconvex functions with deterministic and randomized algorithms. The nonconvex finite sum case is treated in Fang et al. (2018); Zhou and Gu (2019). In the bilevel case, Ji and Liang (2023) showed a lower bound for deterministic, full-batch algorithms. However, this result is restricted to the case where the value function \(h\) is convex or strongly convex, which is not the case with most ML-related bilevel problems. Our results are instead in a non-convex setting. **Notation** The quantity \(A_{\bullet}\) refers to \(A_{z}\), \(A_{v}\), or \(A_{x}\), depending on the context.
If \(f:\mathbb{R}^{p}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) is a twice differentiable function, we denote \(\nabla_{i}f(z,x)\) its gradient w.r.t. its \(i^{\text{th}}\) variable. Its Hessian with respect to \(z\) is denoted \(\nabla^{2}_{11}f(z,x)\ \in\ \mathbb{R}^{p\times p}\) and its cross derivative matrix \(\begin{pmatrix}\frac{\partial^{2}f}{\partial z_{i}\partial x_{j}}\end{pmatrix} _{\begin{subarray}{c}i\in[p]\\ j\in[d]\end{subarray}}\) is denoted \(\nabla^{2}_{12}f(z,x)\ \in\ \mathbb{R}^{p\times d}\). We denote \(\Pi_{\mathcal{C}}\) the projection on a closed convex set \(\mathcal{C}\). \begin{table} \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & Sample complexity & Stochastic setting & \(F\) & \(G\) \\ \hline **StocBiO** Ji et al. (2021) & \(\mathcal{O}(\varepsilon^{-2})\) & General expectation & \(\mathcal{C}^{1,1}_{L}\) & SC and \(\mathcal{C}^{2,2}_{L}\) \\ \hline **AmIGO** Arbel and Mairal (2022) & \(\mathcal{O}(\varepsilon^{-2})\) & General expectation & \(\mathcal{C}^{1,1}_{L}\) & SC and \(\mathcal{C}^{2,2}_{L}\) \\ \hline **MRBO** Yang et al. (2021) & \(\mathcal{O}(\varepsilon^{-\frac{3}{2}})\) & General expectation & \(\mathcal{C}^{1,1}_{L}\) & SC and \(\mathcal{C}^{2,2}_{L}\) \\ \hline **VRBO** Yang et al. (2021) & \(\mathcal{\tilde{O}}(\varepsilon^{-\frac{3}{2}})\) & General expectation & \(\mathcal{C}^{1,1}_{L}\) & SC and \(\mathcal{C}^{2,2}_{L}\) \\ \hline **SABA**Dagreou et al. (2022) & \(\mathcal{O}((n+m)^{\frac{3}{5}}\varepsilon^{-1})\) & Finite sum & \(\mathcal{C}^{2,2}_{L}\) & SC and \(\mathcal{C}^{3,3}_{L}\) \\ **SRBA** & \(\mathcal{O}((n+m)^{\frac{1}{5}}\varepsilon^{-1})\) & Finite sum & \(\mathcal{C}^{2,2}_{L}\) & SC and \(\mathcal{C}^{3,3}_{L}\) \\ \hline \end{tabular} \end{table} Table 1: Comparison between the sample complexities and the Assumptions of some stochastic algorithms for bilevel optimization. It corresponds to the number of calls to gradient, Hessian-vector products and Jacobian-vector product sufficient to get an \(\varepsilon\)-stationary point. The tilde on the \(\mathcal{\tilde{O}}\) hide a factor \(\log(\varepsilon^{-1})\). ”SC” means ”strongly-convex”. \(\mathcal{C}^{p,p}_{L}\) means \(p\)-times differentiable with Lipschitz \(k\)th order derivatives for \(k\leq p\). SRBA: a Near-Optimal Algorithm for Bilevel Empirical Risk Minimization In this section, we introduce SRBA (Stochastic Recursive Bilevel Algorithm), a novel algorithm for bilevel empirical risk minimization which is provably near-optimal for this problem. This algorithm is inspired by the algorithms SPIDER (Fang et al., 2018) and SARAH (Nguyen et al., 2017, 2022) which are known for being near-optimal algorithms for nonconvex finite sum minimization problems. It relies on a recursive estimation of directions of interest, which is restarted periodically. Proofs are deferred to the appendix. ### Assumptions Before presenting our algorithm, we formulate several Assumptions on the functions \(F\) and \(G\). As for SARAH, the regularity assumptions are made on the individual functions \((G_{i})_{1\leq i\leq n}\) and \((F_{j})_{1\leq j\leq m}\) rather than on the empirical means \(G\) and \(F\). In Assumption 2.1 and Assumption 2.2, we state the regularity needed on the outer function \(F\) and inner function \(G\) respectively. **Assumption 2.1** (Regularity of \(F\)).: For all \(j\in[m]\), the function \(F_{j}\) is twice differentiable. 
The function \(F_{j}\) is \(L_{0}^{F}\)-Lipschitz continuous, its gradient \(\nabla F_{j}\) is \(L_{1}^{F}\)-Lipschitz continuous and the Hessian \(\nabla^{2}F_{j}\) is \(L_{2}^{F}\)-Lipschitz continuous. **Assumption 2.2** (Regularity of \(G\)).: For all \(i\in[n]\), The function \(G_{i}\) is three times differentiable. Its first, second, and third order derivatives are respectively \(L_{1}^{G}\)-Lipschitz continuous, \(L_{2}^{G}\)-Lipschitz continuous, and \(L_{3}^{G}\)-Lipschitz continuous. For any \(x\in\mathbb{R}^{d}\), the function \(G_{i}(\,.\,,x)\) is \(\mu_{G}\)-strongly convex. The strong convexity and the smoothness with respect to \(z\) hold for instance when we consider an \(\ell^{2}\)-regularized logistic regression problem with non-separable data. These regularity assumptions up to first-order for \(F\) and second-order for \(G\) are standard in the stochastic bilevel literature (Arbel and Mairal, 2022; Ji et al., 2021; Yang et al., 2021). The second-order regularity for \(F\) and third-order regularity for \(G\) are necessary for the analysis of the dynamics of \(v\), as it is the case in Dagreou et al. (2022). As shown in Ghadimi and Wang (2018, Lemma 2.2), these assumptions are sufficient to get the smoothness of \(h\), which is a fundamental property to get a descent on \(h\). **Proposition 2.3** (Smoothness of the value function).: _Under Assumptions 2.1 and 2.2, the function \(h\) is \(L^{h}\) smooth for some \(L^{h}>0\) which is precised in Appendix A.2._ Another consequence of Assumptions 2.1 and 2.2 is the boundedness of the function \(v^{*}\). **Proposition 2.4** (Boundedness of \(v^{*}\)).: _Assume that Assumptions 2.1 and 2.2 hold. Then, for \(R=\frac{L_{0}^{F}}{\mu_{G}}\) it holds that for any \(x\in\mathbb{R}^{d}\), we have \(\|v^{*}(x)\|\leq R\)._ In what follows we denote \(\Gamma\) the closed ball centered in \(0\) with radius \(R\) and \(\Pi_{\Gamma}\) the projection onto \(\Gamma\). Moreover, for \((z,v,x)\in\mathbb{R}^{p}\times\mathbb{R}^{p}\times\mathbb{R}^{d}\), we denote \(\Pi(z,v,x)=(z,\Pi_{\Gamma}(v),x)\). ### Hypergradient Approximation The gradient of \(h\) given by (2) is intractable in practice because it requires the perfect knowledge of \(z^{*}(x)\) and \(v^{*}(x)\) which are usually costly to compute, for instance when the inner problem is ill-conditioned. As classically done in the stochastic bilevel literature (Ji et al., 2021; Arbel and Mairal, 2022; Li et al., 2022), \(z^{*}(x)\) and \(v^{*}(x)\) are replaced by approximate surrogate variables \(z\) and \(v\). The variable \(z\) is typically the output of one or several steps of an optimization procedure applied to \(G(\,.\,,x)\). The variable \(v\) can be computed by using Neumann approximations or doing some optimization steps on the quadratic \(v\mapsto\frac{1}{2}v^{\top}\nabla_{11}^{2}G(z,x)v+\nabla_{1}F(z,x)^{\top}v\). We consider the approximate hypergradient given by \[D_{x}(z,v,x)=\nabla_{21}^{2}G(z,x)v+\nabla_{2}F(z,x)\enspace.\] The motivation behind this direction is that if we take \(z=z^{*}(x)\) and \(v=v^{*}(x)\), we recover the true gradient, that is \(D_{x}(z^{*}(x),v^{*}(x),x)=\ \nabla h(x)\). Proposition 2.5 from (Dagreou et al., 2022, Lemma 3.4) controls the hypergradient approximation error with the distances between \(z\) and \(z^{*}(x)\) and between \(v\) and \(v^{*}(x)\). **Proposition 2.5** (Hypergradient approximation error).: _Let \(x\ \in\ \mathbb{R}^{d}\). 
Assume that \(F\) is differentiable and \(L_{1}^{F}\) smooth with bounded gradient, \(G\) is twice differentiable with Lipschitz gradient and Hessian and \(G(\,x)\) is \(\mu_{G}\)-strongly convex. Then there exists a constant \(L_{x}\) such that_ \[\|D_{x}(z,v,x)-\nabla h(x)\|^{2}\!\leq\!L_{x}^{2}(\|z-z^{*}(x)\|^{2}+\|v-v^{*} (x)\|^{2}).\] Thus, it is natural to make \(z\) and \(v\) move towards their respective equilibrium values which are given by \(z^{*}(x)\) and \(v^{*}(x)\). As a consequence, we also introduce the directions \(D_{z}\) and \(D_{x}\) as follows \[D_{z}(z,v,x) =\nabla_{1}G(z,x)\enspace,\] \[D_{v}(z,v,x) =\nabla_{11}^{2}G(z,x)v+\nabla_{1}F(z,x)\enspace.\] The interest of considering the directions \(D_{z}\) and \(D_{v}\) is expressed in Proposition 2.6. **Proposition 2.6** (First-order conditions).: _Assume that \(G\) is strongly convex with respect to its first variable. Then for any \(x\in\mathbb{R}^{d}\), it holds \(D_{z}(z^{*}(x),v^{*}(x),x)=0\) and \(D_{v}(z^{*}(x),v^{*}(x),x)=0\)._ The directions \(D_{z}\), \(D_{v}\), and \(D_{x}\) can be written as sums over the samples. Hence, as mentioned by Dagreou et al. (2022), following these directions enables us to adapt any classical algorithm suited for single-level finite sum minimization to bilevel finite sum minimization. In what follows, for two indices \(i\in[n]\) and \(j\in[m]\), we consider the sampled directions \(D_{z,i,j}\), \(D_{v,i,j}\) and \(D_{x,i,j}\) defined by \[D_{z,i,j}(z,v,x) =\nabla_{1}G_{i}(z,x) \tag{4}\] \[D_{v,i,j}(z,v,x) =\nabla_{11}^{2}G_{i}(z,x)v+\nabla_{1}F_{j}(z,x)\] (5) \[D_{x,i,j}(z,v,x) =\nabla_{21}^{2}G_{i}(z,x)v+\nabla_{2}F_{j}(z,x)\enspace. \tag{6}\] When \(i\) and \(j\) are randomly sampled uniformly, these directions are unbiased estimators of the true directions \(D_{z}\), \(D_{v}\), and \(D_{x}\). Nevertheless, as in Nguyen et al. (2017), we use them to recursively build biased estimators of the directions that enable fast convergence. ### SRBA: Stochastic Recursive Bilevel Algorithm We propose SRBA which is a combination of the idea of recursive gradient coming from (Fang et al., 2018, Nguyen et al., 2022) and the framework proposed in (Dagreou et al., 2022). The SRBA algorithm relies on a recursive estimation of each direction \(D_{z}\), \(D_{v}\), \(D_{x}\) which is updated following the same strategy as SARAH. Let us denote by \(\rho\) the step size of the update for the variables \(z\) and \(v\) and \(\gamma\) the step size for the update of the variable \(x\). We use the same step size for \(z\) and \(v\) because the problems of minimizing the inner function \(G\) and solving the linear system (3) have the same conditioning driven by \(\nabla_{11}^{2}G\). For simplicity, we denote the joint variable \(\mathbf{u}=(z,v,x)\) and the joint directions weighted by the step sizes \(\mathbf{\Delta}\ =\ (\rho D_{z},\rho D_{v},\gamma D_{x})=(\mathbf{\Delta}_{z}, \mathbf{\Delta}_{v},\mathbf{\Delta}_{x})\). At iteration \(t\), the estimate direction \(\mathbf{\Delta}\) is initialized by computing full batch directions: \[\mathbf{\Delta}^{t,0}=(\rho D_{z}(\mathbf{\tilde{u}}^{t}),\rho D_{v}(\mathbf{ \tilde{u}}^{t}),\gamma D_{x}(\mathbf{\tilde{u}}^{t}))\] and a first update is performed by moving from \(\mathbf{\tilde{u}}^{t}\) in the direction \(-\mathbf{\Delta}^{t,0}\). As done in Hu et al. (2022), we project the variable \(v\) onto \(\Gamma\) to leverage the boundedness property of \(v^{*}\). 
Then, during the \(k\)th iteration of an inner loop of size \(q-1\), two indices \(i\in[n]\) and \(j\in[m]\) are sampled and the estimate directions are updated according to Equations (7) to (9) \[\mathbf{\Delta}_{z}^{t,k} =\rho(D_{z,i,j}(\mathbf{u}^{t,k})-D_{z,i,j}(\mathbf{u}^{t,k-1}))+\mathbf{\Delta}_{z}^{t,k-1} \tag{7}\] \[\mathbf{\Delta}_{v}^{t,k} =\rho(D_{v,i,j}(\mathbf{u}^{t,k})-D_{v,i,j}(\mathbf{u}^{t,k-1}))+\mathbf{\Delta}_{v}^{t,k-1}\] (8) \[\mathbf{\Delta}_{x}^{t,k} =\gamma(D_{x,i,j}(\mathbf{u}^{t,k})-D_{x,i,j}(\mathbf{u}^{t,k-1}))+\mathbf{\Delta}_{x}^{t,k-1} \tag{9}\] where the sampled directions \(D_{z,i,j}\), \(D_{v,i,j}\) and \(D_{x,i,j}\) are defined by Equations (4) to (6). Then the joint variable \(\mathbf{u}\) is updated by \[\mathbf{u}^{t,k+1}=\Pi(\mathbf{u}^{t,k}-\mathbf{\Delta}^{t,k})\enspace. \tag{10}\] Recall that the projection is only performed on the variable \(v\). The other variables \(z\) and \(x\) remain unchanged by the projection step. At the end of the inner procedure, we set \(\mathbf{\tilde{u}}^{t+1}=\mathbf{u}^{t,q}\). The method is summarized in Algorithm 1. Note that this two-loop structure with periodic full batch computations is similar to the structure of SVRG. Unlike SVRG, there is no reference point and the directions are updated recursively.
```
Input: initializations \(z_{0}\in\mathbb{R}^{p}\), \(x_{0}\in\mathbb{R}^{d}\), \(v_{0}\in\mathbb{R}^{p}\), number of iterations \(T\) and \(q\), step sizes \(\rho\) and \(\gamma\).
Set \(\mathbf{\tilde{u}}^{0}=(z_{0},v_{0},x_{0})\)
for \(t=0,\dots,T-1\) do
  Reset \(\mathbf{\Delta}\): \(\mathbf{\Delta}^{t,0}=(\rho D_{z}(\mathbf{\tilde{u}}^{t}),\rho D_{v}(\mathbf{\tilde{u}}^{t}),\gamma D_{x}(\mathbf{\tilde{u}}^{t}))\)
  Update \(\mathbf{u}\): \(\mathbf{u}^{t,1}=\Pi(\mathbf{\tilde{u}}^{t}-\mathbf{\Delta}^{t,0})\)
  for \(k=1,\dots,q-1\) do
    Draw \(i\in\{1,\dots,n\}\) and \(j\in\{1,\dots,m\}\)
    \(\mathbf{\Delta}_{z}^{t,k}=\rho(D_{z,i,j}(\mathbf{u}^{t,k})-D_{z,i,j}(\mathbf{u}^{t,k-1}))+\mathbf{\Delta}_{z}^{t,k-1}\)
    \(\mathbf{\Delta}_{v}^{t,k}=\rho(D_{v,i,j}(\mathbf{u}^{t,k})-D_{v,i,j}(\mathbf{u}^{t,k-1}))+\mathbf{\Delta}_{v}^{t,k-1}\)
    \(\mathbf{\Delta}_{x}^{t,k}=\gamma(D_{x,i,j}(\mathbf{u}^{t,k})-D_{x,i,j}(\mathbf{u}^{t,k-1}))+\mathbf{\Delta}_{x}^{t,k-1}\)
    Update \(\mathbf{u}\): \(\mathbf{u}^{t,k+1}=\Pi(\mathbf{u}^{t,k}-\mathbf{\Delta}^{t,k})\)
  endfor
  Set \(\mathbf{\tilde{u}}^{t+1}=\mathbf{u}^{t,q}\)
endfor
Return \((\tilde{z}^{T},\tilde{v}^{T},\tilde{x}^{T})=\mathbf{\tilde{u}}^{T}\)
```
**Algorithm 1** Stochastic Recursive Bilevel Algorithm In Algorithm 1, the three variables \(z\), \(v\), and \(x\) are updated simultaneously rather than alternately. From a computational perspective, this allows us to share the common computations between the different oracles and to perform the update of each variable in parallel. As a consequence, there is no sub-procedure to approximate the solution of the inner problem and the solution of the linear system. Note that in Yang et al. (2021), the authors propose VRBO, another adaptation of SPIDER/SARAH for bilevel problems. VRBO has a double loop structure where the inner variable is updated with several steps in an inner loop. In this inner loop, the estimate of the gradient of \(G\) and the gradient of \(h\) are also updated using SARAH's update rules. SRBA has a different structure. First, in SRBA, the inner variable \(z\) is updated only once between two updates of the outer variable instead of several times. 
Second, the solution of the linear system evolves following optimization steps whereas in VRBO a Neumann approximation is used. Finally, in Yang et al. (2021), the algorithm VRBO is analyzed in the case where the functions \(F\) and \(G\) are general expectations but not in the specific case of empirical risk minimization, as we do in Section 3, and achieves a worse sample complexity (see Table 1). ## 3 Theoretical Analysis of SRBA In this section we provide the theoretical analysis of Algorithm 1 leading to a final sample complexity in \(\mathcal{O}\left((n+m)^{\frac{1}{2}}\varepsilon^{-1}\vee(n+m)\right)\). The detailed proofs of the results are deferred to the appendix. Before diving into the details, let us define a few concepts. In Definition 3.1, we recall the definition of \(\varepsilon\)-stationary point. **Definition 3.1** (\(\varepsilon\)-stationary point).: Let \(d\) a positive integer, \(f:\mathbb{R}^{d}\to\mathbb{R}\) a differentiable function and \(\varepsilon>0\). We say that a point \(x\in\mathbb{R}^{d}\) is an \(\varepsilon\)-stationary point of \(f\) if \(\|\nabla f(x)\|^{2}\leq\epsilon\). With a slight abuse of language, in a stochastic context, we also call \(\varepsilon\)-stationary point a random variable \(x\) such that \(\mathbb{E}[\|\nabla f(x)\|^{2}]\leq\varepsilon\). This notion of \(\varepsilon\)-stationary point is necessary since we are dealing with nonconvex objectives. In this paper, the theoretical complexity of the algorithms is given in terms of number of calls to oracle, that is to say the number of times the quantity \[[\nabla F_{j}(z,x),\nabla G_{i}(z,x),\nabla_{11}^{2}G_{i}(z,x)v,\nabla_{21}^{2 }G_{i}(z,x)v] \tag{11}\] is queried for \(i\in[n]\), \(j\in[m]\), \(z\in\mathbb{R}^{p}\), \(v\in\mathbb{R}^{p}\) and \(x\in\mathbb{R}^{d}\). Note that in practice, although the second-derivatives of the inner functions \(\nabla_{11}^{2}G_{i}(z,x)\in\mathbb{R}^{p\times p}\) and \(\nabla_{21}^{2}G_{i}(z,x)\in\mathbb{R}^{d\times p}\) are involved, they are never computed or stored explicitly. We rather work with Hessian-vector products \(\nabla_{11}^{2}G_{i}(z,x)v\in\mathbb{R}^{p}\) and Jacobian-vector products \(\nabla_{21}^{2}G_{i}(z,x)v\in\mathbb{R}^{d}\) which can be computed efficiently thanks to automatic differentiation with a computational cost similar to the cost of computing the gradients \(\nabla_{1}G_{i}(z,x)\) and \(\nabla_{2}G_{i}(z,x)\)Pearlmutter (1994). The cost of one query (11) is therefore of the same order of magnitude as that of computing one stochastic gradient. ### Mean Squared Error of the Estimated Directions One strength of our method is its simple expression of the estimation error of the directions which comes from the bias-variance decomposition of the mean squared error provided by Nguyen et al. (2017). Let us denote the estimate directions \(D_{z}^{t,k}\ =\ \mathbf{\Delta}_{z}^{t,k}/\rho\), \(D_{v}^{t,k}=\mathbf{\Delta}_{v}^{t,k}/\rho\) and \(D_{x}^{t,k}=\mathbf{\Delta}_{x}^{t,k}/\gamma\). We also introduce the residuals \[S_{\bullet}^{t,k} =\sum_{r=1}^{k}\mathbb{E}[\|D_{\bullet}(\mathbf{u}^{t,r})-D_{ \bullet}(\mathbf{u}^{t,r-1})\|^{2}]\] \[\tilde{S}_{\bullet}^{t,k} =\sum_{r=1}^{k}\mathbb{E}[\|D_{\bullet}^{t,r}-D_{\bullet}^{t,r-1 }\|^{2}]\enspace.\] Proposition 3.2 provides a simple link between the mean squared error \(\mathbb{E}[\|D_{\bullet}^{t,k}-D_{\bullet}(\mathbf{u}^{t,k})\|^{2}]\) and the residuals \(S_{\bullet}^{t,k}\) and \(\tilde{S}_{\bullet}^{t,k}\). 
**Proposition 3.2** (MSE of the estimate directions).: _For any \(t\geq 0\) and \(k\in\{1,\dots,q-1\}\), the estimate \(D_{\bullet}^{t,k}\) of the direction \(D_{\bullet}(\mathbf{u}^{t,k})\) satisfies_ \[\mathbb{E}[\|D_{\bullet}^{t,k}-D_{\bullet}(\mathbf{u}^{t,k})\|^{2}]=\tilde{S }_{\bullet}^{t,k}-S_{\bullet}^{t,k}\enspace.\] We observe that the above error has two components: the accumulation of the difference between two successive full batch directions and the accumulation of the difference between two successive estimate directions. Proposition 3.2 will play a critical role in the analysis of SRBA. ### Fundamental Lemmas As usually done in optimization, we start by establishing descent lemmas which are key ingredients to get the final convergence result. Lemma 3.3 aims at characterizing the joint dynamic of \(\mathbf{u}\) on the inner problem. To do so, we introduce the function \(\phi_{z}\) defined as \[\phi_{z}(z,x)=G(z,x)-G(z^{*}(x),x)\enspace.\] In the bilevel literature, direct control on the distance to optimum \(\delta_{z}^{t,k}\triangleq\mathbb{E}[\|z^{t,k}-z^{*}(x^{t,k})\|^{2}]\) is established. Here, the biased nature of the estimate direction \(D_{z}^{t,k}\) makes it hard to upper bound appropriately the scalar product \(\langle D_{z}(\mathbf{u}^{t,k})-D_{z}^{t,k},z^{t,k}-z^{*}(x^{t,k})\rangle\). This explains the choice of considering \(\phi_{z}^{t,k}\) instead of \(\delta_{z}^{t,k}\). By combining the smoothness property of \(\phi_{z}\) and the bias-variance decomposition provided in Proposition 3.2, we can show some descent property on the sequence \(\phi_{z}^{t,k}\) defined by \(\phi_{z}^{t,k}=\mathbb{E}[\phi_{z}(z^{t,k},x^{t,k})]\). Before stating Lemma 3.3, let us define \(\mathcal{G}_{v}^{t,k}=\frac{1}{\rho}\left(v^{t,k}-\Pi_{v}(v^{t,k}-\rho D_{v}^{ t,k})\right)\) so that \(v^{t,k+1}=v^{t,k}-\rho\mathcal{G}_{v}^{t,k}\). This is the actual update direction of \(v\). Note that if there were no projections, we would have \(\mathcal{G}_{v}^{t,k}=D_{v}^{t,k}\). As a consequence, it acts as a surrogate of \(D_{v}^{t,k}\) in our analysis. We also define \[V_{z}^{t,k}= \mathbb{E}[\|D_{z}^{t,k}\|^{2}],\quad V_{v}^{t,k}=\mathbb{E}[\| \mathcal{G}_{v}^{t,k}\|^{2}],\quad V_{x}^{t,k}=\mathbb{E}[\|D_{x}^{t,k}\|^{2}]\] the variances and their respective sums over the inner loop \[\mathcal{V}_{z}^{t,k}=\sum_{r=1}^{k}\mathbb{E}[\|D_{z}^{t,r-1}\|^{2}],\quad \mathcal{V}_{v}^{t,k}=\sum_{r=1}^{k}\mathbb{E}[\|\mathcal{G}_{v}^{t,r-1}\|^{ 2}],\quad\mathcal{V}_{x}^{t,k}=\sum_{r=1}^{k}\mathbb{E}[\|D_{x}^{t,r-1}\|^{2} ]\enspace.\] **Lemma 3.3** (Descent on the inner level).: _Assume that the step sizes \(\rho\) and \(\gamma\) verify \(\gamma\ \leq\ C_{z}\rho\) for some positive constant \(C_{z}\) specified in the appendix. 
Then it holds_ \[\phi_{z}^{t,k+1}\leq \left(1-\frac{\mu_{G}}{2}\rho\right)\phi_{z}^{t,k}-\frac{\rho}{2} \left(1-\Lambda_{z}\rho\right)V_{z}^{t,k}+\rho^{3}\beta_{zz}\mathcal{V}_{z}^{ t,k}+\gamma^{2}\rho\beta_{zv}\mathcal{V}_{v}^{t,k} \tag{12}\] \[+\gamma^{2}\rho\beta_{zx}\mathcal{V}_{x}^{t,k}+\frac{\Lambda_{z} }{2}\gamma^{2}V_{x}^{t,k}+\frac{\gamma^{2}}{\rho}\overline{\beta}_{zx} \mathbb{E}[\|D_{x}^{t,k}(\mathbf{u}^{t,k})\|^{2}]\] _for some positive constants \(\Lambda_{z},\beta_{zz}\), \(\beta_{zx}\) and \(\overline{\beta}_{zx}\) that are specified in the appendix._ In (12) we recover a linear decrease of \(\phi_{z}^{t,k}\) by a factor \((1-\rho\mu_{G})\) but the outer variable's movement and the stochasticity make appear the direction \(D_{x}(\mathbf{u}^{t,k})\) and the noise that slow down the convergence of \(z\) towards \(z^{*}(x)\). For the variable \(v\), the quantity we consider is \[\phi_{v}(v,x)\ =\ \Psi(z^{*}(x),v,x)-\Psi(z^{*}(x),v^{*}(x),x)\] where \(\Psi(z,v,x)\) is defined as \[\Psi(z,v,x)=\frac{1}{2}v^{\top}\nabla_{11}^{2}G(z,x)v+\nabla_{1}F(z,x)^{\top}v\enspace.\] The intuition behind considering this quantity is that solving the linear system (3) is equivalent to minimizing over \(v\) the function \(\Psi(z^{*}(x),v,x)\). **Lemma 3.4** (Descent on the linear system).: _Assume that the step sizes \(\rho\) and \(\gamma\) verify \(\rho\leq B_{v}\) and \(\gamma\ \leq\ C_{v}\rho\) for some positive constants \(B_{v}\) and \(C_{v}\) specified in the appendix. Then it holds_ \[\phi_{v}^{t,k+1}\leq \left(1-\frac{\rho\mu_{G}}{16}\right)\phi_{v}^{t,k}-\tilde{\beta}_ {vv}\rho V_{v}^{t}+\rho^{3}\beta_{vz}\mathcal{V}_{z}^{t,k}+2\rho^{3}\beta_{vv }\mathcal{V}_{v}^{t,k}+\gamma^{2}\rho\beta_{vx}\mathcal{V}_{x}^{t,k}\] \[+\rho\alpha_{vz}\phi_{z}^{t,k}+\frac{\Lambda_{v}}{2}\gamma^{2} \mathbb{E}[\|D_{x}^{t,k}\|^{2}]+\frac{\gamma^{2}}{\rho}\overline{\beta}_{vx} \mathbb{E}[\|D_{x}(\mathbf{u}^{t,k})\|^{2}]\] _for some positive constants \(\Lambda_{v},\beta_{vz}\), \(\beta_{vx}\), \(\tilde{\beta}_{vv}\) and \(\overline{\beta}_{vx}\) that are specified in the appendix._ Lemma 3.4 is similar to Lemma 3.3. The appearance of \(\phi_{z}^{t,k}\) is a consequence of the fact \(D_{v}(z,v,x)\) is a step towards \(-[\nabla_{11}^{2}G(z,x)]^{-1}\nabla_{1}F(z,x)\) instead of \(-[\nabla_{11}^{2}G(z^{*}(x),x)]^{-1}\nabla_{1}F(z^{*}(x),x)\). The proof of this lemma harnesses the generalization of Polyak-Lojasiewicz inequality for composite functions introduced in Karimi et al. (2016). The following lemma is a consequence of the smoothness of \(h\). Let us denote the expected values \(h^{t,k}=\mathbb{E}[h(x^{t,k})]\) and expected gradient \(g^{t,k}\ =\ \mathbb{E}[\|\nabla h(x^{t,k})\|^{2}]\). **Lemma 3.5** (Descent on the value function \(h\)).: _There exist constants \(\beta_{hz}\), \(\beta_{hu}\), \(\beta_{hx}>0\) such that_ \[h^{t,k+1}\leq h^{t,k}-\frac{\gamma}{2}g^{t,k}+\gamma\frac{2L_{x}^{ 2}}{\mu_{G}}(\phi_{z}^{t,k}+\phi_{v}^{t,k})+\gamma\rho^{2}\beta_{hx}\mathcal{V} _{z}^{t,k}\] \[\qquad+\gamma\rho^{2}\beta_{hv}\mathcal{V}_{v}^{t,k}+\gamma^{3} \beta_{hx}\mathcal{V}_{x}^{t,k}-\frac{\gamma}{2}\left(1-L^{h}\gamma\right)V_{x }^{t,k}\enspace.\] This lemma shows that the control of the approximation error \(\phi_{\bullet}\) (Lemma 3.3 and Lemma 3.4) and the sum of variances \(\mathcal{V}_{\bullet}\) is crucial to get a decrease of \(\mathbb{E}[h(x^{t,k})]\). ### Complexity Analysis of SRBA In Theorem 1, we provide the convergence rate of SRBA towards a stationary point. 
**Theorem 1** (Convergence rate of SRBA).: Assume that Assumptions 2.1 and 2.2 hold. Assume that the step sizes verify \(\rho\leq\overline{\rho}\) and \(\gamma\leq\min(\overline{\gamma},\xi\rho)\) for some constants \(\xi\), \(\overline{\rho}\) and \(\overline{\gamma}\) specified in the appendix. Then it holds \[\frac{1}{Tq}\sum_{t=0}^{T-1}\sum_{k=0}^{q-1}\mathbb{E}[\|\nabla h(x^{t,k})\|^{2}]=\mathcal{O}\left(\frac{1}{qT\gamma}\right)\] where \(\mathcal{O}\) hides regularity constants that are independent of \(n\) and \(m\). The proof combines classical proof techniques from the bilevel literature and elements from SARAH's analysis (Nguyen et al., 2017, 2022). We introduce the Lyapunov function \(\mathcal{L}(\mathbf{u}^{t,k})=h^{t,k}+\psi_{z}\phi_{z}^{t,k}+\psi_{v}\phi_{v}^{t,k}\) where \(\psi_{z}\) and \(\psi_{v}\) are non-negative constants chosen so that we have the inequality \(\mathcal{L}(\mathbf{u}^{t,k+1})\leq\mathcal{L}(\mathbf{u}^{t,k})-\frac{\gamma}{4}g^{t,k}\). Summing and telescoping this inequality provides the convergence rate. Note that if we set \(q=1\), we are actually in a nonstochastic regime and we can observe that we recover the convergence rate of Gradient Descent for nonconvex single-level problems (Nesterov, 2018), since the step size \(\gamma\) depends neither on the current iteration \(t\) nor on the horizon \(T\). Increasing \(q\) allows faster convergence in terms of iterations but makes each iteration more expensive since the number of oracle calls per iteration is \((2n+3m)+2\times 5(q-1)\). Thus, there is a trade-off between the convergence rate and the overall complexity. In Corollary 3.6, we state that the value of \(q\) that gives the best sample complexity is \(\mathcal{O}(n+m)\). **Corollary 3.6** (Sample complexity of SRBA).: _Suppose that Assumptions 2.1 and 2.2 hold. If we take \(\rho=\overline{\rho}(n+m)^{-\frac{1}{2}}\), \(\gamma=\min(\overline{\gamma},\xi\rho)(n+m)^{-\frac{1}{2}}\) and \(q=n+m\), then \(\mathcal{O}\left((n+m)^{\frac{1}{2}}\varepsilon^{-1}\vee(n+m)\right)\) calls to oracles are sufficient to find an \(\varepsilon\)-stationary point with SRBA._ This sample complexity is analogous to the sample complexity of SARAH in the nonconvex finite-sum setting. To the best of our knowledge, such a rate is the best known for bilevel empirical risk minimization problems in terms of dependency on the number of samples \(n+m\) and the precision \(\varepsilon\). It improves the previous best result, achieved by SABA (Dagreou et al., 2022), by a factor \((n+m)^{\frac{1}{6}}\). As a comparison, VRBO (Yang et al., 2021) achieves a sample complexity in \(\tilde{\mathcal{O}}(\varepsilon^{-\frac{3}{2}})\). Note that, for large values of \(n+m\), we can actually have \((n+m)^{\frac{1}{2}}\varepsilon^{-1}\gtrsim\varepsilon^{-2}\). This means that, just like single-level SARAH, the complexity of SRBA can be beaten by others when the number of samples is too high with respect to the desired accuracy (specifically, if \(n+m=\Omega(\varepsilon^{-2})\)). ## 4 Lower Bound for Bilevel ERM In this section, we derive a lower bound for bilevel empirical risk minimization problems. This shows that SRBA is a near-optimal algorithm for this class of problems. ### Function and Algorithm Classes We start by defining the function class and the algorithm class we consider. **Definition 4.1** (Function class).: Let \(n,m\) be two positive integers and let \(L_{1}^{F}\) and \(\mu_{G}\) be two positive constants. 
The class of the smooth empirical risk minimization problems denoted by \(\mathcal{C}^{L_{1}^{F},\mu_{G}}\) is the set of pairs of real-valued function families \(((F_{j})_{1\leq j\leq m},(G_{i})_{1\leq i\leq n})\) defined on \(\mathbb{R}^{p}\times\mathbb{R}^{d}\) such that for all \(j\in[m]\), \(F_{j}\) is \(L_{1}^{F}\) smooth and for all \(i\in[n]\), \(G_{i}\) is twice differentiable and \(\mu_{G}\)-strongly convex. Note that we do not require \(F\) to be convex in the class \(\mathcal{C}^{L_{1}^{F},\mu_{G}}\). In particular, the class of bilevel problems that we consider is nonconvex. This class contains, for instance, the functions defining the bilevel formulation of the datacleaning task (see Section 5). For the algorithmic class, we consider algorithms that implement approximate implicit differentiation, using oracles of the form (11). **Definition 4.2** (Algorithmic class).: Given initial points \(z^{0},v^{0},x^{0}\), a _linear bilevel algorithm_\(\mathcal{A}\) is a measurable mapping such that for any \(((F_{j})_{1\leq j\leq m},(G_{i})_{1\leq i\leq n})\in\mathcal{C}^{L_{1}^{F},\mu_ {G}}\), the output of \(\mathcal{A}((F_{j})_{1\leq j\leq m},(G_{i})_{1\leq i\leq n})\) is a sequence \(\{(z^{t},v^{t},x^{t},i_{t},j_{t})\}_{t\geq 0}\) of points \((z^{t},v^{t},x^{t})\) and random variables \(i_{t}\in[n]\) and \(j_{t}\in[m]\) such that for all \(t\geq 0\) \[z^{t+1}\in z^{0}+ \text{Span}\{\nabla_{1}G_{i_{0}}(z^{0},x^{0}),\ldots,\nabla_{1}G_ {i_{t}}(z^{t},x^{t})\}\] \[v^{t+1}\in v^{0}+ \text{Span}\{\nabla_{11}^{2}G_{i_{0}}(z^{0},x^{0})v^{0}+\nabla_{ 1}F_{j_{0}}(z^{0},x^{0}),\] \[\ldots,\nabla_{11}^{2}G_{i_{t}}(z^{t},x^{t})v^{t}+\nabla_{1}F_{j _{t}}(z^{t},x^{t})\}\] \[x^{t+1}\in x^{0}+ \text{Span}\{\nabla_{21}^{2}G_{i_{0}}(z^{0},x^{0})v^{0}+\nabla_{ 2}F_{j_{0}}(z^{0},x^{0}),\] \[\ldots,\nabla_{21}^{2}G_{i_{t}}(z^{t},x^{t})v^{t}+\nabla_{2}F_{j _{t}}(z^{t},x^{t})\}.\] This algorithmic class includes popular stochastic bilevel first-order algorithms, such as AmIGO (Arbel and Mairal, 2022), FSLA (Li et al., 2022), SOBA, and SABA (Dagreou et al., 2022). Note that despite the projection step, SRBA is part of this algorithmic class since the projection of a vector onto \(\Gamma\) is actually just a rescaling. ### Main Theorem Problem (1) is actually a smooth nonconvex optimization problem. The lower complexity bound for nonconvex finite sum problem has been studied in Fang et al. (2018); Zhou and Gu (2019). In particular, they show that the number of gradient calls needed to get an \(\varepsilon\)-stationary point for a smooth nonconvex finite sum is at least \(\mathcal{O}(m^{\frac{1}{2}}\varepsilon^{-1})\), where \(m\) is the number of terms in the finite sum. Intuitively, we expect that the lower complexity bound to solve (1) to be larger. Indeed, bilevel problems are harder than single-level problems because a bilevel problem involves the resolution of several subproblems to progress in its resolution. Theorem 2 formalizes this intuition by showing that the classical single-level lower bound is also a lower bound for bilevel problems. 
**Theorem 2** (Lower bound for bilevel ERM).: For any linear bilevel algorithm \(\mathcal{A}\), and any \(L^{F}\), \(n\), \(\Delta\), \(\varepsilon\), \(p\) such that \(\varepsilon^{2}\leq(\Delta L^{F}m^{-1})/10^{3}\), there exists a dimension \(d=\mathcal{O}(\Delta\varepsilon^{-1}m^{\frac{1}{2}}L^{F})\), an element \(((F_{j})_{1\leq j\leq m},(G_{i})_{1\leq i\leq n})\in\mathcal{C}^{L_{1}^{F},\mu_ {G}}\) such that the value function \(h\) defined as in (1) satisfies \(h(x^{0})\ -\ \inf_{x\in\mathbb{R}^{d}}h(x)\leq\Delta\) and in order to find \(\hat{x}\in\mathbb{R}^{d}\) such that \(\mathbb{E}[\|\nabla h(\hat{x})\|^{2}]\leq\varepsilon\), \(\mathcal{A}\) needs at least \(\Omega(m^{\frac{1}{2}}\varepsilon^{-1})\) calls to oracles of the form (11). The proof is an adaptation of the proof of Zhou and Gu (2019, Theorem 4.7). It consists in taking as outer function \(F\) defined by \(F(z,x)=\sum_{j=1}^{m}f(U^{(j)}z)\) where \(f\) is the "worst-case function" used by Carmon et al. (2017), \(U=[U^{(j)},\ldots,U^{(m)}]^{\top}\) is an orthogonal matrix and \(G(z,x)=\frac{1}{2}\|z-x\|^{2}\). We harness the fact that \(\|\nabla f(y)\|^{2}>K\) as long as the two last coordinates of \(y\) are zero for some known constant \(K\). Then we use the "zero chain property" to upper bound the number of indices \(j\) such the two last components of \(U^{(j)}x^{t}\) are zero at a given iteration \(t\), implying \(\|\nabla h(x^{t})\|^{2}>\epsilon\) when \(t\) is smaller than \(\mathcal{O}(m^{\frac{1}{2}}\varepsilon^{-1})\). Note that here, the function class considered is less restrictive than the function class that verifies the upper complexity bound achieved by SRBA in Corollary 3.6. Considering a class as restrictive as the class needed for the analysis of SRBA could lead to a smaller lower bound. As a comparison to the existing lower bound for bilevel optimization in Ji and Liang (2023), we consider randomized algorithms and do not assume the value function \(h\) to be convex or strongly convex. ## 5 Numerical Experiments Even though our contribution is mostly theoretical, we run several experiments to highlight the influence of the inner loop size on the performances of SRBA and to compare the proposed algorithm with state-of-the-art stochastic bilevel solvers. A more detailed description of the experiments is available in Appendix C. ### Influence of the Period \(q\) We are interested in the impact of the period \(q\) on the algorithm's performance. We consider the hyperparameter selection problem for \(\ell^{2}\)-regularized logistic regression with the dataset IJCNN1. In this case, \(F\) is the validation loss and \(G\) the training loss. We run SRBA for several values of \(q\). In Figure 1, we display the suboptimality \(h(x^{t})-h^{*}\) where \(h^{*}\) is the minimum value reached among all the runs. The performances are reported both relatively to wall clock time and iterations. An iteration corresponds to one update of the variables \(z\), \(v\), and \(x\) with the full batch or stochastic directions. Footnote 1: [https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html) The first observation is that the parameter \(q\) impacts dramatically the convergence speed of SRBA in practice. 
While all the runs converge, the variance and the speed of convergence of the suboptimality differ.

Figure 1: **Left:** Comparison of the performances of SRBA for different values of \(q\) on a hyperparameter selection task for \(\ell^{2}\)-regularized logistic regression with the IJCNN1 dataset, with respect to time and iterations. **Right:** Comparison of SRBA with other stochastic bilevel methods on the datacleaning task with the MNIST dataset. The solvers are run with 10 different seeds and the median performance over these seeds is reported. The shaded area corresponds to the performances between the 20% and 80% percentiles. We report the test error with respect to wall clock time. We notice that SRBA achieves the best final accuracy even though it is slower than the others at the beginning.

The figure shows that reducing the period \(q\) gives performances with less variance, due to improved gradient estimators. However, we notice a difference between the performances with respect to time and with respect to iterations. For instance, the curve corresponding to \(q=n+m\) is among the best curves in terms of iterations while it becomes the second slowest with respect to time. This suggests that there is a trade-off between recomputing the full batch quantities too often and improving the gradient estimates. In the presented experiment, \(q=4(n+m)\) gives the best performances. ### Comparison of SRBA with Competitors We compare the performances of SRBA with stochastic bilevel solvers on the datacleaning problem (Franceschi et al., 2017) with the MNIST dataset2. In this task, the training set is composed of \(n_{\text{train}}\) labeled samples \((d_{i}^{\text{train}},y_{i}^{\text{train}})_{i\in[n_{\text{train}}]}\) whose labels are corrupted with probability \(p_{c}\). The validation set has \(n_{\text{val}}\) clean labels. The datacleaning task consists in simultaneously learning a classifier and a weighting of the training samples. This problem can be cast as a bilevel optimization problem in which the inner loss is the weighted training loss and the outer loss is the validation loss. The inner variable is the parameter of the classifier and belongs to \(\mathbb{R}^{p}\), and the outer variable is the weighting of the training set and belongs to \(\mathbb{R}^{n_{\text{train}}}\). A more formal formulation of the problem is provided in the appendix. Footnote 2: [http://yann.lecun.com/exdb/mnist/](http://yann.lecun.com/exdb/mnist/) In Figure 1 (right), we compare the final test error of SRBA with AmIGO (Arbel and Mairal, 2022), MRBO (Yang et al., 2021), StocBiO (Ji et al., 2021) and SABA (Dagreou et al., 2022). In this experiment, we have \(p_{c}=0.5\). The parameters of each algorithm have been selected by a grid search. We observe that SRBA reaches the lowest plateau. Nevertheless, it is the slowest at the beginning due to the full batch computations. ## 6 Conclusion In this paper, we have introduced SRBA, an algorithm for bilevel empirical risk minimization. We have demonstrated that the sample complexity of SRBA is \(\mathcal{O}((n+m)^{\frac{1}{2}}\varepsilon^{-1})\) for any bilevel problem where the inner problem is strongly convex. Then, we have demonstrated that any bilevel empirical risk minimization algorithm has a sample complexity of at least \(\mathcal{O}(m^{\frac{1}{2}}\varepsilon^{-1})\) on some problems where the inner problem is strongly convex. 
This demonstrates that SRBA is optimal, up to constant factors, and that bilevel empirical risk minimization is as hard as single-level nonconvex empirical risk minimization. ## Acknowledgements SV acknowledges the support of the ANR GraVa ANR-18-CE40-0005. This work is supported by a public grant overseen by the French National Research Agency (ANR) through the program UDOPIA, project funded by the ANR-20-THIA-0013-01 and DATAIA convergence institute (ANR-17-CONV-0003).
2310.10338
Scene Graph Conditioning in Latent Diffusion
Diffusion models excel in image generation but lack detailed semantic control using text prompts. Additional techniques have been developed to address this limitation. However, conditioning diffusion models solely on text-based descriptions is challenging due to ambiguity and lack of structure. In contrast, scene graphs offer a more precise representation of image content, making them superior for fine-grained control and accurate synthesis in image generation models. The amount of image and scene-graph data is sparse, which makes fine-tuning large diffusion models challenging. We propose multiple approaches to tackle this problem using ControlNet and Gated Self-Attention. We were able to show that using our proposed methods it is possible to generate images from scene graphs with much higher quality, outperforming previous methods. Our source code is publicly available on https://github.com/FrankFundel/SGCond
Frank Fundel
2023-10-16T12:26:01Z
http://arxiv.org/abs/2310.10338v1
# Scene Graph Conditioning in Latent Diffusion ###### Abstract Diffusion models excel in image generation but lack detailed semantic control using text prompts. Additional techniques have been developed to address this limitation. However, conditioning diffusion models solely on text-based descriptions is challenging due to ambiguity and lack of structure. In contrast, scene graphs offer a more precise representation of image content, making them superior for fine-grained control and accurate synthesis in image generation models. The amount of image and scene-graph data is sparse, which makes fine-tuning large diffusion models challenging. We propose multiple approaches to tackle this problem using ControlNet and Gated Self-Attention. We were able to show that using our proposed methods it is possible to generate images from scene graphs with much higher quality, outperforming previous methods. Our source code is publicly available on [https://github.com/FrankFundel/SGCond](https://github.com/FrankFundel/SGCond) ## 1 Introduction Diffusion models [34, 10] have emerged as powerful image generation models, by gradually removing noise at each timestep. Recent advancements, such as Latent Diffusion [33], have improved the efficiency and computational cost of diffusion models. Latent Diffusion applies diffusion processes on a compressed latent space, achieved through a VQVAE-based autoencoder. Diffusion models exhibit great stability during training, avoid mode collapse, and generate images of exceptional quality. However, they are computationally expensive for sampling new images and lack a low-dimensional latent space compared to traditional VAEs, limiting conditional control. Nonetheless, diffusion models have proven to be highly effective in image generation, surpassing the performance of GANs in terms of image quality and stability [5]. Various techniques have been proposed to add more control to diffusion models. ControlNet [46] focuses on fine-tuning a copy of the model, while preserving the original knowledge using zero-convolutions. T2I-Adapter [23] offers a lightweight model for fine-tuning only a smaller adapter model while leaving the diffusion model unchanged. GLIGEN [20] introduces grounded text-to-image diffusion, combining text and grounding embeddings using Gated Self-Attention to enhance contextual understanding. These techniques showcase the advancements in controlling diffusion models. While conditioning on text for image generation models has its limitations, scene graphs offer a more structured and unambiguous representation of image content. Text-based descriptions can be long, loosely structured, and subject to semantic ambiguity. In contrast, scene graphs provide a concise and precise encoding of object relationships and spatial arrangements within an image. These advantages make scene graphs a superior choice for conditioning image generation models when accurate synthesis and fine-grained control over visual content are crucial. To address these challenges, different methods for utilizing scene graphs in image generation have been proposed. Some techniques rely on predicting scene layouts, which are coarse representations of the intended scenes [16, 37]. Other methods condition on crops containing single objects corresponding to nodes in a scene graph [21]. Recently, works have been proposed which involve learning embeddings through masked contrastive pre-training to align image and graph features [44]. 
However, images that are conditioned on scene graphs are not yet comparable to images that are conditioned on other modalities such as bounding boxes, poses or depth maps. ### Contribution Our contribution can be summarized with the following points: * We propose multiple approaches for effectively fine-tuning diffusion models using scene graphs * We evaluated the most promising methods to some extent [2] ## 2 Background ### Image Generation Recently, image generation has attracted significant attention due to the remarkable advancements and capabilities demonstrated by deep learning models. These models have the potential to generate highly realistic and diverse images, often indistinguishable from those captured by cameras or created by humans [5, 45]. #### 2.1.1 Autoencoder Autoencoders are great at compressing and reconstructing images, while preserving their most prominent features. By encoding an image using a Convolutional Neural Network (CNN) into a compressed latent space representation and then decoding this representation back into an image, a simple and easy to train method for image generation is at hand - one can simply manipulate the latent vector and thus induce the desired changes. But autoencoders also bring an interesting problem when it comes to image generation. Autoencoders map an input image directly into a single point in the latent space, which results in a discrete distribution of points [26]. This makes interpolation between different latent representations difficult. Additionally, the distribution of points in the latent space is uncertain, which results in poor sampling of novel images. #### 2.1.2 GAN Generative Adversarial Networks (GANs) [9] solve the issue of a non-uniform latent space by generating images directly from random uniform noise in the latent space. To train the generator network, another network - the discriminator network - is trained simultaneously to decide whether an image comes from a set of example images or whether it is generated. As the discriminator becomes better at deciding if an image is real or fake, the generator becomes better at generating new images. Because of the adversarial structure of the model, GANs are both very effective and can generate images almost indistinguishable from real images, but they are also difficult to train. Training is highly sensitive to hyperparameter selection and sometimes the model is unable to converge properly. Additionally, the generator can also learn to map every latent vector to the same few outputs, a failure known as mode collapse.
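Since the structured nature of scene graphs is central to this work, it may help to see how compact such a conditioning signal is. The following is a minimal, hypothetical sketch (made-up vocabulary, dimensions, and module names; it is not one of the ControlNet or Gated Self-Attention based approaches proposed in this work) of turning a scene graph's (subject, predicate, object) triples into embedding tokens that a generative model could attend to.

```python
import torch
import torch.nn as nn

# A scene graph as (subject, predicate, object) triples over a small vocabulary.
# The vocabulary and graph below are made-up examples; real pipelines use
# datasets such as Visual Genome or COCO-Stuff with their own label sets.
objects = ["sheep", "grass", "tree", "sky"]
predicates = ["standing on", "next to", "above"]
triples = [(0, 0, 1),   # sheep standing on grass
           (2, 1, 0),   # tree next to sheep
           (3, 2, 1)]   # sky above grass

obj_dim, rel_dim, token_dim = 64, 32, 128
obj_emb = nn.Embedding(len(objects), obj_dim)
rel_emb = nn.Embedding(len(predicates), rel_dim)
to_token = nn.Linear(2 * obj_dim + rel_dim, token_dim)

def graph_tokens(triples):
    """Turn each (subject, predicate, object) triple into one conditioning token."""
    s = obj_emb(torch.tensor([t[0] for t in triples]))
    p = rel_emb(torch.tensor([t[1] for t in triples]))
    o = obj_emb(torch.tensor([t[2] for t in triples]))
    return to_token(torch.cat([s, p, o], dim=-1))   # shape: (num_triples, token_dim)

tokens = graph_tokens(triples)
print(tokens.shape)  # torch.Size([3, 128])
```

In practice, such per-triple tokens (or graph-level embeddings produced by a graph network) would be injected into the denoising network alongside, or instead of, the usual text conditioning.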
2303.10200
HIP 67506 C: MagAO-X Confirmation of a New Low-Mass Stellar Companion to HIP 67506 A
We report the confirmation of HIP 67506 C, a new stellar companion to HIP 67506 A. We previously reported a candidate signal at 2$\lambda$/D (240~mas) in L$^{\prime}$ in MagAO/Clio imaging using the binary differential imaging technique. Several additional indirect signals showed that the candidate signal merited follow-up: significant astrometric acceleration in Gaia DR3, Hipparcos-Gaia proper motion anomaly, and overluminosity compared to single main sequence stars. We confirmed the companion, HIP 67506 C, at 0.1" with MagAO-X in April, 2022. We characterized HIP 67506 C MagAO-X photometry and astrometry, and estimated spectral type K7-M2; we also re-evaluated HIP 67506 A in light of the close companion. Additionally we show that a previously identified 9" companion, HIP 67506 B, is a much further distant unassociated background star. We also discuss the utility of indirect signposts in identifying small inner working angle candidate companions.
Logan A. Pearce, Jared R. Males, Sebastiaan Y. Haffert, Laird M. Close, Joseph D. Long, Avalon L. McLeod, Justin M. Knight, Alexander D. Hedglen, Alycia J. Weinberger, Olivier Guyon, Maggie Kautz, Kyle Van Gorkom, Jennifer Lumbres, Lauren Schatz, Alex Rodack, Victor Gasho, Jay Kueny, Warren Foster, Katie M. Morzinski, Philip M. Hinz
2023-03-17T18:25:49Z
http://arxiv.org/abs/2303.10200v1
# HIP 67506 C: MagAO-X Confirmation of a New Low-Mass Stellar Companion to HIP 67506 A ###### Abstract We report the confirmation of HIP 67506 C, a new stellar companion to HIP 67506 A. We previously reported a candidate signal at 2\(\lambda\)/D (240 mas) in L\({}^{\prime}\) in MagAO/Clio imaging using the binary differential imaging technique. Several additional indirect signals showed that the candidate signal merited follow-up: significant astrometric acceleration in Gaia DR3, Hipparcos-Gaia proper motion anomaly, and overluminosity compared to single main sequence stars. We confirmed the companion, HIP 67506 C, at 0.1" with MagAO-X in April, 2022. We characterized HIP 67506 C MagAO-X photometry and astrometry, and estimated spectral type K7-M2; we also re-evaluated HIP 67506 A in light of the close companion. Additionally we show that a previously identified 9" companion, HIP 67506 B, is a much further distant unassociated background star. We also discuss the utility of indirect signposts in identifying small inner working angle candidate companions. keywords: planets and satellites: detection, (stars:) binaries: visual, stars: statistics, methods: data analysis, methods: observational ## 1 Introduction High-contrast imaging searches have found very low occurrence rates for close substellar companions. For example, 9\({}^{+5}_{-4}\)% for 5-13 MJup, \(\sim\) 0.8\({}^{+0.8}_{-0.5}\)% for 13-80 MJup companions within 10-100 AU in the recent results from the Gemini Planet Imager Exoplanet Survey (GPIES); (Nielsen et al., 2019), while the SHINE survey (Vigan et al., 2021) found frequency of systems with at least one substellar companion to be 23.0\({}^{+13.5}_{-9.7}\)%, 5.8\({}^{+4.7}_{-2.8}\)%, and 12.6\({}^{+12.9}_{-7.1}\)% for BA, FGK, and M stars. Yet radial velocity, transit, and microlensing surveys point to higher occurrence rates in regions promising for future direct imaging contrasts and separation (e.g. Bryan et al., 2019; Herman et al., 2019; Poleski et al., 2021). Decreasing the effective inner working angle (IWA) of observations increases the area of the accessible region proportional to (IWA)\({}^{-2}\). Smaller IWAs extend the reach to tighter regimes of nearby stars, and to the planetary regime of more distant stars (Mawet et al., 2012). Working at small IWAs will be vital for the future of the high-contrast imaging field. Rodigas et al. 2015 demonstrated that for visual binaries of separation \(\approx\)2 - 10\({}^{\prime\prime}\) and approximately equal magnitude, a starlight subtraction via a principal component analysis-based reference differential imaging (RDI) algorithm using each star of the binary as reference for the other - termed binary differential imaging (BDI) - outperforms the common angular differential imaging technique at close separations. In Pearce et al. 2022 we used BDI to reduce a set of 17 visual binaries imaged in L\({}^{\prime}\) and 3.95\(\mu\)m filters with MagAO/Clio instrument on the Magellan Clay Telescope at Las Campanas Observatory from 2015-2017. In that work we reported detection of a candidate companion signal at 2\(\lambda\)/D separation to the star HIP 67506 A. Due to the proximity to the star's core we were unable to determine the nature of the companion, but had evidence to suggest it might be near the stellar/substellar mass boundary. In this work we report the results of follow-up observations of HIP 67506 A with the MagAO-X instrument on the Magellan Clay telescope in April 2022 to confirm the candidate signal. 
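Before turning to the stellar properties, a rough schematic of the PCA-based starlight subtraction that underlies reference differential imaging may be useful. The sketch below uses synthetic arrays and only shows the projection-and-subtract step; the actual BDI/KLIP pipeline of Pearce et al. 2022 additionally handles registration, frame selection, and derotation, and conventions for mean subtraction vary between implementations.

```python
import numpy as np

# Schematic PCA-based reference differential imaging (KLIP-like) on synthetic data.
rng = np.random.default_rng(1)
n_ref, ny, nx = 20, 64, 64
refs = rng.normal(size=(n_ref, ny * nx))     # stand-in reference PSF library (flattened frames)
target = rng.normal(size=ny * nx)            # stand-in science frame

K = 5                                        # number of principal components retained
ref_mean = refs.mean(axis=0)
R = refs - ref_mean                          # mean-subtracted reference library
# Principal components of the reference library (rows of Vt are orthonormal modes).
_, _, Vt = np.linalg.svd(R, full_matrices=False)
modes = Vt[:K]                               # (K, ny*nx)

t = target - ref_mean
psf_model = modes.T @ (modes @ t)            # projection of the target onto the K modes
residual = (t - psf_model).reshape(ny, nx)   # starlight-subtracted image
```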
We report the discovery of HIP 67506 C, a previously unknown early-M type 0.1\({}^{\prime\prime}\) (\(\sim\) 20 AU) companion to HIP 67506 A. In Section 2 we describe the indirect indications pointing to the existence of a hidden companion. In Section 3 we describe our MagAO-X follow up observations and confirmation of HIP 67506 C, and in Section 4 our astrometric and photometric characterization. Additionally in Appendix A we demonstrate that the previously identified 9\({}^{\prime\prime}\)-separated star HIP 67506 B is not actually physically associated. ## 2 Stellar Properties HIP 67506 A is a field star (99.9% probability in BANYAN E; Gagne et al., 2018) at 221.6\(\pm\)1.8 pc (Gaia Collaboration et al., 2021). It was identified as type G5 (Spencer Jones and Jackson, 1939), mass 1.2M\({}_{\odot}\)(Chandler et al., 2016), with effective temperature T\({}_{\rm eff}\) = 6077 \(\pm\) 150 K and luminosity L = 0.37 \(\pm\) 0.07 L\({}_{\odot}\)(McDonald et al., 2012). In Pearce et al. (2022) we used these values to estimate an age of \(\approx\)200 Myr from isochrone fitting to Baraffe et al. (2015) isochrones. It was identified in the Hipparcos and Tycho Doubles and Multiples Catalog (ESA, 1997) as a binary system with another star (HIP 67506 B) with separation 9\({}^{\prime\prime}\), and dubbed HIP 67506 A and B. ### Indicators of a companion to HIP 67506 A In Pearce et al. (2022) we observed 17 visual binary systems and reduced the images using the Binary Differential Imaging (BDI) technique (see also Rodigas et al., 2015) with Magellan Adaptive Optics system (MagAO) (Close et al., 2013) and Clio science camera on the Magellan Clay Telescope at Las Campanas Observatory in MKO L\({}^{\prime}\) and 3.95\(\mu\)m filters, from 2014-2017. To summarize briefly, we simultaneously observed a science and PSF reference target by selecting binaries of nearly equal magnitude, separated enough that their PSF features do not overlap, but close enough to be within the isoplanatic patch at these wavelengths, making the target and reference PSF as close to equal in structure and signal-to-noise ratio as possible. We then reduced each star with the other as the PSF reference, using Karhunen-Loeve Image Projection (KLIP; Soummer et al., 2012) to reconstruct a model PSF from the reference star to subtract from the target star. We observed HIP 67506 AB on 2015-05-31 as part of this survey and detected a candidate companion signal \(\sim\)0.2\({}^{\prime\prime}\) East of HIP 67506 A. Figure 1 displays the KLIP-reduced image of HIP 67506 A from that paper, with the candidate signal marked by the red circle. The candidate signal is distorted from a typical PSF shape - due its proximity to the star's core (at 2\(\lambda\)/D) the signal was corrupted by PSF subtraction. However the fact that it did not appear to smear azimuthally like the other residuals at that same separation points to the possibility of its being a true companion signal. There are secondary indications of a companion to HIP 67506 A. Figure 2 shows a Gaia EDR3 BP minus RP vs absolute G magnitude color-magnitude diagram of Praesepe Cluster members identified in Deacon and Kraus (2020) (orange), reproducing their Figure 4. Members they flagged as overluminous and with elevated astrometric noise in Gaia EDR3, indicating an unresolved companion, are marked with blue and purple triangles respectively. HIP 67506 A is marked with a red star in the main and inset axes. 
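A diagram like Figure 2 requires only the apparent G magnitude, the BP-RP colour, and the parallax of each star. The sketch below shows the quantities involved with placeholder values; the 0.5 mag overluminosity cut is purely illustrative and is not the criterion used by Deacon and Kraus (2020).

```python
import numpy as np

# Absolute G magnitude from Gaia apparent magnitude and parallax (in mas):
#   M_G = G + 5 * log10(parallax_mas) - 10
def abs_g(g_mag, parallax_mas):
    return g_mag + 5.0 * np.log10(parallax_mas) - 10.0

# Placeholder photometry for a handful of cluster members (made-up numbers).
g = np.array([12.1, 13.4, 10.8, 14.2])
bp_rp = np.array([0.85, 1.30, 0.62, 1.75])
plx = np.array([5.4, 5.3, 5.5, 5.2])          # parallaxes in mas

M_G = abs_g(g, plx)

# Illustrative overluminosity flag: sources lying well above (brighter than) a
# fiducial single-star sequence, here a simple polynomial fit to the sample itself.
coeffs = np.polyfit(bp_rp, M_G, deg=2)
ms_fit = np.polyval(coeffs, bp_rp)
overluminous = M_G < ms_fit - 0.5             # roughly 0.5 mag above the sequence
print(M_G, overluminous)
```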
HIP 67506 A clearly falls on the overluminous region above the main sequence, indicating that the flux measured by Gaia is abnormally high for a single star, pointing to the presence of an unresolved stellar companion. HIP 67506 A also has indicators of an unresolved companion in Gaia astrometry. The Gaia Renormalized Unit Weight Error (RUWE) is a signpost for unresolved companions. RUWE encapsulates all sources of error in the fit to the assumed single star astrometric model, corrected for correlation with source color and magnitude. RUWE \(\approx\) 1 is expected for a well-behaved solution (Lindegren, 2018)1. RUWE \(>\)2 indicates signficant deviation from a single star model. HIP 67506 A has RUWE= 2.02 in Gaia EDR3, indicating that a companion is likely. Footnote 1: [https://www.cosmos.esa.int/web/gaia/dr2-known-issues#AstrrometryConsiderations](https://www.cosmos.esa.int/web/gaia/dr2-known-issues#AstrrometryConsiderations) Footnote 2: See [https://pea.esac.esa.int/archive/documentation/GEDR3/Gaia_archive/chap_datamodel/sec_dm_main_tables/ssec_dm_gaia_source.html](https://pea.esac.esa.int/archive/documentation/GEDR3/Gaia_archive/chap_datamodel/sec_dm_main_tables/ssec_dm_gaia_source.html) for complete description of Gaia catalog contents While RUWE is the most complete and easy to interpret metric (Lindegren, 2018), other metrics in Gaia can probe multiplicity. Perturbations of the source photocenter (caused by orbiting unresolved objects) compared to the center-of-mass motion (which moves as a single star) will cause the observations to be a poor match to the fitting model, which registers as excess noise via the astrometric_excess_noise parameter, and whose significance is captured in the astrometric_excess_noise_sig parameter (\(>\)2 indicates significant excess noise). The astrometric_chi2_al term reports the \(\chi^{2}\) value of the observations to the fitting model, with lower values indicating better fit to observations. From the image parameter determination (IPD) phase, ipd_gof_harmonic_amplitude is sensitive to elongated PSF shapes relative to the scan direction (larger values indicate more elongation), and ipd_frac_multi_peak reports the percentage of observations which contained more than one peak in the windows2. Footnote 3: See [https://pea.esac.esa.int/archive/documentation/GEDR3/Gaia_archive/chap_datamodel/sec_dm_main_tables/ssec_dm_gaia_source.html](https://pea.esac.esa.int/archive/documentation/GEDR3/Gaia_archive/chap_datamodel/sec_dm_main_tables/ssec_dm_gaia_source.html) for complete description of Gaia catalog contents Table 1 shows values of these metrics for HIP 67506 A. The IPD parameters are small and insignificant, suggesting that there are no Figure 1: MKO L\({}^{\prime}\) KLIP-reduced image of HIP 67506 A from our Binary Differential Imaging survey described in Pearce et al. (2022). The central star is masked in the reduction, and the candidate signal is marked with a red circle \(\sim\)2\({}^{\prime\prime}\) (2.0 \(\lambda\)/D) to the east. 
This was identified as a candidate signal due to the fact that it did not appear to smear azimuthally with derotation like the other residual structures at similar separation, and the other indications described in Section 2.1 marginally resolved sources (\(\rho\sim\)0.1-1.2\(\arcsec\), separation larger than the resolution limit but smaller than the confusion limit, Gaia Collaboration et al.2021) present in the images, however the astrometric noise parameters are large and significant, affirming the presence of subsystems. This points to a companion near or below the resolution limit of \(\approx\)0.1\(\arcsec\). Finally, HIP 67506 A also shows significant acceleration between the Hipparcos-Gaia astrometric measurements. The Hipparcos-Gaia Catalog of Accelerations (HGCA; Brandt, 2021) measures the change in proper motion between a star's Hipparcos and Gaia proper motion measurements, as well as the positional difference between the missions, divided by the \(\sim\)24 year time baseline, and quantifies the deviation from linear motion. This acceleration is called the proper motion anomaly (PMa). The HGCA shows a significant PMa for HIP 67506 A, with a \(\chi^{2}\) = 41 for the goodness fit of a linear proper motion to the measured astrometry. This points to unresolved subsystems causing acceleration. Additionally, Kervella et al.2022 produced a PMa catalog for Hipparcos-Gaia EDR3 which also shows significant acceleration for HIP 67506 A (S/N = 9.31). They used the measured tangential velocity anomaly to constrain the mass of the object causing acceleration (which is degenerate with separation; Kervella et al.2019). Using a mass of 1.3 M\({}_{\odot}\) for HIP 67506 A, they estimate a companion of mass 180 M\({}_{\rm Jup}\) at 10 au causing the observed acceleration of HIP 67506 A. Extrapolating this out to the 2015 projected separation of HIP 67506 C (48 AU), the acceleration would be caused by a \(\sim\)400 M\({}_{\rm Jup}\) object. The position angle of the acceleration given in Kervella et al.2022 is 96.6\(\pm\)3.8\({}^{\circ}\) for the 2016.0 Gaia epoch, which agrees within uncertainty with the candidate signal position angle in 2015.4, as would be expected if the candidate signal were the cause of the observed acceleration. Combined with the candidate signal in our 2015 MagAO observation, these other lines of evidence point to a strong chance of this being a genuine companion signal which merited follow-up for confirmation and characterization. ## 3 Observations and Analysis ### Observations We observed HIP 67506 A on April 18th, 2022 with the extreme adaptive optics instrument MagAO-X (Males et al.2022) on the 6.5m Magellan Clay Telescope at Las Campanas Observatory. 
We \begin{table} \begin{tabular}{l c} \hline Metric & Value \\ \hline Gaia & \\ \hline RUWE & 2.02 \\ astrometric\_excess\_noise & 0.22 \\ astrometric\_excess\_noise\_sig & 75.16 \\ astrometric\_chi2\_al & 2277.97 \\ ipd\_go\_fharmonic\_amplitude & 0.0099 \\ ipd\_frac\_multi\_peak & 0 \\ \hline Hipparcos-Gaia Accelerations & \\ \hline HGCA \(\chi^{2}\)(Brandt, 2021) & 41 \\ M\({}_{2}\) at 23AU from from from PMa (Kervella et al., 2022) & 270 M\({}_{\rm Jup}\) \\ \hline \end{tabular} \end{table} Table 1: Multiplicity Metrics for HIP 67506 A \begin{table} \begin{tabular}{l c c c} \hline Parameter & Previous Value & Ref & Our Value \\ \hline Distance (pc) & 102\(\pm\)86 & 1 & 221.6\(\pm\)1.8\({}^{a}\) \\ Mass (M\({}_{\odot}\)) & 1.2\(\pm\)0.1 & 2 & 1.2\(\pm\)0.2 \\ Spectral Type & G5 & 3 & F8-G2 \\ Ter (K) & 6077 \(\pm\) 150 & 4 & 6000\(\pm\)350 \\ Luminosity (L\({}_{\odot}\)) & 0.37 \(\pm\) 0.07 & 4 & 1.91\({}^{+0.28}_{-0.32}\) \\ Sloan m\({}_{B^{\prime}}\) & 11.04\(\pm\)0.01 & 5 & 11.04\(\pm\)0.01 \\ Sloan m\({}_{B^{\prime\prime}}\) & 10.66\(\pm\)0.01 & 5 & 10.67\(\pm\)0.01 \\ Sloan m\({}_{B^{\prime}}\) & 10.56\(\pm\)0.01 & 5 & 10.59\(\pm\)0.01 \\ Sloan m\({}_{B^{\prime}}\) & 10.50\(\pm\)0.01 & 5 & 10.55\(\pm\)0.01 \\ Sloan g-r & 0.38\(\pm\)0.02 & 5 & 0.37\(\pm\)0.02 \\ Sloan r-i & 0.11\(\pm\)0.02 & 5 & 0.09\(\pm\)0.02 \\ \hline \end{tabular} (1) van Leeuwen 2007, (2) Chandler et al.2016, (3) Spencer Jones & Jackson 1939, (4) McDonald et al.2012, (5) Zacharias et al. (2012), \({}^{a}\)Gaia EDR3 Gaia Collaboration et al. (2021) \end{table} Table 2: Stellar Properties of HIP 67506 A Figure 2: Gaia EDR3 BP minus RP vs absolute G magnitude color-magnitude diagram of Praesepe Cluster members identified in Deacon and Kraus2020 (orange). Objects they flagged as possible overluminous binaries are outlined in blue up-pointing triangles, and purple down-pointing triangles are objects they flagged with elevated astrometric noise, following their Figure 4. The position of HIP 67506 is marked with a red star in the main and inset axis, which shows a close view of the region surrounding HIP 67506 A. HIP 67506 A falls on the overluminous region above the main sequence, pointing to the presence of an unresolved stellar companion. observed HIP 67506 A in four science filters: g\({}^{\prime}\) (\(\lambda_{0}=0.527\mu\)m, \(\Delta\lambda_{\rm eff}=0.044\mu\)m), r\({}^{\prime}\) (\(\lambda_{0}=0.614\mu\)m, \(\Delta\lambda_{\rm eff}=0.109\mu\)m), i\({}^{\prime}\) (\(\lambda_{0}=0.762\mu\)m, \(\Delta\lambda_{\rm eff}=0.126\mu\)m), and z\({}^{\prime}\) (\(\lambda_{0}=0.908\mu\)m, \(\Delta\lambda_{\rm eff}=0.130\mu\)m)3. MagAO-X is equipped with two science cameras, so we carried out science observations in two filters simultaneously. The science camera EMCCDs were set to 5 MHz readout speed with EM gain 100. Observations in r\({}^{\prime}\), i\({}^{\prime}\), and z\({}^{\prime}\) had exposure time 0.115 sec; g\({}^{\prime}\) had exposure time of 3 sec. We obtained dark frames of the same settings. The pixel scale is 6 mas pixel\({}^{-1}\) (Long et al. in prep), and the science and dark frames were 512\(\times\)512 pixels (3\({}^{\prime\prime}\)\(\times\)3\({}^{\prime\prime}\)). Seeing was stable at 0.4\({}^{\prime\prime}\) throughout the observations. Footnote 3: Filter specifications and filter curves can be found in the MagAO-X instrument handbook at [https://magao-x.org/docs/handbook/index.html](https://magao-x.org/docs/handbook/index.html) We were unable to obtain observations of a photometric standard star. 
We observed HIP 67121 as a photometric standard, only to discover that it is itself a binary with separation too close to resolve but large enough to distort the shape of the PSF core. We performed all further analysis using HIP 67506 A as a photometric reference. To reduce the raw images in each filter, we dark subtracted each science frame, registered each frame using photutils DAOSTarfinder (Bradley et al., 2020; Stetson, 1987) to find the peak of HIP 67506 A (uncertainty \(\pm\)0.05 pixels on peak finding) and scipy hydimage (Vitanen et al., 2020) to center it, and rotated each frame to North up and East left (rotate CCW by telescope parallactic angle + 1.995 \(\pm\) 0.61 deg, Long et al. in prep). Finally we summed the images in each filter to maximize the signal to noise ratio of the faint companion. Figure 3 displays the final images in each science filter, shown with a log stretch. The companion, HIP 67506 C, is clearly visible at Figure 3: MagAO-X images of HIP 67506 Aand HIP 67506 Cin the four photometric filters g\({}^{\prime}\), r\({}^{\prime}\), i\({}^{\prime}\), z\({}^{\prime}\), shown with log stretch. HIP 67506 Ais centered in each image, and HIP 67506 C, located 0.1\({}^{\prime\prime}\) to the south east, is marked by the white pointers. North is up and East is left, and the stretch and spatial scale is same for each image. \(0.1\arcsec\) to the south east, indicated by the white cross-hairs. The spacial scale and stretch are the same in each image. The companion signal was strongest in the \(z^{\prime}\) filter. ### MagAO-X Photometry _Measuring photometry_. We obtained relative photometry for each filter with the following procedure. We estimated the background level by computing the median value in a wide annulus far from the star's halo (\(0.6\arcsec\)-\(1.2\arcsec\)). We used photutils aperture photometry tools to sum all pixels in an aperture of radius \(1.\lambda/D\)centered on A, and subtracted the sum of pixels with the same aperture area valued at the background level, to estimate the flux from HIP 67506 A. To estimate the flux from HIP 67506 C we repeated the previous with an aperture of the same size centered at its location. We subtracted the mean background value from the image, computed a radial profile of the background subtracted image (excluding the region containing C), and used the flux at C's location in the radial profile to estimate the contribution from HIP 67506 A's halo at that location, and subtracted that as well. We converted the flux estimates into magnitudes and subtracted to obtain the contrast in MagAO-X filters. _Uncertainty_. To estimate the uncertainty in the photometry measurements, we used the method of Mawet et al. 2014 for estimating signal to noise ratio in the regime of small number of photometric apertures, as we have at the separation of HIP 67506 C. At the separation HIP 67506 C, there are N = \(2\pi r\) resolution elements of size \(\lambda/D\)(the characteristic scale of speckle noise), where \(r=n\lambda\)/D and n varies with the filter wavelength. We defined a ring of N-3 resolution elements (neglecting those at and immediately to each side of HIP 67506 C) at separation \(r\) with radius 0.5 \(\lambda/D\), then applied Eqn (9) of Mawet et al. 
(2014), which is the Student's two-sample t-test: \[p(x,n2)=\frac{\bar{x}_{1}-\bar{x}_{2}}{s_{2}\sqrt{1+\frac{1}{n_{2}}}} \tag{1}\] where \(\bar{x}_{1}\) = HIP 67506 C flux, \(\bar{x}_{2}\) = mean[\(\Sigma\)pixels in apertures)], \(s_{2}\) = side[\(\Sigma\)(pixels in apertures)], n\({}_{2}\) = N-3, and S/N = p. The denominator of that equation is the noise term. We repeated this procedure for HIP 67506 A, defining a ring of apertures beyond the halo of both stars to estimate the background noise. _Applying the standard_. We used HIP 67506 A as the photometric standard star, however literature photometry for HIP 67506 A consisted of a blend of flux from HIP 67506 A and HIP 67506 C, since it was previously unresolved. So to use HIP 67506 A as a standard we used our measured contrasts to separate the flux contributions from both stars. First we computed color transformations for MagAO-X filters to Sloan prime system filters using MagAO-X filter curves, public Sloan Digital Sky Survey transmission curves4, and a spectral type G5V model from the Pickles Atlas (Pickles, 1998)5. We obtained published photometry for HIP 67506 A, displayed in Table 2, from the UCAC4 catalog (Zacharias et al., 2012) and converted to MagAO-X filters using our color transformation. We then computed the magnitude of HIP 67506 A and HIP 67506 C in the MagAO-X system as: Footnote 4: [http://classic.sdss.org/dr3/instruments/imager/#filters](http://classic.sdss.org/dr3/instruments/imager/#filters) Footnote 5: MagAO-X to SDSS color transformations for all spectral types can be found in the MagAO-X instrument handbook \[A_{\rm Flux}+C_{\rm Flux}=F_{0,\rm Vega}\times 10^{-0.4\times{\rm Total\,mag \,in\,MagAO-X\,system}} \tag{2}\] \[C_{\rm Flux}=A_{\rm Flux}\times{\rm Flux\,Contrast} \tag{3}\] \[A_{\rm Flux}\times(1+10^{-0.4\times{\rm mag\,Contrast}})=F_{0,\rm Vega}\times 1 0^{-0.4\times{\rm Total\,mag}} \tag{4}\] We then converted flux of A and C into the Sloan system using color transformation, displaying in Tables 2 and 3. ### Astrometry #### 3.3.1 Relative Astrometry Measurements The 2015 MagAO/Clio L\({}^{\prime}\) epoch and 2022 MagAO-X epoch give relative astrometry spanning a 7 year baseline. _The 2015 epoch_. The companion signal has been corrupted by the BDI KLIP algorithm - it is no longer a recognizable PSF shape, and in Pearce et al. 2022 we estimated a smaller flux than we measure in this work. The companion signal has been subject to over-subtraction by KLIP, and is not reliable for estimating photometry and astrometry (Soummer et al., 2012; Pueyo, 2016). To estimate the position of the companion, we performed a grid search of the parameters which influence the signal strength in post-processing, similar to Morzinski et al. (2015) Appendix E. For a grid of [x, y] pixel position and contrast \(c\), we injected a negative signal, modeled from the PSF of a median image of the HIP 67506 B 2015 dataset, into each HIP 67506 A image. We then performed KLIP reduction via the method in Pearce et al. 2022 and measured the root-mean-square (RMS) of pixels in a circle of radius \(1.5\lambda/D\) (\(\sim\)11 pixels) centered at the location of the companion signal. Figure 4 displays the grid search results for the x-pixel coordinate (left), y-pixel coordinate (middle), and contrast (right) versus the difference in RMS between the reduced image with and without the injected signal. 
We fit a Gaussian to each parameter, while keeping the other parameters fixed at their best value, and took the mean and standard deviation as the best-fit value and uncertainty for each parameter. Figure 5 (top) shows the unsubtracted, KLIP-reduced image of HIP 67506 C (left, same as Figure 1, log stretch), the best value model from Figure 4 (middle, linear stretch), and the residuals post-KLIP with that model subtracted from each image pre-KLIP (right, log stretch). \begin{table} \begin{tabular}{l c} \hline Parameter & Value \\ \hline \multicolumn{2}{c}{Stellar Properties} \\ \hline Spectral Type & K7–M2 \\ T\({}_{\rm eff}\) & \(3600^{+250}_{-350}\) K \\ log(L) & \(-1.177^{+0.06}_{-0.08}\) L\(\odot\) \\ Sloan m\({}_{g^{\prime}}\) & \(16.74\pm 0.1\) \\ Sloan m\({}_{r^{\prime}}\) & \(15.61\pm 0.05\) \\ Sloan m\({}_{i^{\prime}}\) & \(14.45\pm 0.04\) \\ Sloan m\({}_{z^{\prime}}\) & \(14.05\pm 0.03\) \\ Sloan g\({}^{\prime}\)-r\({}^{\prime}\) & \(1.14\pm 0.1\) \\ Sloan r\({}^{\prime}\)-i\({}^{\prime}\) & \(1.16\pm 0.07\) \\ \hline \hline \multicolumn{2}{c}{Astrometry} \\ \hline \multicolumn{2}{c}{2015-05-31} \\ \hline Separation & \(240\pm 42\) mas \\ Position Angle & \(85\pm 13\) deg \\ \hline \multicolumn{2}{c}{2022-04-18} \\ \hline Separation & \(100.9\pm 0.7\) mas \\ Position Angle & \(145.1\pm 0.8\) deg \\ \hline \end{tabular} \end{table} Table 3: Properties of HIP 67506 C With HIP 67506 A registered at [\(x,y\)] = [89.5, 89.5] (origin is lower left), we find: \(\bar{x}=75.76\pm 2.63\) pixels, and relative separation \(\rho_{x}=218\pm 42\) mas; \(\bar{y}=90.88\pm 3.02\) pixels, \(\rho_{y}=-22\pm 48\) mas; total separation and position angle are \(\rho=240\pm 42\) mas, \(\theta=85\pm 13\) deg. _The 2022 epoch._ We measured the relative astrometry in the MagAO-X z\({}^{\prime}\) image following a modified version of the method described in Pearce et al. (2019) and Pearce et al. (2021). We modeled the PSF core as a simple 2-dimensional Gaussian function and varied the model parameters using the python Markov Chain Monte Carlo package emcee (Foreman-Mackey et al., 2013) with 100 walkers. Our model had seven parameters: \(x,y\) subpixel position (Gaussian prior with \(\mu=\) center from DAOStarFinder, \(\sigma=\) FWHM\(/2.35\), FWHM \(=1\,\lambda/D\approx 0.03\arcsec\) in \(z^{\prime}\)), amplitude (Gaussian prior with \(\mu=\) peak from DAOStarFinder, \(\sigma=\) Poisson noise), background level (Gaussian prior with \(\mu=\) mean background noise), Gaussian width in the \(x\) and \(y\) direction (Gaussian prior with \(\mu=\) FWHM/2.35, \(\sigma=0.01\)), and rotation relative to the x axis (Uniform prior on [0, \(\pi\)/2]). We found that 5000 steps were sufficient for the chains to converge (Gelman-Rubin statistic \(<1.2\) for all parameters), with a burn-in of 1000 steps. We computed the model fit for the location of HIP 67506 A and HIP 67506 C in the 2022 z\({}^{\prime}\) image, where HIP 67506 C's signal was strongest. The data, model, and residuals for the two measurements are shown in Figure 5 (middle and bottom). We used the MagAO-X astrometric solution (Long et al., in prep)6 to compute [\(\rho\) (mas), \(\theta\) (deg)] for each [\(\Delta x\),\(\Delta y\)] (pixels) between A and C in the MCMC chains, then took the mean and standard deviation as the [\(\rho,\theta\)] for the 2022 epoch. Detector distortion is negligible at 0.1\({}^{\prime\prime}\) (Long et al. in prep). We find \(\rho=100.9\pm 0.7\) mas, \(\theta=145.1\pm 0.8\) deg.
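As a schematic of this fit, the snippet below sets up a seven-parameter 2D-Gaussian model and samples it with emcee on a toy image stamp; the priors, noise model, stamp, and starting values are illustrative stand-ins rather than the ones used in the actual reduction.

```python
import numpy as np
import emcee

def gaussian2d(params, xx, yy):
    x0, y0, amp, bkg, sx, sy, theta = params
    a = (np.cos(theta)**2)/(2*sx**2) + (np.sin(theta)**2)/(2*sy**2)
    b = -np.sin(2*theta)/(4*sx**2) + np.sin(2*theta)/(4*sy**2)
    c = (np.sin(theta)**2)/(2*sx**2) + (np.cos(theta)**2)/(2*sy**2)
    return bkg + amp*np.exp(-(a*(xx-x0)**2 + 2*b*(xx-x0)*(yy-y0) + c*(yy-y0)**2))

def log_prob(params, xx, yy, stamp, noise, mu0):
    x0, y0, amp, bkg, sx, sy, theta = params
    if not (0 <= theta <= np.pi/2) or amp <= 0 or sx <= 0 or sy <= 0:
        return -np.inf
    # loose Gaussian priors on position, centred on the peak-finding estimate
    lp = -0.5*(((x0-mu0[0])/1.0)**2 + ((y0-mu0[1])/1.0)**2)
    model = gaussian2d(params, xx, yy)
    return lp - 0.5*np.sum((stamp - model)**2 / noise**2)

# toy stamp standing in for the z' cut-out around the star
yy, xx = np.mgrid[0:21, 0:21]
truth = (10.2, 9.8, 50.0, 2.0, 1.6, 1.6, 0.3)
stamp = gaussian2d(truth, xx, yy) + np.random.normal(0, 1.0, xx.shape)

ndim, nwalkers = 7, 100
p0 = np.array([10, 10, 45, 2, 1.5, 1.5, 0.4]) + 1e-2*np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(xx, yy, stamp, 1.0, (10, 10)))
sampler.run_mcmc(p0, 5000, progress=False)
x_fit, y_fit = sampler.get_chain(discard=1000, flat=True)[:, :2].mean(axis=0)
```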
Footnote 6: Available in the MagAO-X instrument handbook, [https://magao-x.org/docs/handbook/](https://magao-x.org/docs/handbook/) ## 4 Results ### Photometry We compared our magnitudes in the Sloan filter system with synthetic photometry from two stellar evolution grids, the MESA Isochrones and Stellar Tracks (MIST, Dotter, 2016; Choi et al., 2016; Paxton et al., 2011, 2013, 2015), and stellar tracks and isochrones with the Padova and Trieste Stellar Evolution Code (PARSEC, Bressan et al., 2012). We used our absolute g\({}^{\prime}\), \(r^{\prime}\), i\({}^{\prime}\), and z\({}^{\prime}\) SDSS magnitudes for HIP 67506 A and HIP 67506 C as well as g\({}^{\prime}\)-\(r^{\prime}\) and r\({}^{\prime}\)-i\({}^{\prime}\) colors for evaluating which models in each grid best describe our observations. For each isochrone set we minimized the \(\chi^{2}\) of the synthetic photometry to our data as \[\chi^{2}=\sum\left(\frac{M_{\rm x,obs}-M_{\rm x,model}}{M_{\rm x,uncert}}\right)^{2} \tag{5}\] where \(M_{\rm x}\) is the absolute magnitude in a given filter or \(\Delta\) magnitude in a color. We imposed the constraint that the age must be the same for HIP 67506 A and HIP 67506 C, and computed the final goodness of fit as \(\chi^{2}=\chi^{2}_{A}+\chi^{2}_{C}\). We obtained the MIST7 isochrone synthetic photometry in the SDSS ugriz system with rotation rate \(v/v_{\rm crit}\) = 0.0 and 0.4, [Fe/H] = [-4.00, -2.00] in 0.50 dex steps and [Fe/H] = [-2.00, +0.50] in 0.25 dex steps, and log(Age) = [5.0, 10.3] in 0.05 dex steps. Footnote 7: Accessed from [https://waps.cfa.harvard.edu/MIST/model_grids.html](https://waps.cfa.harvard.edu/MIST/model_grids.html) From the MIST isochrone \(\chi^{2}\) minimization, we determine T\({}_{\rm eff}\) = 6000\(\pm\)350 K and log(L) = 0.28\({}^{+0.06}_{-0.08}\) L\({}_{\odot}\) for HIP 67506 A, and T\({}_{\rm eff}\) = 3600\({}^{+250}_{-350}\) K and log(L) = \(-1.17^{+0.06}_{-0.08}\) L\({}_{\odot}\) for HIP 67506 C. Figure 6 shows the reduced \(\chi^{2}\) surface for log(T\({}_{\rm eff}\)) and log(L) for the overall lowest \(\chi^{2}\) MIST isochrone (\(\chi^{2}=36.7\)), with age = 14 Myr, rotation v/v\({}_{\rm crit}\) = 0.4, [Fe/H] = 0.25 dex for A, and [Fe/H] = 0.0 for C. Values of log(T\({}_{\rm eff}\)) are not well constrained for A, spanning log(T\({}_{\rm eff}\))\(\sim\)3.76-3.78 (5700-6000 K). The insets in Figure 6 display reduced \(\chi^{2}\) as a function of mass at 14 Myr, with the best fitting values occurring at M\({}_{\rm A}\) = 1.1 M\({}_{\odot}\), M\({}_{\rm C}\) = 0.4 M\({}_{\odot}\). A second local minimum (\(\chi^{2}=39.2\)) occurred at age = 5.6 Gyr, M\({}_{\rm A}\) = 1.1 M\({}_{\odot}\), and M\({}_{\rm C}\) = 0.65 M\({}_{\odot}\). (A plot of \(\chi^{2}_{\rm min}\) as a function of age is included in the supplementary material.) We used PARSEC version 1.2S with the YBC bolometric correction library (Chen et al., 2019) and the revised Vega SED from Bohlin et al. (2020), and retrieved isochrone tables from log(age) = [6.0, 10.13] dex in intervals of 0.1 dex and metallicities [M/H] = [-4.0, 0.5] dex in intervals of 0.5 dex, with synthetic photometry in the SDSS Figure 8: Relative astrometry of HIP 67506 C relative to A for the MagAO 2015 epoch (purple) and the MagAO-X 2022 epoch (orange). The abscissa and ordinate axes display the position of HIP 67506 C relative to A in mas in right ascension (RA) and declination (Dec).
The motion of a non-moving background object at the position of HIP 67506 C is given by the black track; the predicted position in 2022, given then 2015 position, is an open diamond. The observed position and uncertainty in each epoch is shown as filled circles (uncertainties are smaller than the marker for the 2022 epoch). The observed motion of the HIP 67506 C is not consistent with a background object, and is likely due to orbital motion. Figure 7: Color-magnitude diagram (CMD) of Sloan r\({}^{\prime}\)-i\({}^{\prime}\) vs. Sloan g\({}^{\prime}\) absolute magnitude. Points are photometry from the CARMENES sample of well-characterized M- and L dwarfs (Cifuentes et al., 2020) and a selection of Hipparcos stars with SDSS photometry and T\({}_{\rm eff}\) estimates from McDonald et al. 2012. Our photometry of HIP 67506 A (star) and HIP 67506 C (diamond) and uncertainties (black errorbars) are overplotted. A and C are colored according to the T\({}_{\rm eff}\) of the best-fit MIST model shown in Figure 6. The best-fitting MIST models correspond to T\({}_{\rm eff}\) values consistent with nearby objects on the CMD. ugriz system. For PARSEC isochrone \(\chi^{2}\) minimization, we determine T\({}_{\rm eff}\) = 6000\(\pm\)350 K and log(L) = 0.29\({}^{+0.06}_{-0.08}\) L\({}_{\odot}\) for HIP 67506 A, T\({}_{\rm eff}\) = 3600\({}^{+250}_{-350}\) K and log(L) = -1.18\({}^{+0.06}_{-0.08}\) L\({}_{\odot}\) for HIP 67506 C. Our photometry was insufficient to place meaningful constraints on the age of either star. Figure 7 shows a color-magnitude diagram of SDSS r-i color vs. SDSS g absolute magnitude. HIP 67506 A (purple star) and HIP 67506 C (orange diamond) are plotted with our photometry and colored according to our isochrone-derived T\({}_{\rm eff}\) estimates. Also plotted are reference stars from the CARMENES sample of well-characterized M- and L dwarfs (Cifuentes et al., 2020) and a selection of Hipparcos stars with SDSS photometry and T\({}_{\rm eff}\) estimates from McDonald et al. 2012. Our colors and temperature estimates are consistent with the reference stars. We estimate the spectral type of HIP 67506 A and HIP 67506 C to be SpT\({}_{\rm A}\)\(\approx\) F8V-G2V and SpTc \(\approx\) K7V-M2V. ### Astrometry Figure 8 displays a common proper motion plot of HIP 67506 C relative to HIP 67506 A. We show the observed separation of HIP 67506 C in right ascension and declination for the 2015 and 2022 epochs (filled circles and error bars), the expected track if HIP 67506 C were a non-moving background object (zero proper motion; black track), and the predicted position of HIP 67506 C at the 2015 observation if it were a background object (open diamond). The observed position of HIP 67506 C does not follow the expected motion for a distant background object. We infer that the relative motion of HIP 67506 C is more consistent with a bound object than an unassociated object. This is supported by the large proper motion anomaly of HIP 67506 A. Using the two position angles of Table 3, we determined that the position angle of HIP 67506 C at the Gaia epoch of 2016.0 was 90\(\pm\)12\({}^{\circ}\), which agrees with the proper motion anomaly vector PA at the Gaia epoch of 96.6\(\pm\)4.1\({}^{\circ}\)(Kervella et al., 2022). Our astrometry was insufficient to meaningfully constrain the orbit or dynamical mass, due to there being only two astrometric points and large error bars on the 2015 epoch. 
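The isochrone comparison described above (Eqn 5 with a shared age for A and C) amounts to a small grid search; the sketch below shows the idea, using a hypothetical grid format (a list of dicts) rather than the actual MIST/PARSEC table schema.

```python
import numpy as np

def chi2(obs_mags, obs_errs, model_mags):
    """Eqn (5): photometric chi^2 between observed and synthetic magnitudes/colors."""
    obs, err, mod = map(np.asarray, (obs_mags, obs_errs, model_mags))
    return np.sum(((obs - mod) / err) ** 2)

def best_joint_fit(iso_grid, obs_A, err_A, obs_C, err_C):
    """Minimise chi^2 = chi^2_A + chi^2_C over an isochrone grid, forcing A and C
    to lie on the same-age isochrone. Each entry is a hypothetical dict:
    {'age': Myr, 'mass': Msun, 'mags': [g, r, i, z, g-r, r-i]}."""
    best_total, best_age = np.inf, None
    for age in sorted({entry["age"] for entry in iso_grid}):
        rows = [e for e in iso_grid if e["age"] == age]
        chi2_A = min(chi2(obs_A, err_A, e["mags"]) for e in rows)
        chi2_C = min(chi2(obs_C, err_C, e["mags"]) for e in rows)
        if chi2_A + chi2_C < best_total:
            best_total, best_age = chi2_A + chi2_C, age
    return best_age, best_total
```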
## 5 Conclusion We have shown that HIP 67506 A has a previously unknown 0.1\({}^{\prime\prime}\) companion, originally detected in 2015 with MagAO/Clio and BDI in L\({}^{\prime}\). The shape was distorted from a typical PSF due to post-processing, and might have been easily dismissed with the other residuals at that radius. However, several secondary indications hinted that the dubious candidate companion signal for HIP 67506 A in Pearce et al. (2022) was a strong candidate and merited follow-up observations: the poor Gaia astrometric signal, the significant PMa with the right acceleration vector angle, and the overluminosity of the Gaia photometry. Our analysis in Pearce et al. (2022) pointed to a possible high mass brown dwarf. We followed up in 2022 with MagAO-X and the companion was immediately and easily detected and determined to be a low mass star. The low S/N signal of HIP 67506 C at such a small IWA was bolstered by secondary indicators, which turned out to be powerful predictors of the genuine companion. We estimate HIP 67506 A and HIP 67506 C to be type F8-G2 and K7-M2, respectively. Further astrometric and photometric measurements are required to constrain properties and orbital elements. ## Acknowledgements L.A.P. acknowledges research support from the NSF Graduate Research Fellowship. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1746060. J.D.L. thanks the Heising-Simons Foundation (Grant #2020-1824) and NSF AST (#1625441, MagAO-X). S.Y.H. was supported by NASA through the NASA Hubble Fellowship grant #HST-HF2-51436.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. MagAO-X was developed with support from the NSF MRI Award #1625441. The Phase II upgrade program is made possible by the generous support of the Heising-Simons Foundation. We thank the LCO and Magellan staffs for their outstanding assistance throughout our commissioning runs. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This research has made use of the Washington Double Star Catalog maintained at the U.S. Naval Observatory. _Facilities:_ Las Campanas Observatory, Magellan:Clay (MagAO-X) _Software:_ Numpy (Harris et al., 2020), Astropy (Price-Whelan et al., 2018), Matplotlib (Hunter, 2007), Scipy (Virtanen et al., 2020), emcee (Foreman-Mackey et al., 2013), corner.py (Foreman-Mackey, 2016), Photutils (Bradley et al., 2020) ## Data Availability The data underlying this article are available at [https://github.com/logan-pearce/HIP67506-AC-Public-Data-Release](https://github.com/logan-pearce/HIP67506-AC-Public-Data-Release) and at DOI: 10.5281/zenodo.7098006.
2310.08577
Visual Data-Type Understanding does not emerge from Scaling Vision-Language Models
Recent advances in the development of vision-language models (VLMs) are yielding remarkable success in recognizing visual semantic content, including impressive instances of compositional image understanding. Here, we introduce the novel task of Visual Data-Type Identification, a basic perceptual skill with implications for data curation (e.g., noisy data-removal from large datasets, domain-specific retrieval) and autonomous vision (e.g., distinguishing changing weather conditions from camera lens staining). We develop two datasets consisting of animal images altered across a diverse set of 27 visual data-types, spanning four broad categories. An extensive zero-shot evaluation of 39 VLMs, ranging from 100M to 80B parameters, shows a nuanced performance landscape. While VLMs are reasonably good at identifying certain stylistic \textit{data-types}, such as cartoons and sketches, they struggle with simpler data-types arising from basic manipulations like image rotations or additive noise. Our findings reveal that (i) model scaling alone yields marginal gains for contrastively-trained models like CLIP, and (ii) there is a pronounced drop in performance for the largest auto-regressively trained VLMs like OpenFlamingo. This finding points to a blind spot in current frontier VLMs: they excel in recognizing semantic content but fail to acquire an understanding of visual data-types through scaling. By analyzing the pre-training distributions of these models and incorporating data-type information into the captions during fine-tuning, we achieve a significant enhancement in performance. By exploring this previously uncharted task, we aim to set the stage for further advancing VLMs to equip them with visual data-type understanding. Code and datasets are released at https://github.com/bethgelab/DataTypeIdentification.
Vishaal Udandarao, Max F. Burg, Samuel Albanie, Matthias Bethge
2023-10-12T17:59:30Z
http://arxiv.org/abs/2310.08577v3
# Visual data-type understanding does not emerge from scaling vision-language models ###### Abstract Recent advances in the development of vision-language models (VLMs) are yielding remarkable success in recognizing visual semantic content, including impressive instances of compositional image understanding. Here, we introduce the novel task of _Visual Data-Type Identification_, a basic perceptual skill with implications for data curation (e.g., noisy data-removal from large datasets, domain-specific retrieval) and autonomous vision (e.g., distinguishing changing weather conditions from camera lens staining). We develop two datasets consisting of animal images altered across a diverse set of 27 visual _data-types_, spanning four broad categories. An extensive zero-shot evaluation of 39 VLMs, ranging from 100M to 80B parameters, shows a nuanced performance landscape. While VLMs are reasonably good at identifying certain stylistic _data-types_, such as cartoons and sketches, they struggle with simpler _data-types_ arising from basic manipulations like image rotations or additive noise. Our findings reveal that (i) model scaling alone yields marginal gains for contrastively-trained models like CLIP, and (ii) there is a pronounced drop in performance for the largest auto-regressively trained VLMs like OpenFlamingo. This finding points to a blind spot in current frontier VLMs: they excel in recognizing semantic content but fail to acquire an understanding of visual _data-types_ through scaling. By analyzing the pre-training distributions of these models and incorporating _data-type_ information into the captions during fine-tuning, we achieve a significant enhancement in performance. By exploring this previously uncharted task, we aim to set the stage for further advancing VLMs to equip them with visual data-type understanding. Code and datasets are released at github.com/bethgelab/DataTypeIdentification. ## 1 Introduction Vision-Language Foundation Models (VLMs) (Bommasani et al., 2021) lie at the frontier of the machine learning ecosystem. Profiting from high-capacity transformer architectures (Vaswani et al., 2017) and large-scale pre-training, these models excel at identifying the semantic content in images (Radford et al., 2021; Pham et al., 2023; Jia et al., 2021). They also exhibit strong robustness to image distortions and perceptual changes as assessed on benchmarks like ImageNet-C (Hendrycks and Dietterich, 2019), ImageNet-Sketch (Wang et al., 2019), and ObjectNet (Barbu et al., 2019). Taking ImageNet-C as a concrete example, a classifier is tasked with correctly identifying a category (e.g., a _stingray_) in the presence of a particular data transformation (e.g., _defocus blur_). Similarly, the other domains and perceptual transformations contained in ImageNet-C, ImageNet-Sketch, and ObjectNet can be seen as examples of different _Visual Data-Types_ obtained from ImageNet through applying image transformations that affect the appearance but not the content of the image. The prevalent strategy in computer vision to cope with variable data-types is to use domain invariant classifiers, often achieved via data augmentation during training. An alternative strategy would be to retain the data-type specific information and explicitly model its composition with the semantic content of the image (Fig. 1A). This constitutes a symmetric split of the total image information into the complementary components of _semantics_ and _visual data-types_ (Granlund and Knutsson, 1995).
Humans can flexibly switch between these two complementary aspects and visual data-type identification is an integral part of human perception (Oliva and Torralba, 2007; Ren et al., 2020; Bracci et al., 2023). The recent breakthrough of large language models (LLMs) to mimic human text understanding is reflected by remarkable compositional reasoning skills and flexibility to cope with arbitrary contexts and textual data-types. This suggests that VLMs could also gain an increasingly flexible, compositional image understanding to cope with arbitrary visual data-types by inheriting it from the use of such powerful LLMs. Therefore, we seek to investigate to what extent the increasing robustness of VLMs to distribution shifts could be a consequence of compositional _data-type understanding_. The most likely alternative would be that the increasing robustness of VLMs originates from increasing domain invariance (Mintun et al., 2021). However, VLMs differ in two important ways from ImageNet-trained classification models of the last decade: (1) They are trained on much more data crawled from the internet making it difficult to judge whether a test image is in-domain or Figure 1: **Data-type identification highly impacts vision tasks.** Complementary to standard _semantic_ recognition tasks **(A)**, _data-type identification_ targets recognising style and other contextual domain information. It is applicable for many practical scenarios, e.g., **(B)** data curation, and **(C)** autonomous cars and agents. In all contexts, flexible recognition of data-types is paramount, yet, VLMs exhibit poor performance on different data-types as illustrated by 4 select VLMs on 6 data-types (highlighted in the boxes). Notably, there is no one-size-fits-all model, underscoring the challenge of universal _data-type identification_. The bar plots report the informedness metrics, for more details refer to Sec. 4.1. out-of-domain (Mintun et al., 2021; Fang et al., 2022; Nguyen et al., 2022), and (2) Due to the compositional nature of language, training on image-text-pairs could facilitate a compositional understanding of images in VLMs. Both points drive performance on a large set of visual benchmarks, yet, it is not easy to dissect their specific contributions. In addition, compositional understanding itself is a property that needs to be learned and thus expected to gradually improve with the amount of training data and model scale (Wiedemer et al., 2023). Here, we test the hypothesis that dataset robustness of VLMs could be a consequence of compositional _data-type understanding_ by creating a carefully designed _data-type identification_ task and investigating to what extent VLMs exhibit a compositional understanding of semantic context and image appearance. Data-type identification is a necessary condition for data-type understanding: If a VLM understands the data-type of an image, e.g., the blurring operation, it needs to be able to identify it, independently from the particular image content. Further, identifying the visual data-type of an image in addition to its semantic context is relevant in many real-world scenarios. For _(1) data curation and filtering_ this is useful, for instance to exclude images of unwanted appearance from an image corpus (e.g., blurred samples), or to create a specific stylized domain generalization dataset (e.g., cartoons, sketches) (see Fig. 1B). 
In the context of _(2) autonomous vision_ (e.g., self-driving cars, household robots), knowing the data-type of cameras is relevant to interpret the data and intervene accordingly: for example, adapting driving style or sensor sensitivity based on detecting changing weather conditions versus sun-glare (see Fig. 1C). Rather than engineering narrow solutions for each of these problems individually, the flexibility of VLMs affords a general ability to cope with all possible conditions. A compositional understanding of data-types would be an attractive solution to achieve this level of generality, and it could be highly useful for practical challenges such as the long-tailed test-time distribution encountered in autonomous driving (Dosovitskiy et al., 2017; Makansi et al., 2021; Zhou et al., 2022). Due to the combinatorial number of possible conditions and the open-ended nature of perception for an autonomous agent, the importance of a compositional understanding of data-types extends to robotics at large to deal with variable conditions in households, agriculture, or healthcare. In summary, our work aims to make progress on _Data-Type Identification_; for this, we created two novel datasets containing images of animals, spanning 27 different data-types (see Fig. 2). On this data, we zero-shot benchmarked 39 state-of-the-art VLMs, with model sizes ranging from 100M to 80B parameters, across contrastive and LLM-based VLMs. We find that scaling up model size does not yield significant improvements. In particular, the largest auto-regressively trained VLMs perform significantly worse than their smaller contrastively-trained counterparts like CLIP. By investigating their performance across individual data-types, we found connections to structures in the pre-training data and vision embedding spaces of VLMs. Using this, we show that performance on the novel data-type identification task can be enhanced by fine-tuning with carefully tailored data. Our findings highlight an important limitation in the training of current leading VLMs: while they clearly excel on recognizing semantic content, acquiring data-type identification skills does not emerge from simply scaling up but rather requires a systematic change of the training data. ## 2 Related Work **Stress-testing VLMs.** Initial reports on the abilities of VLMs (e.g., in visual question answering) were largely anecdotal. Very recently, there is a growing interest in systematic investigation of such capabilities, often entailing the creation of synthetic datasets tailored for specific evaluations (Yuskegonul et al., 2022; Parcalabescu et al., 2021; Thrush et al., 2022; Hsieh et al., 2023; Zhao et al., 2022; Lewis et al., 2022; Yamada et al., 2022; Ma et al., 2023; Kamath et al., 2023; Marathe et al., 2023; Yarom et al., 2023; Bitton-Guetta et al., 2023; Bordes et al., 2023). Here, we too synthesize a controlled dataset, but distinctively introduce the new task of _Data-Type Identification_, a basic perceptual skill that remains largely underexplored in previous work. 
**Distribution shifts, anomaly detection and domain adaptation.** While many existing approaches study perceptually altered data, e.g., distribution shifts (Hendrycks et al., 2021; Taori et al., 2020; Schneider et al., 2020; Qiu et al., 2022; Rauber et al., 2017; Koh et al., 2021), domain adaptation (Farahani et al., 2021; You et al., 2019; Wang and Deng, 2018), out-of-distribution detection (Hendrycks and Gimpel, 2016; Yang et al., 2021; Liu et al., 2020), and anomaly detection (Roth et al., 2022; Han et al., 2022; Pang et al., 2021), they often only determine the presence of an anomaly or shift without pinpointing its exact nature. In contrast, if an intervention to an anomaly is necessary, we need to pinpoint its exact nature, which is the goal of _Data-Type Identification_. Very few previous works have touched upon this question, and only in narrow scenarios. Some studied identifying a few specific perceptual data-types using convolutional neural networks (CNNs) in a binary classification setup, e.g., image mirroring (Lin et al., 2020) and cropping (Van Hoorick and Vondrick, 2021). Zhu et al. (2022) trained linear probes to understand predictability of domains like paintings or cartoons from the image features of a pre-trained CNN. Paiss et al. (2023) investigated counting objects in VLMs (similar to our multi_same and multi_different data-types, see Fig. 2). An et al. (2023) showed that CLIP can reasonably infer a limited number of simple data-types in a binary classification setup and used this to improve CLIP's zero-shot semantic classification. Rashtchian et al. (2023) used linear probes on the image embedding spaces of vision-only and vision-language models to identify perceptual manipulations on images, without accounting for their text encoders. Our _Data-Type Identification_ framework subsumes all these setups in a unifying approach: we investigate an extensive range of 27 _data-types_ across a broad perceptual and stylistic range for 39 VLMs, encompassing both contrastively-trained discriminative models and auto-regressively trained generative models. Our work therefore enables studying generalisation of VLMs on a broad set of data-types. ## 3 The TypeIdent Datasets To probe the effectiveness of VLMs in identifying data-types, we created two novel datasets consisting of images of a single animal in a scene, spanning 27 data-types across 4 categories: **geometric** (e.g., left-rotation), **pixel** (e.g., applying Gaussian noise), **style** (e.g., creating a cartoon-version), and **semantic** (e.g., replacing a single animal with multiple animals). Note that geometric and pixel data-types can be obtained from simple, well-defined transformations such as pixel re-arrangement, linear filtering, or additive noise. In contrast, most transformations generating different style and semantic data-types from a reference image distinctively rely on the use of more complex neural networks. For a complete list of all data-types studied, see Fig. 2 and refer to the Appendix. Our first dataset, _SyntheticTypeIdent_, is constructed by first generating a set of 50 reference-images of animals using a text-to-image model, with these images uniformly sampled across 10 animal species. Each generated reference-image was then altered with all our 27 data-type transformation functions, resulting in 1,350 evaluation samples (see Fig. 2 for an example of all data-type transformations).
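Assembling such an evaluation set is mechanically simple once the transformation functions are fixed; the sketch below illustrates the idea for two of the 27 data-types (helper names and parameters are illustrative, not the released pipeline).

```python
import numpy as np
from PIL import Image

def left_rotate(img: Image.Image) -> Image.Image:
    # geometric data-type: rotate 90 degrees counter-clockwise
    return img.rotate(90, expand=True)

def gaussian_noise(img: Image.Image, sigma: float = 25.0) -> Image.Image:
    # pixel data-type: add zero-mean Gaussian noise
    arr = np.asarray(img).astype(np.float32)
    noisy = np.clip(arr + np.random.normal(0.0, sigma, arr.shape), 0, 255)
    return Image.fromarray(noisy.astype(np.uint8))

# two of the 27 transformation functions, keyed by data-type name
TRANSFORMS = {"left_rotate": left_rotate, "gaussian_noise": gaussian_noise}

def build_eval_set(reference_paths):
    """Apply every transformation to every reference image:
    50 references x 27 data-types -> 1,350 (image, label) samples."""
    samples = []
    for path in reference_paths:
        img = Image.open(path).convert("RGB")
        for name, fn in TRANSFORMS.items():
            samples.append((fn(img), name))
    return samples
```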
For creating the geometric and pixel data-type images, we directly applied the corresponding point-wise image transformation function (e.g., adding Gaussian noise) on the reference-images. To precisely control the transformations applied, we regenerated style and most semantic-level data-type images again using the same diffusion model. Figure 2: **Proposed data-types. Example images from our _SyntheticTypeIdent_ dataset for each of our 27 data-types, spanning four categories: geometric, pixel, style, and semantic data-types.** For our second dataset, _NaturalTypeIdent_, we manually curated 50 reference-images from KaggleAnimalImages (Banerjee, 2023). We then followed the exact same procedure for creating data-type images from the reference-images. However, all generative steps were replaced by a refined, deduplicated web-retrieval step for mining style and semantic data-type images. This provides an in-the-wild, naturally occurring testbed, thereby complementing the precisely controlled _SyntheticTypeIdent_ dataset. Since we can procure appropriate images for only 25 data-types (we omit multi_different and tiger_stripes), _NaturalTypeIdent_ only contains 1,250 samples. Importantly, we manually verified both datasets to ensure that the target data-type for each image was the most prominent data-type reflected in it, enabling a careful study between models without interference between data-types. For details about dataset creation refer to the Appendix. ## 4 Benchmarking VLMs on Data-Type Identification ### 4.1 Experimental Setup We evaluated 39 VLMs from 13 model families, with sizes ranging from 100M to 80B parameters, across two groups: discriminative, contrastively-trained VLMs (e.g., CLIP) which we refer to as **C-VLMs**, and generative, auto-regressively trained VLMs (e.g., OpenFlamingo) which we refer to as large multi-modal models (**LMMs**) (Li, 2023). Specifically, from the C-VLM group we evaluated CLIP (Radford et al., 2021), BLIP-2-ITM (Li et al., 2023c), and CoCa (Yu et al., 2022); in the LMM group we tested Fromage (Koh et al., 2023b), GILL (Koh et al., 2023a), Multimodal-GPT (Gong et al., 2023), OpenFlamingo (Awadalla et al., 2023), Otter (Li et al., 2023a), MPlugOwl (Ye et al., 2023), LLaVA (Liu et al., 2023a), BLIP-2-LLM (Li et al., 2023c), InstructBLIP (Dai et al., 2023), and IDEFICS (Laurencon et al., 2023). We tested all VLMs on correctly classifying the target data-type for each evaluation image, in a zero-shot manner. We evaluated C-VLMs by computing the cosine-similarity of the image embedding and the text embedding of the specific data-type description, e.g., "A blurred image of an animal." (see Appendix for full list). For a fair comparison, we evaluated LMMs by log-likelihood scoring (Dai et al., 2023; Li et al., 2023b) each of the 27 data-type description texts, with the prompt: "<image> Q: Describe the image. A: <data_type_description>", replacing <data_type_description> by the corresponding text description for a particular data-type. We quantified model performance using informedness, \(I_{k}\)=TPR\({}_{k}\)-FPR\({}_{k}\) on data-type \(k\), which in addition to the true positive rate (TPR, i.e., accuracy) accounts for the false positive rate (FPR). We summarized model performance as mean informedness across data-types, \(\mu_{I}\)=\(\langle I_{k}\rangle_{k}\). See Appendix for evaluation details.
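Concretely, the C-VLM scoring and the informedness metric reduce to a few lines once embeddings are precomputed; the sketch below assumes nothing about a specific model API and is only meant to make the protocol explicit.

```python
import numpy as np

def zero_shot_predict(image_embs, text_embs):
    """C-VLM scoring: cosine similarity between each image embedding and the
    27 data-type description embeddings; prediction = argmax.
    image_embs: (n_images, d), text_embs: (27, d), both unnormalised."""
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return (img @ txt.T).argmax(axis=1)

def informedness(y_true, y_pred, k):
    """I_k = TPR_k - FPR_k for data-type k."""
    pos, neg = (y_true == k), (y_true != k)
    tpr = np.mean(y_pred[pos] == k) if pos.any() else 0.0
    fpr = np.mean(y_pred[neg] == k) if neg.any() else 0.0
    return tpr - fpr

def mean_informedness(y_true, y_pred, n_types=27):
    return np.mean([informedness(y_true, y_pred, k) for k in range(n_types)])
```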
Figure 3: **(A) VLMs struggle with identifying data-types.** Less recent, contrastively learned C-VLMs (e.g., CLIP) outperform the much larger and more recent LMMs (e.g., IDEFICS) despite the latter’s strong language model priors. Scaling shows limited effect on performance. Chance-level performance is at 0. **(B) Weak scaling laws for VLMs.** Power-law fits reveal that for achieving strong data-type identification (mean informedness>0.7), current VLMs would need to surpass a trillion parameters. This calls for an alternative strategy to just scaling up current VLMs. ### 4.2 VLMs struggle with identifying data-types Our evaluations reveal that all tested VLMs exhibit limited performance on both _SyntheticTypeIdent_ and _NaturalTypeIdent_ (Fig. 3A). We found that C-VLMs performed better than LMMs, even though the latter are more recent and orders of magnitude larger. The best C-VLM achieved mean informedness \(\mu_{I}{=}(0.47,0.50)\) while its LMM counterpart achieved \(\mu_{I}{=}(0.22,0.25)\) on _SyntheticTypeIdent_ and _NaturalTypeIdent_, respectively. As a control and for direct comparison, we also tested models on animal identification on _SyntheticTypeIdent_. As expected, the performance on this semantic recognition task is very good, achieving a mean informedness across models of 0.89. This confirms quantitatively that the performance on identifying data-types (detailed plots in Appendix) is substantially worse than on object recognition. We further note three key findings from our evaluations: **LMMs, a downgrade?** Surprisingly, LMMs consistently underperform C-VLMs, despite using LLMs as text models, compared to the smaller text encoders in C-VLMs. Notably, the largest LMM (IDEFICS, 80B parameters) substantially underperforms an orders-of-magnitude smaller CLIP-RN50 (100M parameters). The rich language grounding that LLMs inherit from extensive real-world text training seemingly does not provide benefits for identifying data-types. This result challenges the prevailing notion that strong language model priors can improve fine-grained understanding in VLMs (Cascante-Bonilla et al., 2023; Doveh et al., 2023; Yuksekgonul et al., 2022; Wang et al., 2023). We hypothesise two plausible causes for this performance drop to be studied in detail by future work: (1) _Weak alignment_ between the vision encoder and LLM might degrade the real-world symbolic grounding innate to each independently (Bavishi et al., 2023). (2) _Discriminative-Generative gap_ might be at play, i.e., discriminating between answers is easier than generating one (Vapnik, 1999; Ng and Jordan, 2001). Both suggest that C-VLM contrastive objectives might better equip them for data-type identification than LMM auto-regressive objectives (Liu et al., 2023b). **Weak scaling behaviour.** Interestingly, within the C-VLM and LMM groups, our results suggest weak scaling effects. We analysed this quantitatively by fitting a power-law (Alabdulmohsin et al., 2022; Henighan et al., 2020; Cherti et al., 2023) on the observed mean informedness vs. model scale relationship for CLIP (C-VLM) and IDEFICS (LMM), since they span the widest parameter sizes within a model family. Fig. 3B confirms the weak scaling law, indicating a severe limitation for current VLMs: to achieve a performance practicable for data-type identification (\(\mu_{I}{>}0.7\)), current models would need to surpass a trillion parameters. This calls into question the effects of model scaling, and whether alternate strategies are required to enhance their performance.
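As an illustration of such a power-law extrapolation, the sketch below fits mean informedness against parameter count and solves for the model size reaching \(\mu_{I}=0.7\); the data points and the fitted functional form are placeholders for illustration, not the measured values behind Fig. 3B.

```python
import numpy as np
from scipy.optimize import curve_fit

# (parameter count, mean informedness) pairs for one model family; placeholder values
params = np.array([1.0e8, 4.0e8, 1.0e9, 9.0e9, 8.0e10])
mean_inf = np.array([0.30, 0.33, 0.36, 0.40, 0.44])

def power_law(n, a, b):
    return a * n ** b

(a, b), _ = curve_fit(power_law, params, mean_inf, p0=(0.05, 0.1))
# extrapolate the parameter count needed to reach mean informedness 0.7
n_required = (0.7 / a) ** (1.0 / b)
print(f"fit: {a:.3f} * N^{b:.3f}; N needed for mu_I = 0.7: {n_required:.2e}")
```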
**Stark performance differences between simple and complex data-types.** To get a finer-grained understanding of the overall model performance (Fig. 4) we break down the per-data-type averaged mean informedness across all models. We find that while VLMs are reasonably good at identifying style and semantic data-types, they falter systematically on pixel and geometric data-types. For the majority of data-types even the best-performing models struggle to surpass chance-level performance, and no single model consistently outperforms others across a majority of data-types. Instead, multiple models each excel in identifying just a few specific data-types. This reveals inherent biases in the pre-training procedures of VLMs, limiting the desired generality of foundation models. Figure 4: **Average performance across data-types on _SyntheticTypeIdent_.** VLMs perform reasonably on style and semantic data-types (e.g., pencil_sketch, cartoon) and show weak results on pixel and geometric data-types (e.g., gaussian_noise, high_contrast). Chance-level at 0. ## 5 Understanding why VLMs underperform in identifying data-types We next investigate two plausible reasons for the sub-par performance of VLMs in identifying data-types: (1) their image embeddings lack data-type discriminative information, and (2) their pre-training datasets, despite the enormous sizes, lack sufficient data-type specific information, limiting models from learning data-type discriminative features. We probe both candidate reasons in detail, performing a case study with CLIP, and find good evidence for both of them. Due to CLIP being a prototypical C-VLM, and the widespread adoption of its vision encoders in LMMs, we suggest that our findings should be broadly applicable. **Reason 1: Peeking into CLIP's embedding space.** We visualized the CLIP image embeddings of _SyntheticTypeIdent_ using t-SNE (Van der Maaten and Hinton, 2008). Colour-coding the embeddings by (1) the image's semantic concept, i.e., the animal type (Fig. 5 left), and (2) the image's target data-type (Fig. 5 right), uncovered an interesting dichotomy: while distinct embedding clusters emerge based on semantic concepts (animals), most data-types are not clearly demarcated (see Appendix for KNN and linear-probe analysis). This suggests that CLIP's vision encoder is somewhat invariant to data-types, despite it not being explicitly trained to be so (only random-resized cropping was used as training data-augmentation, discussion in Appendix). As most C-VLMs and LMMs use CLIP image embeddings, this potentially explains the poor performance of all VLMs on identifying data-types. We further note that the embeddings of only three data-types are closely clustered (tattoo, patch_and_reshuffle, and typographic), yet, these are precisely the embeddings which are not directly semantically distinguishable--this suggests that CLIP might not encode semantic and data-type information compositionally but rather sacrifices one (data-type) over the other (semantics). This offers a consistent explanation why CLIP models are so effectively robust at classifying semantic content (Fang et al., 2022; Shi et al., 2023; Nguyen et al., 2022; Santurkar et al., 2022; Ramanujan et al., 2023) but fail at solving the complementary problem of data-type identification. **Reason 2: Peeking into VLM pre-training datasets.** Fig. 4 revealed that VLMs fare well on some complex data-types while falling short on simple ones.
An intuitive explanation is pre-training dataset imbalance: an abundance of samples aligning with style data-types (e.g., cartoon, pencil_sketch) and a paucity of simple data-types (e.g., gaussian_noise, left_rotate). To confirm this quantitatively, we analysed LAION-2B-en, CLIP's pre-training dataset. We first counted and retrieved all samples containing representative data-type keywords in the captions (e.g., "blurry"; see Appendix for details and a semantic search-based analysis). As pure keyword-frequency might not account for mis-aligned image-caption pairs, we estimated an _alignment probability_, i.e., the fraction of retrieved samples where the image aptly captures the data-type concept, by manually labeling 100 random samples per data-type for data-type accuracy. Finally, we computed an _abundancy score_ as the product of text-frequency and _alignment probability_. Correlating this _abundancy score_ with averaged model performance across data-types revealed strong positive associations (Spearman rank correlation, \(r{=}0.557\) for _SyntheticTypeIdent_; \(r{=}0.489\) for _NaturalTypeIdent_). The association is even stronger on _SyntheticTypeIdent_ when correlating the _abundancy score_ with CLIP-model averaged performance (\(r{=}0.606\)), suggesting that the varying model performance across data-types can be explained by the constraints of their pre-training data distribution. Figure 5: **What does CLIP’s image embedding space encode?** CLIP-RN50’s image embeddings, colour-coded by ground-truth semantic concept (left) and data-type (right), reveal its pronounced affinity for recognising semantic concepts, while being largely invariant to data-type distinctions. ## 6 Improving VLMs to identify data-types Having understood some factors limiting the performance of VLMs, we experiment with methods using data-type information-rich samples to improve them. Here, we investigate CLIP (C-VLM) and Otter (LMM) as two representative models. ### 6.1 Few-shot training-free adaptation does not help Can few-shot examples boost performance without updating model weights, using in-context learning (Dong et al., 2022; Brown et al., 2020) or training-free adapters (Zhang et al., 2021; Udandarao et al., 2022)? We answer next. CLIP TIP-Adapter. We test the TIP-Adapter (Zhang et al., 2021) framework with CLIP, using two few-shot example selection strategies: _Random_ (selecting examples with random animals) and _SameAnimal_ (selecting examples with the same animal as the test image). We evaluate \(1,2,4,8,16,32,48\) shots with RN50 and ViT-L-14 vision encoders. We found few-shot adaptation degrading performance across all settings (see Fig. 5(a)). This presumably originates from TIP-Adapter leveraging semantic similarities in CLIP's image embedding space, which lacks information to disambiguate between data-types (see Fig. 5). Hence, TIP-Adapter cannot capture any information discriminative across data-types but rather exploits semantic similarities between concepts, which is detrimental for our task. Otter In-context Learning. We explored various in-context example selection strategies and found that selecting \(n\) examples with one whose data-type matched the target of the test sample and the other \(n{-}1\) drawn randomly worked best; we evaluate \(n{=}2{,}5{,}15\) examples on the _Random_ and _SameAnimal_ strategies, using LLaMA-7B (Touvron et al., 2023) or MPT-7B (MosaicML, 2023) as LLM-backbones (see Appendix for details and in-context scaling results with LLaVA).
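A minimal sketch of this in-context example selection (the sample fields are hypothetical keys, not the actual data format):

```python
import random

def select_icl_examples(test_sample, pool, n, strategy="SameAnimal"):
    """Pick n in-context examples: one whose data-type matches the test target,
    the remaining n-1 drawn at random from the candidate pool.
    Samples are dicts with hypothetical keys {'image', 'animal', 'data_type'}."""
    if strategy == "SameAnimal":
        candidates = [s for s in pool if s["animal"] == test_sample["animal"]]
    else:  # "Random"
        candidates = list(pool)
    matching = [s for s in candidates if s["data_type"] == test_sample["data_type"]]
    chosen = [random.choice(matching)] if matching else []
    remaining = [s for s in candidates if not chosen or s is not chosen[0]]
    chosen += random.sample(remaining, min(n - len(chosen), len(remaining)))
    random.shuffle(chosen)
    return chosen
```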
Surprisingly, we found an initial uptick in performance with \(n{=}2\), followed by a decline as in-context examples increased (see Fig. 5(b)). We attribute this to Otter overfitting on its in-context examples, i.e., simply predicting a random data-type from within the in-context examples. Since chance-level performance also increases with fewer in-context examples, this could explain improved performance with \(n{=}2\). We conclude that in-context learning does not enhance Otter's ability to identify data-types. Takeaways.Our empirical results strongly indicate that training-free few-shot approaches fail to enhance VLMs for identifying data-types, likely because VLMs lack data-type discriminative information in their embeddings. Rather, an intensive training procedure to infuse data-type knowledge might be more promising. ### Fine-tuning with appropriate data-mixtures improves performance Data-mixtures.We created a specialised dataset, _TeDaTy_ (Teaching Data-Types), incorporating data-type information into images and text-captions. We construct training images, sourced from COCO (Lin et al., 2014), ImageNet (Deng et al., 2009), PACS (Li et al., 2017), and Domain-Net (Peng et al., 2019), by applying our data-type transformation functions and adapting the captions Figure 6: **Few-shot training-free adaptation methods fail. Both TIP-Adapter with CLIP (top) and in-context learning with Otter (bottom) fail to substantially improve VLM data-type identification.** accordingly, e.g., "This is a cartoon image of a dog.". TeDaTy comprises 8 in-distribution (ID) data-types, holding out 19 for out-of-distribution (OOD) generalisation tests (see Appendix for details). To isolate effects of data-distributions, we experiment with three data-mixtures: (1) TeDaTy, (2) TeDaTy+COCO, and (3) TeDaTy+COCO+IN100k (sub-sampled from ImageNet). We also fine-tune only on COCO as a control to disentangle gains from fine-tuning and specific data-mixtures. **Results.** Fine-tuning CLIP improved performance on the ID data-types for all TeDaTy mixtures (Tab. 1). However, COCO-only fine-tuning degraded ID-performance, highlighting the importance of incorporating key data-type information with TeDaTy. Freezing the vision-encoder while fine-tuning provided large ID-boosts and surprisingly even improved OOD. Freezing the text-encoder improved ID-performance but degraded OOD-performance, likely because of large gradients from only updating the vision-encoder. This corroborates previous CLIP-tuning studies (Zhai et al., 2022). **Transfer to Otter.** To fine-tune Otter, we kept the vision encoder frozen (best CLIP fine-tuning strategy) and tuned only the perceiver resampler, cross-attention and embedding layers. We found fine-tuning with all TeDaTy variants improved ID-performance up to two-fold, while preserving OOD-performance (see Tab. 2). Fine-tuning only with COCO degrades ID-performance, reinforcing the importance of a dataset that captures data-type knowledge. **Takeaways.** Our results suggest that training with data-mixtures explicitly inducing data-type information is a promising direction for improving VLM data-type identification. ## 7 Conclusion In this work, we introduced and motivated _Data-Type Identification_ as a basic perceptual skill with general implications for visual foundation modeling. We created two novel datasets to study model performance on this task, and released a third dataset tailored to fine-tune models to improve data-type identification. 
Our extensive zero-shot experiments across 39 VLMs revealed that they struggle to identify many data-types. Interestingly, scaling model size results only in minimal gains--we traced this back to the structure in VLM embedding spaces and pre-training datasets, and suggest that studying weak alignment between image-encoders and LLMs (Bavishi et al., 2023) as well as the discriminative-generative gap (Vapnik, 1999; Ng and Jordan, 2001; Saunders et al., 2022) will be promising directions for future work (see for example Liu et al. (2023)). We found that training-free few-shot methods do not improve performance, and that it is necessary to incorporate data-type information back into the training process. Taken together, our study reveals an important limitation of the desired generality of foundation models, and the dataset and insights presented in this paper set the stage for further advancing VLMs for visual data-type understanding. \begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Data-Mixture**} & \multicolumn{6}{c|}{_SyntheticClypedent_} & \multicolumn{6}{c}{_NaturalTypedent_} \\ \cline{2-11} & \multicolumn{2}{c|}{**Full**} & \multicolumn{2}{c|}{**Freeze-Image**} & \multicolumn{2}{c|}{**Freeze-Text**} & \multicolumn{2}{c|}{**Full**} & \multicolumn{2}{c|}{**Freeze-Image**} & \multicolumn{2}{c}{**Freeze-Text**} \\ & ID-I & OOD-I & ID-I & OOD-I & ID-I & OOD-I & ID-I & OOD-I & ID-I & OOD-I \\ \hline Zero-shot CLIP & 0.451 & 0.457 & 0.451 & 0.457 & 0.451 & 0.457 & 0.440 & 0.473 & 0.440 & 0.473 & 0.440 & 0.473 \\ \hline COCO (control) & 0.451 & 0.468 & 0.354 & 0.465 & 0.488 & 0.451 & 0.494 & 0.507 & 0.451 & 0.500 & 0.457 & 0.473 \\ \hline TeDaTy & 0.669 & 0.392 & 0.777 & 0.469 & 0.780 & 0.370 & 0.691 & 0.412 & 0.654 & 0.474 & 0.646 & 0.379 \\ + COCO & 0.646 & 0.394 & 0.717 & 0.465 & 0.631 & 0.371 & 0.629 & 0.400 & 0.680 & 0.470 & 0.574 & 0.356 \\ + COCO + IN100k & 0.600 & 0.383 & 0.700 & 0.469 & 0.586 & 0.354 & 0.557 & 0.381 & 0.634 & 0.456 & 0.471 & 0.323 \\ \hline \hline \end{tabular} \end{table} Table 1: **CLIP ViT-B-32 fine-tuning results on _Typedent_ datasets with different data-mixtures.** \begin{table} \begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{**Data-Mixture**} & \multicolumn{3}{c|}{_SyntheticClypedent_} & \multicolumn{2}{c}{_NaturalTypedent_} \\ \cline{2-5} & ID-I & OOD-I & ID-I & OOD-I \\ \hline Zero-shot Otter & 0.051 & 0.180 & 0.102 & 0.256 \\ \hline COCO (control) & 0.020 & 0.246 & 0.085 & 0.315 \\ \hline TeDaTy & 0.088 & 0.061 & 0.111 & 0.111 \\ + COCO & 0.106 & 0.168 & 0.171 & 0.276 \\ + COCO + IN100k & 0.120 & 0.166 & 0.166 & 0.261 \\ \hline \hline \end{tabular} \end{table} Table 2: **Outer-LLaMA-7B fine-tuning results with different data-mixtures.** ## Reproducibility statement We provide code and datasets to reproduce all experiments in the paper here: [https://github.com/bethgelab/DataTypeIdentification](https://github.com/bethgelab/DataTypeIdentification). For the _TypeIdentDatasets_, we have provided comprehensive details on dataset creation in the Appendix. We specify the details of the 39 models tested along with their evaluation methods in the Appendix. For all our fine-tuning experiments, we used a fixed random seed for reproducibility. Further, we will release all our fine-tuned checkpoints and make public the WeightsAndBiases training logs for easy access. ### Acknowledgements The authors would like to thank (in alphabetic order): Alexander S. 
Ecker, Cagatay Yildiz, Evgenia Rusak, Roland Zimmermann, Shyamgopal Karthik, Surabhi S. Nath, Susanne Keller and Thomas Klein, for helpful comments and feedback. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting VU and MFB. VU thanks the European Laboratory for Learning and Intelligent Systems (ELLIS) PhD program for support. SA is supported by a Newton Trust Grant. This work was supported by the German Research Foundation (DFG): SFB 1233, Robust Vision: Inference Principles and Neural Mechanisms, TP4, project number: 276693517. MB is a member of the Machine Learning Cluster of Excellence, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC number 2064/1 - Project number 390727645.
2308.04107
Generic singularity behavior of conservative solutions to the Novikov equation
In this paper, we concentrate on the Novikov equation. We provide a description of the solution in a neighborhood of each singular point.
Zhen He, Wei Luo, Zhaoyang Yin
2023-08-08T07:44:15Z
http://arxiv.org/abs/2308.04107v2
# Generic regularity of conservative solutions to the Novikov equation ###### Abstract In this paper, we concentrate on the Novikov equation. For an open dense set of \(C^{3}\) initial data, we prove that the solution is piecewise smooth in the t-x plane, while the gradient \(uu_{x}\) can blow up along finitely many characteristic curves. And we provide a description of the solution in a neighborhood of each singular point. _2010 Mathematics Subject Classification_: 35B65, 35D30, 35F25, 35L03, 35L60. _Keywords_: Novikov equation; generic regularity; conservative weak solution; singularity ###### Contents * 1 Introduction * 2 Preliminaries * 3 Families of perturbed solutions * 4 Generic property * 4.1 Generic solutions of the semilinear system * 4.2 Proof of Theorem 1.1 * 5 Generic singularity behavior * 5.1 Proof of Theorem 1.2 ## 1 Introduction Consideration here is the initial-value problem for the Novikov equation in the form \[\left\{\begin{array}{l}u_{t}-u_{xxt}+4u^{2}u_{x}=3uu_{x}u_{xx}+u^{2}u_{xxx},\quad x\in\mathbb{R},\ t>0,\\ u|_{t=0}=u_{0},\quad x\in\mathbb{R}.\end{array}\right. \tag{1.1}\] The equation was proposed by Novikov in [16]. In [8], Chen, Hu and Liu showed that the Novikov equation can be viewed as a shallow water model. The Novikov equation (1.1) can be rewritten in the following compact form \[m_{t}+u^{2}m_{x}+\frac{3}{2}(u^{2})_{x}m=0,\quad m=u-u_{xx}. \tag{1.2}\] There is a huge literature devoted to equation (1.1); for local well-posedness we refer to [11, 12, 14, 20, 21, 22, 23]. Yan, Li and Zhang [22, 23] presented two blow-up results under suitable conditions. Wu and Yin [19] proved the global existence of weak solutions of equation (1.1), provided the initial data satisfy certain sign conditions. In [20], they also obtained global strong solutions under some conditions. Chen, Zhang and Liu [7] proved the existence and uniqueness of conservative solutions for the Novikov equation in \(H^{1}\cap W^{1,4}\). Another related shallow water model is the Camassa-Holm (CH) equation \[m_{t}+um_{x}+2u_{x}m=0,\quad m=u-u_{xx}. \tag{1.3}\] Recently, Li and Zhang [15] proved the generic property and described the singular behavior of the Camassa-Holm equation and the two-component Camassa-Holm equation. A generic property is a property which is satisfied by almost all elements of the whole set. Considering our PDE problem here, generic regularity is the regularity of the solutions arising from an open and dense subset in the space of initial data. Generic properties form an interesting problem in hyperbolic conservation laws. The original result was obtained by Schaeffer in [17], who showed that for one space dimensional conservation laws, generic solutions are piecewise smooth, with finitely many shocks in a bounded domain in the \((t,x)\) plane. The proof relies on the Hopf-Lax representation formula. For \(2\times 2\) Temple class systems, Dafermos and Geng proved a similar result in [9] by analyzing the solution along the characteristics. For general \(n\times n\) systems with \(n\geq 3\), Caravenna and Spinolo [6] proved that the generic property does not hold true. Recently, Bressan and his collaborators [2, 3, 4] studied the generic property and singularity behavior for the variational wave equation. In [2], the authors showed that for an open dense set of \(C^{3}\) initial data, the solution is piecewise smooth in the \((t,x)\) plane, while the gradient \(u_{x}\) will blow up along finitely many characteristic curves.
For an open dense set of initial data, the authors in [3] provided a detailed asymptotic description of the solution in a neighborhood of each singular point, where \(|u_{x}|\to\infty\), and analysed the different structures of conservative and dissipative solutions. Yang [24] studied generic regularity of energy conservative solutions to the rotation Camassa-Holm equation. Cai, Chen, Shen and Tan [5] studied the generic property of conservative solutions to the Hunter-Saxton type equations and gave a new way to construct a Finsler type metric which renders the flow uniformly Lipschitz continuous on bounded subsets of \(H^{1}(\mathbb{R}^{+})\). To the best of our knowledge, the generic properties of the Novikov equation have not been studied yet. In [7], the authors pointed out that the singularity for the conservative solutions of the Novikov equation is different from that of the Camassa-Holm equation. The wave speed \(c(u)\) for the Camassa-Holm equation is \(u\), while for the Novikov equation it is \(u^{2}\). Because of the different nonlinearity of the wave speed, the singular behavior is different. In this paper, we give an exact proof showing why the singular behavior differs between the Camassa-Holm equation and the Novikov equation. The content of this paper is the following. In Section 2, we recall some basic definitions and related results about the conservative solutions of the Novikov equation. Section 3 presents a perturbation lemma. In Section 4, we study the generic property of the Novikov equation. Section 5 is devoted to the asymptotic behavior of generic singularities. We state our main results as follows. **Theorem 1.1**.: _(Generic property) For any \(T{>}0\) fixed, there exists an open dense set of initial data_ \[\mathscr{D}\subset(C^{3}(\mathbb{R})\cap H^{1}(\mathbb{R})\cap W^{1,4}(\mathbb{R}))\] _such that for \(u_{0}\in\mathscr{D}\), the conservative solution u=u(t,x) of the Novikov equation (2.4) is differentiable in the complement of finitely many characteristic curves \(\gamma_{i}\), within the domain \([0,T]\times\mathbb{R}\)._ From the above generic regularity, we can obtain the asymptotic description of the solution in a neighborhood of each singular point, where \(|u_{x}|\rightarrow\infty\). **Theorem 1.2**.: _Consider generic initial data \(u_{0}\in\mathscr{D}\) as in Theorem 1.1 with \(u_{0}\in C^{\infty}(\mathbb{R})\). Call \((u,v,\xi,x,t)\) the corresponding solution of the semilinear system (2.17) and let u=u(x,t) be the solution to the original equation (2.4). Consider a singular point \(P=(t_{0},Y_{0})\) where \(v=\pi\), and set \((x_{0},t_{0})=(x(t_{0},Y_{0}),t(t_{0},Y_{0}))\). Generically, at the singular point, u has the following parametric expression._ * _If P is a point of Type_ \(\mathcal{I}\)_, i.e._ \(v=\pi\)_,_ \(v_{Y}=0\) _and_ \(v_{YY}\neq 0\)_, then_ \[u(t,x)=A(x-x_{0})^{\frac{3}{4}}+B(t-t_{0})+\mathcal{O}(1)(|t-t_{0}|^{2}+|x-x_{0}|^{\frac{7}{2}}) \tag{1.4}\] _for some constants_ \(A,B\)_._ * _If P is a point of Type_ \(\mathcal{II}\)_, i.e._ \(v=\pi\)_,_ \(v_{Y}\neq 0\) _and_ \(v_{YY}=0\)_, then_ \[u(t,x)=A(x-x_{0})^{\frac{1}{4}}+B(t-t_{0})+\mathcal{O}(1)(|t-t_{0}|^{2}+|x-x_{0}|) \tag{1.5}\] _for some constants_ \(A,B\)_._ ## 2 Preliminaries In this section, we give some basic definitions, the celebrated Thom transversality theorem and a useful lemma. Then we recall some basic results for the Novikov equation. The content of this section can be found in many books and monographs [1, 10, 18, 13].
**Definition 2.1**.: _[_1, 10, 18, 13_]_ _(Map transverse to a submanifold) Let F: X\(\rightarrow\)Y be a smooth map from manifold X to manifold Y. W is a submanifold of Y. We say F is transverse to W at a point x\(\in\)X, denoted by F \(\pitchfork_{x}\)W, if_ \(\bullet\) _either \(F(x)\notin W\)_ \(\bullet\) _or \(F(x)\in\)Wand \(T_{F(x)}Y=(dF)_{x}(T_{x}X)+T_{F(x)}W\) Here \(T_{x}X\) means the tangent space of X at point x._ _If F \(\pitchfork_{x}\)W for every \(x\in X\), we say F is transverse to W, and denote as F \(\pitchfork\)W_ **Definition 2.2**.: _[_1, 10, 18, 13_]_ _Let \(F:X\to Y\) be a smooth map from manifold \(X\) to \(Y\). A point \(y\in Y\) is a regular value if for every \(x\in X\) one has_ \[T_{y}Y=(dF)_{x}(T_{x}X).\] _In the special case where \(W=\{y\}\) consists of a single point, \(F\pitchfork W\) if and only if \(y\) is a regular value of F_ **Theorem 2.3**.: _[_1, 10, 18, 13_]_ _(Transversality lemma) Let \(X\), \(\Theta\) and \(Y\) be smooth manifolds and \(W\) a submanifolds of \(Y\). Let \(\theta\rightarrow\phi^{\theta}\) be a smooth map that to each \(\theta\in\Theta\) associates a function \(\phi^{\theta}\in C^{\infty}(X,Y)\), and define \(\Phi:X\times\Theta\to Y\) by setting \(\Phi(x,\theta)=\phi^{\theta}(x)\). If \(\Phi\pitchfork W\) then the set \(\theta\in\Theta;\phi^{\theta}\pitchfork W\) is dense in \(\Theta\)._ **Lemma 2.4**.: _[_15_]_ _Consider an ODE system_ \[\frac{d}{dt}u^{\epsilon}=f(u^{\epsilon}),\quad u^{\epsilon}(0)=u_{0}+ \epsilon_{1}v_{1}+...+\epsilon_{m}v_{m}, \tag{2.1}\] _where \(u^{\epsilon}(t):\mathbb{R}\rightarrow\mathbb{R}\), \(f\) is a Lipschitz function. The system is well posed in [0,T). Assume the matrix_ \[D_{\epsilon}u^{\epsilon}_{0}=(v_{1},v_{2},...,v_{m})\in\mathbb{R}^{n\times m}, \tag{2.2}\] _and the rank of this matrix is_ \[rank(D_{\epsilon}u^{\epsilon}_{0})=k. \tag{2.3}\] _Then for any \(t\in[0,T)\), \(rank(D_{\epsilon}u^{\epsilon})=k\)._ Next we will recall some basic results about the Novikov equation. The equation (1.1) can be written into \[\left\{\begin{array}{l}u_{t}+u^{2}u_{x}+\partial_{x}P_{1}+P_{2}=0,\\ u(0,x)=u_{0}(x),\quad x\in\mathbb{R},\end{array}\right. \tag{2.4}\] where \[P_{1}\triangleq p*(\frac{3}{2}uu_{x}^{2}+u^{3})\quad\text{and}\quad P_{2} \triangleq\frac{1}{2}p*u_{x}^{3}\] with \(p(x)=\frac{1}{2}e^{-|x|}\), and we define the weak solution as follows. **Definition 2.5**.: _[_7_]_ _The energy conservative solution \(u=u(t,x)\) of (2.4) satisfies 1. For any fixed t\(\geq 0\),u(t,x)\(\in H^{1}(\mathbb{R})\cap W^{1,4}(\mathbb{R})\). The map \(t\to u(t,\cdot)\) is Lipschitz continuous under the \(L^{4}\) metric. 2. The solution u=u(t,x) satisfies the initial condition (2.4) in \(L^{4}(\mathbb{R})\), and_ \[\int\int_{\Lambda}-u_{x}(\phi_{t}+u^{2}\phi_{x})+(-\frac{3}{2}uu_{x}^{2}-u^{3} +P_{1}+\partial_{x}P_{2})\phi dxdt+\int_{\mathbb{R}}u_{0,x}\phi(0,x)dx=0\] _for every test function \(\phi\in C^{1}_{c}(\Lambda)\) with \(\Lambda=\{(t,x)|t\in[0,\infty),x\in\mathbb{R}\}\). 3. The solution \(u=u(t,x)\) is conservative if the balance law is satisfied in the following sense: There exists a family of Radon measures \(\{\mu_{(t)},t\in\mathbb{R}\}\), depending continuously on time and w.r.t the topology of weak convergence of measures. For every \(t\in\mathbb{R}^{+}\), the absolutely continuous part of \(\mu(t)\) w.r.t. 
the Lebesgue measure has density \(u_{x}^{4}(t,\cdot)\), which provides a measure-valued solution to the balance law_

\[\int_{\mathbb{R}^{+}}\{\int(\phi_{t}+u^{2}\phi_{x})d\mu(t)+\int\big(4u^{3}u_{x}^{3}-4u_{x}^{3}(P_{1}+\partial_{x}P_{2})\big)\phi dx\}dt-\int_{\mathbb{R}}u_{0,x}^{4}\phi(0,x)dx=0\]

_for every test function \(\phi\in C_{c}^{1}(\Lambda)\)._

**Theorem 2.6**.: _[_7_]_ _Let \(u_{0}\in H^{1}(\mathbb{R})\cap W^{1,4}(\mathbb{R})\) be an absolutely continuous function of \(x\). Then the initial value problem (2.4) admits a unique energy conservative solution \(u(t,x)\) defined for all \((t,x)\in\mathbb{R}^{+}\times\mathbb{R}\). The solution also satisfies the following properties._

1. _\(u(t,x)\) is Hölder continuous with exponent_ \(\frac{3}{4}\) _in both_ \(t\) _and_ \(x\)_._
2. _The first energy density_ \(u^{2}+u_{x}^{2}\) _is conserved for any time_ \(t\geq 0\)_, i.e._ \[\mathscr{E}(t)=\|u(t)\|_{H^{1}}^{2}=\|u_{0}\|_{H^{1}}^{2}.\]
3. _The second energy density_ \(u^{4}+2u^{2}u_{x}^{2}-\frac{1}{3}u_{x}^{4}\) _is conserved in the following sense:_
   1. _An energy inequality is satisfied in_ \((t,x)\) _coordinates:_ \[\mathscr{F}(t)=\int_{\mathbb{R}}(u^{4}+2u^{2}u_{x}^{2}-\frac{1}{3}u_{x}^{4})dx\geq\mathscr{F}(0)\] _for any_ \(t\geq 0\)_._
   2. _Denote a family of Radon measures_ \(\nu_{(t)},t\in\mathbb{R}^{+}\)_, such that_ \[\nu_{(t)}(\mathscr{A})=\int_{\mathscr{A}}(u^{4}+2u^{2}u_{x}^{2})(t,x)dx-\frac{1}{3}\mu_{(t)}(\mathscr{A})\] _for any Lebesgue measurable set_ \(\mathscr{A}\) _in_ \(\mathbb{R}\)_. Then for any_ \(t\in\mathbb{R}^{+}\)_,_ \[\nu_{(t)}(\mathbb{R})=\mathscr{F}(0)=\int_{\mathbb{R}}(u^{4}+2u^{2}u_{x}^{2}-\frac{1}{3}u_{x}^{4})(0,x)dx.\] _For any_ \(t\in\mathbb{R}^{+}\)_, the absolutely continuous part of_ \(\nu_{(t)}\) _w.r.t. the Lebesgue measure has density_ \(u^{4}+2u^{2}u_{x}^{2}-\frac{1}{3}u_{x}^{4}\)_. For almost every_ \(t\in\mathbb{R}^{+}\)_, the singular part of_ \(\nu_{(t)}\) _is concentrated on the set where_ \(u=0\)_._
4. _A continuous dependence result holds. Consider a sequence of initial data_ \(u_{0,n}\) _such that_ \(\|u_{0,n}-u_{0}\|_{H^{1}\cap W^{1,4}}\to 0\) _as_ \(n\to\infty\)_. Then the corresponding solutions_ \(u_{n}(t,x)\) _converge to_ \(u(t,x)\) _uniformly for_ \((t,x)\) _in any bounded set._

By virtue of (2.4), a direct computation shows that the following conservation law holds

\[(\frac{u^{2}+u_{x}^{2}}{2})_{t}+(\frac{u^{2}u_{x}^{2}}{2}+uP_{1}+u\partial_{x}P_{2})_{x}=0. \tag{2.5}\]

And it is not hard to check that

\[(u^{4}+2u^{2}u_{x}^{2}-\frac{1}{3}u_{x}^{4})_{t}+\big[2u^{4}u_{x}^{2}-\frac{1}{3}u^{2}u_{x}^{4}+\frac{4}{3}u^{3}(P_{1}+\partial_{x}P_{2})\big]_{x}+\frac{4}{3}[(P_{1}+\partial_{x}P_{2})^{2}-(P_{2}+\partial_{x}P_{1})^{2}]_{x}=0. \tag{2.6}\]

Thus two conserved quantities can be derived from (2.5) and (2.6)

\[\mathscr{E}(t)\triangleq\int_{\mathbb{R}}(u^{2}+u_{x}^{2})(t,x)dx=\mathscr{E}(0), \tag{2.8}\]

\[\mathscr{F}(t)\triangleq\int_{\mathbb{R}}(u^{4}+2u^{2}u_{x}^{2}-\frac{1}{3}u_{x}^{4})(t,x)dx=\mathscr{F}(0).
\tag{2.7}\] To bound \(P_{i}(i=1,2)\), we may firstly observe that \[\|u\|_{L^{\infty}}^{2} \leq\|u\|_{H^{1}}^{2}=\mathscr{E}(0),\] \[\|u_{x}\|_{L^{4}}^{4} =3\int_{\mathbb{R}}(u^{4}+2u^{2}u_{x}^{2})dx-\mathscr{F}(t)\] \[\leq 3\big{(}\|u\|_{L^{\infty}}^{2}\int_{\mathbb{R}}(u^{2}+2u_{x}^ {2})-\mathscr{F}\big{)} \tag{2.10}\] \[\leq 3(2\mathscr{E}^{2}(t)-\mathscr{F}(t))=3(2\mathscr{E}^{2}(0) -\mathscr{F}(0)), \tag{2.9}\] and (2.9)-(2.10) imply that \[\|u_{x}\|_{L^{3}}^{3}\leq\sqrt{3\mathscr{E}(0)[2\mathscr{E}^{2}(0)-\mathscr{F} (0)]})\triangleq K. \tag{2.11}\] Then we can bound \(P_{i}\) and \(\partial_{x}P_{i}\) for i=1,2 as follows. \[\|P_{1}(t)\|_{L^{\infty}},\|\partial_{x}P_{1}(t)\|_{L^{\infty}} \leq\|p\|_{L^{\infty}}\|\frac{3}{2}uu_{x}^{2}+u^{3}\|_{L^{1}}\leq\frac{3}{4} \mathscr{E}^{\frac{3}{2}}(0),\] \[\|P_{1}(t)\|_{L^{2}},\|\partial_{x}P_{1}(t)\|_{L^{2}}\leq\|p\|_{L ^{2}}\|\frac{3}{2}uu_{x}^{2}+u^{3}\|_{L^{1}}\leq\frac{3}{2\sqrt{2}}\mathscr{E} ^{\frac{3}{2}}(0),\] \[\|P_{2}(t)\|_{L^{\infty}},\|\partial_{x}P_{2}(t)\|_{L^{\infty}} \leq\frac{1}{2}\|p\|_{L^{\infty}}\|u_{x}^{3}\|_{L^{1}}\leq\frac{1}{4}K,\] \[\|P_{2}(t)\|_{L^{2}},\|\partial_{x}P_{2}(t)\|_{L^{2}}\leq\frac{1}{ 2}\|p\|_{L^{2}}\|u_{x}^{3}\|_{L^{1}}\leq\frac{1}{2\sqrt{2}}K. \tag{2.12}\] Following the idea in [7], the characteristic equation is \[\frac{dx(t)}{dt}=u^{2}(t,x(t)). \tag{2.13}\] If we consider \(Y=Y(t,x)\) is a characteristic coordinate, and denote \(T=t\). Then we consider function \(f(t,x)=f(T,x(t,Y))\) as a function of \((T,Y)\) also denoted by \(f(T,Y)\). It is easy to check that \[Y_{t}+u^{2}Y_{x}=0, \tag{2.14}\] \[f_{t}+u^{2}f_{x}=f_{Y}(Y_{t}+u^{2}Y_{x})+f_{t}(T_{t}+u^{2}T_{x})=f_{T}. \tag{2.15}\] We also denote \[v=2\arctan u_{x},\quad q=\frac{(1+u_{x}^{2})^{2}}{Y_{x}}. \tag{2.16}\] Then the conservative solution is constructed by the following semilinear system \[\left\{\begin{array}{l}u_{T}=-\partial_{x}P_{1}-P_{2},\\ v_{T}=-u\sin^{2}\frac{v}{2}+2u^{3}\cos^{2}\frac{v}{2}-2\cos^{2}\frac{v}{2}(P_{ 1}+\partial_{x}P_{2}),\\ q_{T}=q[(2u^{3}+u)-2(P_{1}+\partial_{x}P_{2})]\sin v,\end{array}\right. \tag{2.17}\] with the initial condition \[\left\{\begin{array}{l}u(0,\beta)=u_{0}(x(0,\beta)),\\ v(0,\beta)=2\arctan(u_{0}^{\prime}(x(0,\beta))),\\ q(0,\beta)=1,\end{array}\right. \tag{2.18}\] for every \(\beta\in\mathbb{R}\). **Lemma 2.7**.: _[_7_]_ _Let \((u,v,q,Y)\) be the solution to (2.17) and (2.18), with \(q>0\). Then the set of points_ \[\{(t,x(t,Y),u(t,Y));(t,Y)\in\mathbb{R}^{+}\times\mathbb{R}\} \tag{2.19}\] _is the graph of a conservative solution to the Novikov equation (2.4)_ To study the singularities of the solution \(u\) of (2.4), we should focus on the level sets \[\{v(t,Y)=\pi\}.\] ## 3 Families of perturbed solutions. **Lemma 3.1**.: _Let \((u,v,q)\) be a smooth solution of the semilinear system (2.17), and let a point \((t_{0},T_{0})\in\mathbb{R}^{+}\times\mathbb{R}\) be given. If \((v,v_{Y},v_{YY})(t_{0},Y_{0})=(\pi,0,0)\), then there exists a three parameter family of smooth solutions \((u^{\theta},v^{\theta},\xi^{\theta})\) depending smoothly on \(\theta\in\mathbb{R}^{3}\) such that_ 1. _When_ \(\theta=0\in\mathbb{R}^{3}\)_, one recovers the original solution namely_ \((u^{0},v^{0})=(u,v)\)_;_ 2. _At the point_ \((t_{0},Y_{0})\) _when_ \(\theta=0\)_, one has_ \[rank\ D_{\theta}(v^{\theta},v^{\theta}_{Y},v^{\theta}_{YY})=3\] _._ Proof.: Let \((u,v,\xi)\) be the smooth solution of (2.17). 
And taking derivatives to the equation of \(v\), we obtain \[\frac{\partial}{\partial_{T}}v_{Y} =-u_{Y}\sin^{2}(\frac{v}{2})-u\cos(\frac{v}{2})\sin(\frac{v}{2}) v_{Y}+6u_{Y}u^{2}\cos^{2}(\frac{v}{2})-2u^{3}(\sin\frac{v}{2}\cos\frac{v}{2}v_{Y})\] \[\quad+\cos(\frac{v}{2})\sin(\frac{v}{2})v_{Y}(P_{1}+\partial_{x} P_{2})-2\cos^{2}\frac{v}{2}(\partial_{Y}P_{1}+\partial_{Y}\partial_{x}P_{2})\] \[=-u_{Y}\sin^{2}(\frac{v}{2})-\frac{1}{2}uv_{Y}\sin v+6u_{Y}u^{2} \cos^{2}(\frac{v}{2})-u^{3}v_{Y}\sin v \tag{3.1}\] \[\quad+\frac{1}{2}\cos vv_{Y}(P_{1}+\partial_{x}P_{2})-(\cos v+1) (\partial_{Y}P_{1}+\partial_{Y}\partial_{x}P_{2}).\] Following the same line \[\frac{\partial}{\partial_{T}}v_{YY}= u_{YY}\sin^{2}\frac{v}{2}+\frac{1}{2}u_{YY}v_{Y}\sin v-\frac{1}{2}u_{Y}v_{Y} \sin v-uv_{YY}\sin v-\frac{1}{2}v_{Y}^{2}\cos v-\frac{1}{2}\sin vv_{Y}^{2}(P_{1 }+\partial_{x}P_{2})\] \[+6u_{YY}u^{2}\cos^{2}\frac{v}{2}+12u_{Y}^{2}u\cos^{2}\frac{v}{2} +3u_{y}v_{Y}u^{2}\sin v-3u^{2}u_{Y}v_{Y}\sin v-u^{3}v_{YY}\sin v-u^{3}v_{Y}^{2}\cos v\] \[+\frac{1}{2}\cos vv_{Y}(\partial_{Y}P_{1}+\partial_{Y}\partial_{x }P_{2})+\sin vv_{Y}(\partial_{Y}P_{1}+\partial_{Y}\partial_{x}P_{2})-(\cos v+1 )(\partial_{Y}^{2}P_{1}+\partial_{Y}{}^{2}\partial_{x}P_{2})\] \[=u_{YY}(\sin^{2}\frac{Y}{2}+\frac{1}{2}v_{Y}\sin v+6u^{2}\cos^{2} \frac{v}{2}-u^{3}\sin v)+v_{Y}(-\frac{1}{2}u_{Y}\sin v-\frac{1}{2}v_{Y}\cos v -u^{3}v_{Y}\cos v)\] \[-\frac{1}{2}\sin vv_{Y}^{2}(P_{1}+\partial_{x}P_{2})+v_{Y}( \partial_{Y}P_{1}+\partial_{Y}\partial_{x}P_{2})(\frac{1}{2}\cos v+\sin v)-( \cos v+1)(\partial_{Y}\partial_{x}P_{1}+\partial_{Y}^{2}\partial_{x}P_{2}). \tag{3.2}\] \[\frac{\partial}{\partial_{t}}q_{Y}= \xi_{Y}[(2u^{3}+u)-2(P_{1}+\partial_{x}P_{2})]\sin v+q[(2u^{3}+u) -2(P_{1}+\partial_{x}P_{2})]\cos vv_{Y} \tag{3.3}\] \[+q[(6u^{2}u_{Y}+u_{Y})-2(\partial_{Y}P_{1}+\partial_{Y}\partial_{ x}P_{2})]\sin v,\] with \[P_{1}(Y)=\frac{1}{2}\int_{-\infty}^{+\infty}e^{-|\int_{Y}^{Y}\xi \cos^{4}\frac{v}{2}(T,\bar{Y})d\bar{Y}|}(\frac{3}{8}u\sin^{2}v+u^{3}\cos^{4} \frac{v}{2})\xi(T,\bar{Y})d\bar{Y},\] \[P_{2}(Y)=\frac{1}{8}\int_{-\infty}^{+\infty}e^{-|\int_{Y}^{\bar{ Y}}(\xi\cos^{4}\frac{v}{2})(T,\bar{Y})d\bar{Y}|}(\xi\sin v\sin^{2}\frac{v}{2})(T, \bar{Y})d\bar{Y},\] \[\partial_{x}P_{1}(Y)=\frac{1}{2}(\int_{Y}^{+\infty}-\int_{-\infty }^{Y})e^{-|\int_{Y}^{Y}\xi\cos^{4}\frac{v}{2}(T,\bar{Y})d\bar{Y}|}(\frac{3}{8} u\sin^{2}v+u^{3}\cos^{4}\frac{v}{2})\xi(T,\bar{Y})d\bar{Y}, \tag{3.4}\] \[\partial_{x}P_{2}(Y)=\frac{1}{8}(\int_{Y}^{+\infty}-\int_{-\infty }^{Y})e^{-|\int_{Y}^{Y}(\xi\cos^{4}\frac{v}{2})(T,\bar{Y})d\bar{Y}|}(\xi\sin v \sin^{2}\frac{v}{2})(T,\bar{Y})d\bar{Y}.\] And we have identities about \(Y\) derivatives as follows, \[\partial_{Y}P_{i}=\xi\cos^{4}\frac{v}{2}\partial_{x}P_{i},\qquad i=1,2,\] \[\partial_{Y}\partial_{x}P_{2}=-\frac{1}{4}\sin v\cos^{2}\frac{v}{2}\xi+\xi \cos^{4}\frac{v}{2}P_{2}, \tag{3.5}\] \[\partial_{Y}^{2}\partial_{x}P_{2}=\xi^{2}\cos^{8}\frac{v}{2}(- \frac{3}{2}u_{xx}^{2}u_{xx}+\partial_{x}P_{2})=\xi^{2}\cos^{8}\frac{v}{2}(- \frac{3}{16}v_{Y}q\sin^{2}\frac{v}{2}+\partial_{x}P_{2}).\] We now construct families \((\bar{u}^{\theta},\bar{v}^{\theta},\bar{q}^{\theta})\) of perturbations of the initial data along the charactristic as \[\bar{u}^{\theta}(Y)=\bar{u}(Y)+\sum_{i=1,2,3}\theta_{i}U_{i}(Y), \tag{3.6}\] \[\bar{v}^{\theta}(Y)=\bar{v}(Y)+\sum_{i=1,2,3}\theta_{i}V_{i}(Y), \tag{3.7}\] \[\bar{q}^{\theta}(Y)=\bar{q}(Y)+\sum_{i=1,2,3}\theta_{i}Q_{i}(Y). 
\tag{3.8}\] Therefore, an ODE system of dimension 5 can be constrcuted as follows: \[\frac{\partial}{\partial t}\left(\begin{array}{c}u\\ v\\ v\\ q\\ v_{Y}\\ v_{YY}\end{array}\right)=\left(\begin{array}{c}-\partial_{x}P_{1}-P_{2}\\ -u\sin^{2}\frac{v}{2}+2u^{3}\cos^{2}\frac{v}{2}-2\cos^{2}\frac{v}{2}(P_{1}+ \partial_{x}P_{2})\\ q[(2u^{3}+u)-2(P_{1}+\partial_{x}P_{2})]\sin v\\ A\\ B\end{array}\right), \tag{3.9}\] with \(A\triangleq-u_{Y}\sin^{2}(\frac{v}{2})-\frac{1}{2}uv_{Y}\sin v+6u_{Y}u^{2}\cos ^{2}(\frac{v}{2})-u^{3}v_{Y}\sin v+\frac{1}{2}\cos vv_{Y}(P_{1}+\partial_{x}P _{2})-(\cos v+1)(\partial_{Y}P_{1}+\partial_{Y}\partial_{x}P_{2})\) and \(B\triangleq u_{YY}(\sin^{2}\frac{v}{2}+\frac{1}{2}v_{Y}\sin v+6u^{2}\cos^{2} \frac{v}{2}-u^{3}\sin v)+v_{Y}(-\frac{1}{2}u_{Y}\sin v-\frac{1}{2}v_{Y}\cos v -u^{3}v_{Y}\cos v)-\frac{1}{2}\sin vv_{Y}^{2}(P_{1}+\partial_{x}P_{2})+v_{Y}( \partial_{Y}P_{1}+\partial_{Y}\partial_{x}P_{2})(\frac{1}{2}\cos v+\sin v)-( \cos v+1)(\partial_{Y}\partial_{x}P_{1}+\partial_{Y}^{2}\partial_{x}P_{2})\). We construct a family of solutions \((\bar{u}^{\theta},\bar{v}^{\theta},\bar{\xi}^{\theta})\) to (3) of perturbations of the initial data as in (3.6)-(3.8). Take derivative w.r.t. \(\theta\), \[\frac{\partial}{\partial t}\left(\begin{array}{c}D_{\theta}u^{\theta}\\ D_{\theta}v^{\theta}\\ D_{\theta}q^{\theta}\\ D_{\theta}v_{Y}^{\theta}\\ D_{\theta}v_{YY}^{\theta}\end{array}\right)=\left(\begin{array}{c}D_{ \theta}f_{1}^{\theta}\\ D_{\theta}f_{2}^{\theta}\\ D_{\theta}f_{3}^{\theta}\\ D_{\theta}f_{4}^{\theta}\\ D_{\theta}f_{5}^{\theta}\end{array}\right) \tag{3.10}\] where \(f_{i}^{\theta}\)(i=1,2,3,4,5) are the perturbation of the right-hand side of (3.10). Then we can get \[\frac{\partial}{\partial t}\left(\begin{array}{c}D_{\theta}u^{\theta}\\ D_{\theta}v^{\theta}\\ D_{\theta}q^{\theta}\\ D_{\theta}v_{Y}^{\theta}\\ D_{\theta}v_{Y}^{\theta}\\ D_{\theta}v_{Y}^{\theta}\end{array}\right)=\left(\begin{array}{cccccc}D_{u}f _{1}^{\theta}&D_{v}f_{1}^{\theta}&D_{y}f_{1}^{\theta}&D_{vv}f_{1}^{\theta}&D_ {v_{YY}}f_{1}^{\theta}\\ D_{u}f_{2}^{\theta}&D_{v}f_{2}^{\theta}&D_{q}f_{2}^{\theta}&D_{v_{YY}}f_{2}^{ \theta}&D_{v_{YY}}f_{2}^{\theta}\\ D_{u}f_{3}^{\theta}&D_{v}f_{3}^{\theta}&D_{p}f_{3}^{\theta}&D_{v_{YY}}f_{3}^{ \theta}&D_{v_{YY}}f_{3}^{\theta}\\ D_{u}f_{4}^{\theta}&D_{v}f_{4}^{\theta}&D_{q}f_{4}^{\theta}&D_{vv}f_{4}^{ \theta}&D_{v_{YY}}f_{4}^{\theta}\\ D_{u}f_{5}^{\theta}&D_{v}f_{5}^{\theta}&D_{q}f_{5}^{\theta}&D_{vv}f_{5}^{ \theta}&D_{v_{YY}}f_{5}^{\theta}\end{array}\right)\left(\begin{array}{cccc}D_{ \theta_{1}}\bar{u}^{\theta}&D_{\theta_{2}}\bar{u}^{\theta}&D_{\theta_{3}}\bar {u}^{\theta}\\ D_{\theta_{1}}\bar{v}^{\theta}&D_{\theta_{2}}\bar{v}^{\theta}&D_{\theta_{3}}\bar {v}^{\theta}\\ D_{\theta_{1}}\bar{v}^{\theta}&D_{\theta_{2}}\bar{v}^{\theta}&D_{\theta_{3}}\bar {v}^{\theta}\\ D_{\theta_{1}}\bar{v}^{\theta}_{YY}&D_{\theta_{2}}\bar{v}^{\theta}_{YY}&D_{ \theta_{3}}\bar{v}^{\theta}_{YY}\end{array}\right). \tag{3.11}\] Utilizing Lemma 2.4, we only need to prove the Lipschitz continuity of \(f\). Because of the smoothness of \((u,v,\xi)\), we only need to consider the Lipschitz continuity of \(P_{i}\) and \(\partial_{x}P_{i}\) (i=1,2). And for the sake of simplcity, here we only detail the analysis for \(\partial_{u}(P_{1})\) and \(\partial_{u}(\partial_{x}P_{1})\). All other derivatives can be estimated by the same methods. 
\[|\partial_{u}P_{1}| =|\frac{1}{2}\int_{-\infty}^{\infty}e^{-\int_{Y}^{Y}(q\cos^{4}\frac{ v}{2}(T,\bar{Y})d\bar{Y})}(\frac{3}{8}\sin^{2}v+3u^{2}\cos^{4}\frac{v}{2})\xi(T,\bar{Y})d \bar{Y}|\] \[\leq C\|\Gamma\|_{L^{\infty}}(\|v\|_{L^{2}}^{2}+\|u\|_{L^{2}}^{2}) \tag{3.12}\] \[\leq C\mathscr{E}^{2}(0).\] In the same way, we have \[|\partial_{u}\partial_{x}P_{1}| =|\frac{1}{2}(\int_{Y}^{+\infty}-\int_{-\infty}^{Y})e^{-|\int_{Y}^ {Y}q\cos^{4}\frac{v}{2}(T,\bar{Y})d\bar{Y}|}(\frac{3}{8}\sin^{2}v+3u^{2}\cos^{ 4}\frac{v}{2})\xi(T,\bar{Y})d\bar{Y}|\] \[\leq C\|\Gamma\|_{L^{\infty}}(\|v\|_{L^{2}}^{2}+\|u\|_{L^{2}}^{2}) \tag{3.13}\] \[\leq C\mathscr{E}^{2}(0)\] By choosing suitable perturbation \(V_{i}\)(i=1,2,3), we can make \[rank\left(\begin{array}{c}D_{\theta}\bar{v}^{\theta}\\ D_{\theta}\bar{v}^{\theta}_{YY}\end{array}\right)=3 \tag{3.14}\] when \(\theta=0\). ## 4 Generic property ### Generic solutions of the semilinear system In this section we first study smooth solutions to the semilinear system (2.17), determining the generic structure of the level sets \(\{v=\pi\}\). **Lemma 4.1**.: _Consider a compact domain of the form_ \[\Gamma\triangleq\{(t,Y);0\leq t\leq T,|Y|\leq M\}. \tag{4.1}\] _Call \(\mathbb{S}\) the family of all \(C^{2}\) solutions to the semilinear system, with \(q{>}0\) for all \((t,Y)\in\mathbb{R}_{+}\times\mathbb{R}\). Moreover, call \(\mathbb{S}^{\prime}\subset\mathbb{S}\) the subfamily of all solutions \((u,v,q)\) such that for \((t,Y)\in\Gamma\) the following value is never attained:_ \[(v,v_{Y},v_{YY})=(\pi,0,0). \tag{4.2}\] _Then \(\mathbb{S}^{\prime}\) is a relatively open and dense subset of \(\mathbb{S}\), in the topology induced by \(C^{2}(\Gamma)\)._ Proof.: 1. Denote \(\mathbb{S}_{1}\) to be the subset of solutions for which \((v,v_{Y},v_{YY})=(\pi,0,0)\) is never attained on \(\Gamma\). Since \(\Gamma\) is a compact domain, each \(\mathbb{S}_{1}\) is a relatively open subset of \(\mathbb{S}\), in the topology of \(C^{2}(\Gamma)\). 2. Let \((u,v,q)\) be any \(C^{2}\) solution of the semilinear system, with \(q{>}0\). For any \((t_{0},Y_{0})\in\Gamma\), two cases can occur. CASE 1 \((v,v_{Y},v_{YY})(t_{0},Y_{0})\neq(\pi,0,0)\). In this case, by continuity, there exists a neighborhood \(\mathscr{N}\) of \((t_{0},Y_{0})\) in the \(t-Y\) plane where \((v,v_{Y},v_{YY})\neq(\pi,0,0)\). CASE 2 \((v,v_{Y},v_{YY})=(\pi,0,0)\). By Lemma 3.4 we can find a three-parameter family of solutions \((u^{\theta},v^{\theta},q^{\theta})\) such that the 3\(\times\)3 Jacobian matrix of the map (4.3) \[(\theta_{1},\theta_{2},\theta_{3})\rightarrow(v^{\theta}(t,Y),v^{\theta}_{Y}(t,Y),v^{\theta}_{YY}(t,Y))\] has rank 3 at the point \((t_{0},Y_{0})\), when \(\theta=0\). By continuity, this matrix still has rank 3 on a neighborhood \(\mathscr{N}\) of \((t_{0},Y_{0})\), for \(\theta\) small enough. Now we choose finitely many points \((t_{i},Y_{i}),i=1,....,n\), such that the corresponding open neighborhoods \(\mathscr{N}_{(t_{i},Y_{i})}\) cover the compact set \(\Gamma\). Call \(n_{\mathcal{I}}\) the cardinality of the set of indices (4.4) \[\mathcal{I}\triangleq\{i;(v,v_{Y},v_{YY})(t_{i},Y_{i})=(\pi,0,0)\}\] for which CASE 2 applies. For each \(i\in\mathcal{I}\), our previous construction provided a 3-parameter family of perturbations. Together, all these perturbed solutions depend on \(N=3n_{\mathcal{I}}\) parameters. 3. 
Let \(\Omega\supset\Gamma\) be an open set contained in the union of the neighborhoods \(\mathscr{N}_{(t_{i},Y_{i})}\) and call \(B_{\epsilon}\triangleq\{\theta\in\mathbb{R}^{N};|\theta|\leq\epsilon\}\) the open ball of radius \(\epsilon\) in \(\mathbb{R}^{N}\). We shall construct a family \((u^{\theta},v^{\theta},q^{\theta})\) of smooth solutions to the semilinear system (2.17), such that the map (4.5) \[(t,Y,\theta)\rightarrow(v^{\theta}(t,Y),v^{\theta}_{Y}(t,Y),v^{\theta}_{YY}(t,Y))\] from \(\Omega\times B_{\epsilon}\) into \(\mathbb{R}^{3}\) has (\(\pi\),0,0) as a regular value. Toward this goal, we need to combine perturbations based at possibly different points \((t_{i},Y_{i})\) into a single N-parameter family of perturbed solutions. Let \((u,x,q)(t,Y)\) be a solution to the semilinear system. For each \(k=1,...,N\), let a point \((t_{k},Y_{k})\) be given, together with a number \(U_{k}\in\mathbb{R}\) and functions \(V_{k},Q_{k}\in C_{c}^{\infty}(\mathbb{R})\). By the previous analysis, if we construct a family of initial data \((\bar{v},\bar{v}_{Y},\bar{v}_{YY})\) with rank 3, we will obtain a one-parameter family of perturbed solutions to the semilinear system with rank 3 in the neighborhood of 0 for the parameter. So the family of the perturbed solution is determined as follows. For \(|\epsilon|\)\(<\)\(\epsilon_{k}\) sufficiently small, we can determine the unique solution of the perturbed semilinear system as (4.6) \[(u^{\epsilon},v^{\epsilon},q^{\epsilon})\triangleq\Phi_{k}^{\epsilon}(u,v,q).\] Given\((\theta_{1},...,\theta_{N})\), we can define a perturbation of the original solution \((u,v,q)\) as the composition of N-parameter perturbations: (4.7) \[(u^{\theta},v^{\theta},q^{\theta})=\Phi_{N}^{\theta_{N}}\circ...\circ\Phi_{1} ^{\theta_{1}}(u,v,q).\] 4. At every point \((t_{i},Y_{i})\) where i\(\in\mathcal{I}\), using Lemma 3.1, we can obtain a three-parameter families of perturbed solutions so that Jacobian matrix of (4.3) is of full rank on neighborhood \(\mathscr{N}\) of point \((t_{0},Y_{0})\), for \(\theta\) sufficiently small. Now we can choose finitely many points \((t_{i},Y_{i}),\;i=1,...,n\) such that the corresponding open neighborhood \(\mathbb{N}_{(t_{i},Y_{i})}\) covers the compact set \(\Gamma\). So we obtain a \(N-\)parameter family of solutions such that the value \((\pi,0,0)\) is a regular value for the map (4.3) from \(\Gamma\times B_{\epsilon}\) into \(\mathbb{R}^{3}\). By the transversality theorem, for a.e. \(\theta\), the map: (4.8) \[F:(t,Y)\rightarrow(v^{\theta}(t,Y),v_{Y}^{\theta}(t,Y),v_{YY}^{\theta}(t,Y))\] is transverse to \((\pi,0,0)\). By the definition of transversality, either \[F(t,Y)\neq(\pi,0,0)\] or \[F(t,Y)=(\pi,0,0)\qquad\text{and}\qquad T_{(\pi,0,0)}\mathbb{R}^{3}=(dF)_{(t,Y)} (T_{(t,Y)}\Omega)\] Since \(\Omega\) is only two-dimensional, \(T_{(\pi,0,0)}\mathbb{R}^{3}=(dF)_{(t,Y)}(T_{(t,Y)}\Omega)\) cannot happen. So the only choice is \(F(t,Y)\neq(\pi,0,0)\). This means that for a.e. \(\theta\) suffiently small, the corresponding solution \((u^{\theta},v^{\theta},\xi^{\theta})\) has property that \((v^{\theta},v_{Y}^{\theta},v_{YY}^{\theta})\neq(\pi,0,0)\) for all \((t,Y)\in\Gamma\). This proves that the set \(\mathbb{S}_{1}\) is dense in \(\mathbb{S}\). From now on we are going to prove our main result. ### Proof of Theorem 1.1 Proof.: Consider the space \(\mathbb{S}=C^{3}(\mathbb{R})\cap H^{1}(\mathbb{R})\cap W^{1,4}(\mathbb{R})\), with norm \[\|u\|_{\mathbb{S}}\triangleq\|u_{0}\|_{C^{3}}+\|u_{0}\|_{H^{1}}+\|u_{0}\|_{W^{ 1,4}}. 
\tag{4.9}\] Given initial data \(\tilde{u}_{0}\in\mathbb{S}\), consider the open ball \[B_{\delta}\triangleq\{u_{0}\in\mathbb{S};\|u_{0}-\tilde{u}\|{<}\delta\}. \tag{4.10}\] We will prove that, for any \(\tilde{u}_{0}\in\mathbb{S}\),there exists a radius \(\delta>\)0 and an open dense subset \(\mathscr{D}\subset B_{\delta}\) such that for every initial data \(u_{0}\in\mathscr{D}\) the conservative solution \(u=u(t,x)\) is of class \(C^{1}\) in the complement of finitely many characteristic curves \(\gamma_{i}\) within the domain \([0,T]\times\mathbb{R}\). **Step 1**. (Construction of \(\mathscr{D}\).) If \(u_{0}u_{0,x}=\mathcal{O}(\epsilon^{\frac{1}{2}})\), then the blow up time of \(uu_{x}\) along the chararcteristics is of order \(-\frac{1}{\epsilon^{2}}\). Since \(u_{0}\in\mathbb{S}\), we know \(u_{0}u_{0,x}\to 0\) as x\(\to\infty\). So if we can take r \(>\)0 such that when \(|x|{>}0\), we will have \(|uu_{x}|{<}\frac{1}{T_{1}}\), which means the singularity of u in set \([0,T_{1}]\times\mathbb{R}\) only appears in the compact set \(A\triangleq[0,T_{1}]\times[-r-\|u^{2}\|_{L^{\infty}}T_{1},r+\|u^{2}\|_{L^{ \infty}}T_{1}]\), where\(\|u\|_{L^{\infty}}\triangleq\max\{u(t,x),(t,x)\in[0,T_{1}]\times\mathbb{R}\}\). In the (T,Y) plane,it is reasonable for us to take a domain \(\Gamma\) such that \(A\subset\Lambda(\Gamma)\), where \(\Lambda\) is the map from (t,Y) to (t,x(t,Y)) The subset \(\mathscr{D}\subset B_{\delta}\) is defined as follows. \(u_{0}\in\mathscr{D}\) if \(u_{0}\in B_{\delta}\) and, for the corresponding solution \((u,v,\xi)\) of (2.17) with initial data, the value (4.2) are never attained for \((t,x)\in A\). **Step 2**. (\(\mathscr{D}\) is open). We begin by proving that \(\mathscr{D}\) is open, in the topology of \(C^{3}\). Take a sequence of initial data \(u^{\nu}_{0}\notin\mathscr{D}\) that converges to \(u_{0}\). By definition of \(\mathcal{D}\), there is point \((t^{\nu},Y^{\nu})\) satisfying \[(v^{\nu},v^{\nu}_{Y},v^{\nu}_{YY})(t^{\nu},Y^{\nu})=(\pi,0,0),\qquad(t^{\nu},x ^{\nu}(t^{\nu},Y^{\nu}))\in A, \tag{4.11}\] for all \(\nu\geq 1\). Since we have the domain \(A\) is compact, by taking a subsequence, we can assume \((t^{\nu},Y^{\nu})\) converges to some point \((t,Y)\) and \[(v,v_{Y},v_{YY})(t,Y)=(\pi,0,0),\qquad(t,x(t,Y))\in A, \tag{4.12}\] which implies \(u_{0}\notin\mathscr{D}\). The other case can be proved in the same way. So \(\mathscr{D}\) is open. **Step 3**. (\(\mathscr{D}\) is dense). Given \(u_{0}\in B_{\delta}\), by a small perturbation, we can assume \(u_{0}\in C^{\infty}\) By lemma 4.1, we can construct a sequence of solutions \((u^{\nu},v^{\nu},\xi^{\nu})\) of (2.17) such that * For every bounded set \(\Omega\subset\mathbb{R}^{2}\) and \(k\geq 1\), (4.13) \[\lim_{\nu\to+\infty}\|(u^{\nu}-u,v^{\nu}-v,\xi^{\nu}-\xi,x^{\mu}-x)\|_{C^{k}( \Omega)}=0.\] * For every \(\nu\geq 1\) the values in (4.2) are never attained for any \((t,Y)\in\Gamma\). Consider the sequence of solution \(u^{\nu}(t,x)\) with graph \[\left\{(u^{\nu}(t,Y),t^{\nu}(t,Y),x^{\nu}(t,Y));(t,Y)\in\mathbb{R}^{2}\right\} \subset\mathbb{R}^{3} \tag{4.14}\] The corresponding sequence of initial data satisfies \[\|u^{\nu}(0,\cdot)-u_{0}\|_{C^{t}(I)}\to 0,\quad\ as\ \nu\to\infty \tag{4.15}\] for every bounded set \(I\). In order to obtain the convergence for the far field, we modify the sequence. 
Define a cutoff function \(\phi\in C^{\infty}_{c}\): \[\phi(x)=1\quad\quad\text{if}\quad|x|\leq\rho,\] \[\phi(x)=0\ \ \ \ \ \text{if}\ \ \ |x|\geq\rho+1, \tag{4.16}\] with \(\rho\gg r+\|u^{2}\|_{L^{\infty}}T\) sufficiently large. For each \(\nu\geq 1\) consider the initial data \[\tilde{u}_{0}^{\nu}\triangleq\phi u_{0}^{\nu}+(1-\phi)u_{0}. \tag{4.17}\] Then, we have \[\lim_{\nu\rightarrow+\infty}\|\tilde{u}^{\nu}-u_{0}\|_{\mathbb{S}}=0. \tag{4.18}\] Moreover, if \(\rho\)\(>\)0 large enough, we have \[\tilde{u}^{\nu}(t,x)=u^{\nu}(t,x),\ \ \ \ \forall(t,x)\in A, \tag{4.19}\] while \(\tilde{u}^{\nu}(t,x)\) is \(C^{2}\) on the outer domain. Then we can find that \(\tilde{u}^{\nu}(t,x)\in\mathscr{D}\) for all \(\nu\geq 1\) sufficiently large, which leads to \(\mathscr{D}\) being dense on \(B_{\delta}\). **Step 4**. By previous argument, we know that \(u\) is \(C^{2}\) on the outer domain \(\{(t,x)|0\leq\ t\leq T,|x|\)\(>\)\(\|u^{2}\|_{L^{\infty}}T|\}\). So it suffices to study the singularity of \(u\) on the inner domain \(A\). Consider a point \((t_{0},Y_{0})\in\Gamma\), there could be two situations: * \(v(t,Y)\neq\pi\). Then we can have \(x_{Y}=\frac{\xi}{(1+u_{2}^{2})^{2}}=cos^{4}\frac{v}{2}q\), \[0\textless x_{Y}\textless\infty,\] so the map \((t,Y)\rightarrow(t,x)\) is locally invertible in a neighborhood of \((t_{0},Y_{0})\). We then conclude that the function u is \(C^{2}\) in a neighborhood of the point\((t_{0},x(t_{0},Y_{0}))\). * \(v(t_{0},Y_{0})=\pi\). As is pointed out in [7] the singularity happens on the level set \(\{u=0\}\). By continuity, there exists \(\epsilon\)\(>\)0 such that the values (4.2) are never attained in the open neighborhood (4.20) \[\Gamma^{\prime}\triangleq\{(t,Y);t\in[0,T],|Y|\leq M+\epsilon\}.\] For (4.2), there are two states that can occur. * \((v_{Y}=0,v_{YY}\neq 0)\) By the implicit function theorem and (4.2), we can conclude that the set (4.21) \[\mathcal{S}^{v_{Y}}\triangleq\{(t,Y)\in\Gamma^{\prime};v_{Y}(t,Y)=0\}\] is a one dimensional embedded manifold of class \(C^{1}\). To prove the number of connected components of \(\mathcal{S}^{v_{Y}}\) that intersect the compact set \(\Gamma\) is finite, we shall assume the contrary that \(P_{1},P_{2},....\)is a sequence of points in \(\mathcal{S}^{vv}\cap\Gamma\) belonging to distinct components. Taking a subsequence we can assume the convergence \(P_{i}\rightarrow\bar{P}\) for some \(\bar{P}\in\mathcal{S}^{v_{Y}}\cap\Gamma\). By assumption, \(v_{YY}\neq 0\). Hence, by the implicit function theorem, there is a neighborhood \(\mathcal{N}\) of \(\bar{P}\) such that \(\gamma\triangleq\mathcal{S}^{v_{Y}}\cap\mathcal{N}\) is a connected \(C^{1}\) curve. Hence \(P_{i}\in\Gamma\) for all i large enough, providing a contradiction. * \((v_{Y}\neq 0)\) By the implicit function theorem and (4.2), we can conclude that the set (4.22) \[\mathcal{S}^{v}\triangleq\{(t,Y)\in\Gamma^{\prime};v(t,Y)=\pi\}\] is a one dimensional embedded manifold of class \(C^{2}\). The proof of that the number of connected components of \(S^{v}\) is finite is very similar, thus it will be omitted. **Remark 4.2**.: _We notice that the argument in Step 4 is differ from the Camassa-Holm equation. For the Camassa-Holm equation, if \(v(t_{0},Y_{0})=0\) for some point \((t_{0},Y_{0})\in\Gamma\) then one can see that \(v_{t}\neq 0,v_{Y}\neq 0\). However, for the Novikov equation, the case \(v_{Y}=0\) will happen. We believe that this difference is caused by the energy concentration. 
For the Camassa-Holm equation when the characteristic meet tangentially, they will separate immediately. However, for the Novikov equation, when the characteristics tangentially touch each other, they will stay for a period of time._ ## 5 Generic singularity behavior For smooth data \(u_{0}\in C^{\infty}(\mathbb{R})\), the solution \((t,Y)\rightarrow(x,t,u,v,q)(t,Y)\) of the semilinear system (2.17), with initial data as in (2.18), remains smooth on the entire t-Y plane. Yet the smoothness of the solution u of (2.4) is still needed to study because the coordinate change:(Y,t)\(\rightarrow\)(x,t) is not smoothly invertible. By definitions, its Jacobian matrix is computed by \[\begin{pmatrix}x_{Y}&x_{t}\\ t_{Y}&t_{t}\end{pmatrix}=\begin{pmatrix}qcos^{4}\frac{v}{2}&u\\ 0&1\end{pmatrix}. \tag{5.1}\] And we will observe that the matrix is invertible when \(v\neq\pi\). To study the set of points in the \(t-x\) plane where \(u\) is singular, we thus need to look at points where \(v=\pi\). Our main theorem provides us with a detailed description of the solution u(t,x) in the neighborhood of each one of these singular points. For simplicity, we shall assume that the initial data \(u_{0}\) are smooth, so we shall not need to count how many derivatives are actually used to derive the Talyor approximations. ### Proof of Theorem 1.2 Proof.: 1. Let P be the point of Type \(\mathcal{I}\). Recalling (2.16) and (2.17) then we will have \[u_{Y}=u_{x}x_{Y}=\frac{1}{2}q\cdot\sin\;v\cdot\cos^{2}\frac{v}{2}. \tag{5.2}\] In a similar way, we obtain \[u_{YY} =\frac{1}{2}q_{Y}\cdot\sin v\cdot\;cos^{2}\frac{v}{2}+\frac{1}{2} q\cdot v_{Y}\cdot\cos v\cdot\cos^{2}\frac{v}{2}-\frac{v_{Y}}{4}\cdot q\sin^{2}v,\] \[u_{Yt} =\frac{1}{2}qsin^{2}vcos^{2}\frac{v}{2}\big{(}(2u^{3}+u)sin^{2} \frac{v}{2}-2(P_{1}+\partial_{x}P_{2})+(2u^{3}+u)cos^{2}\frac{v}{2}\big{)} \tag{5.3}\] \[+\frac{1}{2}(1-4sin^{2}\frac{v}{2})cos^{2}\frac{v}{2}\big{(}-usin ^{2}\frac{v}{2}+2u^{3}cos^{2}\frac{v}{2}-2cos^{2}\frac{v}{2}(P_{1}+\partial_{ x}P_{2})\big{)}.\] At the point P, it provides us with that \(v_{Y}=0,\;v_{t}=0\) and \(v_{YY}\neq 0\). If we denote \(u_{Y^{n}}\) that u is differentiated with Y by n times. It is not hard to check \[u_{Y^{3}}=0,\quad u_{Y^{4}}=0,\quad u_{Y^{5}}=0,\quad u_{Y^{6}}=\frac{6}{8}qv_ {YY}^{3}cosvsin^{2}\frac{v}{2}-3v_{YY}^{3}qcos^{2}v, \tag{5.4}\] which means \(u_{Y^{6}}\neq 0\) Following the same method, we obtain that \[u_{Y^{2}t}=0,\quad u_{Y^{3}t}=0,\quad u_{Y^{4}t}=0,\quad u_{Y^{5}t}=\frac{6}{ 8}qv_{YY}^{2}v_{Yt}cosvsin^{2}\frac{v}{2}-3v_{YY}^{2}v_{Yt}qcos^{2}v. \tag{5.5}\] we will have \(u_{Y^{8}t}=0\). So we have Taylor approximations of u at the singular point \(P=(t_{0},Y_{0})\) \[u(t,Y)=B_{1}(t-t_{0})+B_{3}(Y-Y_{0})^{6}+\mathcal{O}(1)(|t-t_{0}|^{2},|Y-Y_{0}|^{ 7}) \tag{5.6}\] By (2.16) and (2.17), we can obtain \[x_{Y}=\frac{\xi}{(1+u_{x}^{2})^{2}}=qcos^{4}\frac{v}{2}, \tag{5.7}\] similarly we have \[x_{YY}=q_{Y}cos^{4}\frac{v}{2}-2qv_{Y}cos^{3}\frac{v}{2}sin\frac {v}{2}\] \[x_{Yt}=q_{t}cos^{4}\frac{v}{2}-2qv_{t}sin\frac{v}{2}cos^{3}\frac {v}{2} \tag{5.8}\] so it is easy to check that in the point P \[x_{Y^{i}}=0,\ \ \ \ (i=1,2,3,4,5,6,7)\] and \[x_{Y^{8}}=36qv_{YY}^{4}sin^{4}\frac{v}{2},\] \[x_{Y^{7}t}=36qv_{YY}^{3}v_{Yt}sin^{4}\frac{v}{2}. \tag{5.9}\] So we have the Taylor approximations of x at the singular point \(P=(t_{0},Y_{0})\) \[x(t,Y)=x(t_{0},Y)+A_{2}(Y-Y_{0})^{8}+\mathcal{O}(1)(|t-t_{0}|^{2},|Y-Y_{0}|^{ 9}). \tag{5.10}\] We combine (5.6) and (5.10) to deduce (1.4). 
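The elimination of the parameter \(Y\) in this last step can be made explicit, heuristically keeping only the leading terms at fixed \(t=t_{0}\) (and ignoring signs and absolute values): from (5.10), \(x-x_{0}\approx A_{2}(Y-Y_{0})^{8}\), so \((Y-Y_{0})\approx\big((x-x_{0})/A_{2}\big)^{1/8}\), and substituting into (5.6) gives

\[B_{3}(Y-Y_{0})^{6}\approx B_{3}A_{2}^{-3/4}(x-x_{0})^{3/4},\]

which is the \(A(x-x_{0})^{3/4}\) term of (1.4) with \(A=B_{3}A_{2}^{-3/4}\).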
2. If \(P\) is of Type \(\mathcal{II}\), we have

\[v=\pi,\ \ \ v_{Y}\neq 0,\ \ \ v_{YY}=0, \tag{5.11}\]

which implies

\[u_{Y}=0,\ \ \ \ \ \ u_{YY}=0,\ \ \ \ \ \ \ u_{Y^{3}}=0, \tag{5.12}\]

\[u_{Y^{4}}=\frac{1}{4}qv_{Y}^{3}\cos v\sin^{2}\frac{v}{2}-\frac{v_{Y}^{3}}{2}q\cos^{2}v=\frac{1}{2}v_{Y}^{3}q\neq 0. \tag{5.13}\]

And

\[u_{Yt}=0,\ \ \ \ \ u_{Y^{2}t}=0,\ \ \ \ \ u_{Y^{3}t}=0. \tag{5.14}\]

Then the Taylor approximation of \(u\) is

\[u(t,Y)=B_{1}(t-t_{0})+B_{2}(Y-Y_{0})^{4}+\mathcal{O}(1)(|t-t_{0}|^{2}+|Y-Y_{0}|^{5}). \tag{5.15}\]

At the point \(P\),

\[x_{Y}=0,\ \ \ x_{Y^{2}}=0,\ \ \ x_{Y^{3}}=0,\ \ \ x_{Y^{4}}=0, \tag{5.16}\]

\[x_{Yt}=0,\ \ \ x_{Y^{2}t}=0,\ \ \ x_{Y^{3}t}=0,\ \ \ x_{Y^{4}t}=0, \tag{5.17}\]

\[x_{Y^{5}}=\frac{3}{2}qv_{Y}^{3}\cos^{3}v\neq 0. \tag{5.18}\]

This leads to

\[x(t,Y)=x(t_{0},Y_{0})+A_{2}(Y-Y_{0})^{5}+\mathcal{O}(1)(|t-t_{0}|^{2},|Y-Y_{0}|^{6}). \tag{5.19}\]

So (1.5) can be concluded from (5.15) and (5.19).

**Acknowledgments** This work was partially supported by the National Natural Science Foundation of China (No. 12171493 and No. 11701586), the National Key R&D Program of China (No. 2021YFA1002100), and the Natural Science Foundation of Guangdong Province (No. 2021A1515010296 and 2022A1515011798).
2302.09478
Bayesian quantification of strongly-interacting matter with color glass condensate initial conditions
A global Bayesian analysis of relativistic Pb + Pb collisions at $\sqrt{s}_{\rm NN}$ = 2.76 TeV is performed, using a multistage model consisting of an IP-Glasma initial state, a viscous fluid dynamical evolution, and a hadronic transport final state. The observables considered are from the soft sector hadronic final state. Posterior and Maximum a Posteriori parameter distributions that pertain to the IP-Glasma and hydrodynamic phases are obtained, including the shear and bulk specific viscosity of strong interacting matter. The first use of inference with transfer learning in heavy-ion analyses is presented, together with Bayes Model Averaging.
Matthew R. Heffernan, Charles Gale, Sangyong Jeon, Jean-François Paquet
2023-02-19T04:47:57Z
http://arxiv.org/abs/2302.09478v4
Bayesian quantification of strongly-interacting matter with color glass condensate initial conditions ###### Abstract A global Bayesian analysis of relativistic Pb + Pb collisions at \(\sqrt{s}_{\rm NN}=2.76\) TeV is performed, using a multistage model consisting of an IP-Glasma initial state, a viscous fluid dynamical evolution, and a hadronic transport final state. The observables considered are from the soft sector hadronic final state. Posterior and Maximum a Posteriori parameter distributions that pertain to the IP-Glasma and hydrodynamic phases are obtained, including the shear and bulk specific viscosity of strong interacting matter. The first use of inference with transfer learning in heavy-ion analyses is presented, together with Bayes Model Averaging. ## I Introduction Much is known about the behavior of Quantum Chromodynamics (QCD) - the theory of the nuclear strong interaction - in situations where "cold" strongly interacting systems are investigated by high-energy probes. This success is owed in great part to asymptotic freedom, the running of the strong constant whose value decreases as the energy scale increases, rendering controlled and systematically improvable perturbative calculations possible. In regimes where perturbative approaches converge poorly, lattice QCD has proven to be a powerful tool to investigate QCD both at zero and finite temperatures [1]. Comparing with the status of "cold" QCD, much less is known about the behavior of the theory in conditions of extreme temperatures and energy density, although some features have nevertheless been predicted with certainty. In such environments, lattice calculations have predicted a crossover transition to occur from composite hadronic degrees of freedom to a partonic state (the quark-gluon plasma (QGP)) at a temperature \(\approx 150\) MeV, for vanishing net baryonic density [2]. Furthermore, exploring the QCD phase diagram away from the axis where baryon density vanishes, several theoretical studies support the existence of a first-order phase transition line, terminating at a critical end point (CEP) [3]. In addition to an active global research effort in the theory of strongly-interacting systems under extreme conditions, a vigorous experimental program exists to further study and characterize the QGP through experiments performed at facilities around the world and also through observations involving dense stellar objects such as neutron stars [4]. In terrestrial laboratories, this exotic state of QCD has been experimentally observed at the Relativistic Heavy-Ion Collider (RHIC, at Brookhaven National Laboratory) and at the Large Hadron Collider (LHC, at CERN) involving the relativistic collisions of large nuclei ("heavy ions") [5], and much activity is currently also being devoted to studies involving comparatively smaller hadronic objects [6]. One of the theoretical breakthroughs in modeling relativistic heavy ion collisions has been the realization of the effectiveness of relativistic viscous fluid dynamics, which features collective hadronic flow that accurately describe experimental observations in heavy-ion collisions [7]. Along with this milestone in theory, the importance of deviation from ideal fluid dynamics is quantified with the evaluation of transport coefficients that represent fundamental features of QCD. The main ones represent the shear and bulk viscosity of the strongly interacting matter [7; 8]. 
Much research has been devoted to the evaluation of those shear and bulk viscosity coefficients using a variety of models and approaches, both perturbative and non-perturbative [9]. Up to now, direct calculations have been met with limited success. One of the contributing factors is the fact that the conditions created in nuclear collisions and reconstructed by hadronic probes span a parameter space where QCD is inherently non-perturbative and strong non-equilibrium features render the use of fluctuation-dissipation techniques [10] problematic. The difficulty in directly calculating the transport coefficients has been highlighted in several presentations and reviews and is illustrated by a wide spread in theoretical results [11; 12; 13; 14; 15; 16; 17]. Consequently, data-driven techniques - chiefly Bayesian inference - have been developed and currently are successful in extracting the transport coefficients from heavy-ion collision data through systematic model-to-data comparison. The efforts to obtain the temperature dependence of the coefficient of shear viscosity over the entropy density, \(\eta/s\), and of the bulk viscosity over the entropy density, \(\zeta/s\), have relied on multistage models constructed to describe the entire space-time history of the collision process. Prior to this work, the modeling of the different reaction stages has included * T\({}_{\mbox{\small{\sc n}}}\)ENTo[18] supplemented by freestreaming [19; 20] for the early stage, pre-hydrodynamic era * more recently - TRAJECTUM[25], for the relativistic viscous fluid dynamics epoch * UrQMD[26] and SMASH[27] for the late time evolution and dynamical freezeout Various combinations of those elements have been assembled as _ab initio_ simulations to interpret measurements. Traditional modeling and simulation techniques would simply entail simulating the nuclear reaction as faithfully as possible, obtaining a good fit to the final state(s), and extracting physically relevant information from the exercise. In the field of relativistic heavy-ion collisions, this avenue of investigation is not practical at scale for a variety of reasons. Firstly, the sophistication of the multistage models used to simulate and interpret experiments probing nuclear matter under extreme conditions comes at considerable computational expense. In addition, the many-body environment typical of relativistic nuclear collisions final states generate a wide variety of observables, many of which are correlated with each other and increase the dimensionality of the Bayesian inference. Those two aspects have led to the development of modern approaches combining principal component analysis (PCA) with surrogate modeling in an effort to minimize possible correlations and accelerate calculations. Many of these aspects have been used with Bayesian inference (described in the following sections) to determine the temperature dependence of shear and bulk viscosity within some statistically-relevant intervals [25; 28; 29; 30; 31]. This work [32] shares similar goals but with important differences. It is known that the extraction of QCD transport coefficients from the analysis of heavy-ion collisions is influenced by the physics of very early times [33; 34; 35]. In this context, the calculations made up to now have been made using TaENTo and freestreaming, for the initial very early stages. While this model and scenario are practical and versatile, it is worthwhile to explore analyses based on an approach with some degree of microscopic support. 
In this vein, this work will use the IP-Glasma model [36] which we summarize in a later section, and which has a history of successful phenomenology for heavy-ion collisions [37]. Another novel aspect highlighted in this work is the first use of transfer learning in a realistic analysis involving the physics of relativistic heavy-ion collisions. We describe this aspect later in our paper, and also in a companion letter [38]. Finally, this study also comprises a series of technical innovations [32] (_e.g._ the prior distributions, the design space, etc.) which will be presented and discussed in turn, as appropriate. To keep this survey as simple and transparent as possible, we focus on Pb + Pb collisions at an energy of \(\sqrt{s_{\rm NN}}=2.76\) TeV. Even at this degree of resolution, the required numerical work needed to establish the surrogate modeling on a firm statistical basis was considerable. Extending our approach to other systems and energies is left for future work. This paper is structured as follows: section II reviews the basic tenets and usage of Bayesian analysis in the context of this work, and goes over the basics of surrogate modeling and of transfer learning. The following section - Section III - then discusses the different components of our physical multistage model. Section IV covers the approach to defining priors, together with a description of the physical parameters studied in this work. Our approach to the design phase is also outlined. Section V discusses the important milestone of closure tests and self-consistency of the surrogate modeling. This step is crucial in order to ensure that the model is self-consistent. We then address comparing the model with data: post-dictions and predictions. The statistics approach to selecting a particular physical model over competitors is discussed in Section VII. The paper ends with a summary and conclusion. ## II Bayesian inference and surrogate modeling ### Bayes Theorem We begin by defining the statistical notation used throughout this work. \(p(A)\) denotes the probability density \(p(\cdot)\) of a proposition \(A\). \(p(A|B)\) denotes the probability density of proposition \(A\) conditional on proposition \(B\), _i.e._ the probability density of \(A\)_given_\(B\). There may be multiple statements to which the proposition of interest is conditional; these are all contained to the right of the vertical bar, _e.g._\(p(A|B,C,D,\dots)\). With this established, we can begin to interpret Bayes' Theorem, a fundamental statement of probability theory: \[p(H|d,I)=\frac{p(d|H,I)p(H,I)}{p(d,I)}. \tag{1}\] \(H\) represents a particular _hypothesis_, such as the proposed values of a set of parameters and \(d\) represents data to which the hypothesis is compared. The Bayes evidence quantifies a balance between quality of fit via the likelihood and predictive power by penalizing increasing dimensionality. It can be used in model selection as the best model is the one that fits the data best with the fewest number of free parameters. Finally, the quantity of interest in Bayesian inference is \(p(H|d,I)\), the _posterior_. It quantifies the belief in a given hypothesis \(H\)_posterior_ to comparison with measured data \(d\). Bayes' theorem formalizes statistical learning by making a prior belief explicit and then comparing it to data, after which the prior belief is determined to be relatively more or less likely. The result _posterior_ to comparison with data is the new state of understanding. 
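As a concrete and deliberately minimal illustration of Eq. (1), the sketch below computes a posterior on a one-dimensional grid for a single model parameter, using a Gaussian likelihood. The "model", the pseudo-measurement, and the flat-topped prior shape are hypothetical stand-ins, not the actual multistage model or the priors specified later in this work.

```python
import numpy as np

# Toy setting: one parameter "eta" (e.g. a constant specific viscosity), a fake linear
# "model" for a single observable, and one pseudo-measurement with Gaussian uncertainty.
def model(eta):                 # hypothetical observable as a function of the parameter
    return 0.5 + 2.0 * eta

y_exp, sigma_exp = 0.90, 0.05   # hypothetical measurement and its uncertainty

eta_grid = np.linspace(0.0, 0.5, 2001)                     # prior support explored on a grid
prior = np.exp(-np.abs((eta_grid - 0.25) / 0.15) ** 4)     # flat-topped, Generalized-Normal-like
prior /= np.trapz(prior, eta_grid)

# Gaussian likelihood p(d | H): how well each parameter value reproduces the pseudo-data.
likelihood = np.exp(-0.5 * ((model(eta_grid) - y_exp) / sigma_exp) ** 2)

evidence = np.trapz(likelihood * prior, eta_grid)   # p(d): normalization / Bayes evidence
posterior = likelihood * prior / evidence           # Bayes' theorem, Eq. (1)

eta_map = eta_grid[np.argmax(posterior)]
print(f"MAP estimate: eta = {eta_map:.3f},  evidence = {evidence:.3e}")
```

In the full analysis the grid is replaced by Markov chain Monte Carlo sampling and the expensive likelihood by surrogate models, but the logical structure is the same.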
### Surrogate modeling with a statistical emulator Surrogate modeling is a strategy for computation with expensive likelihood functions. Likelihood functions are expensive because they require detailed model evaluation. A cheaper model (the "surrogate") is trained to emulate the expensive model using calculations from the more expensive model. This less computationally expensive surrogate can be considered a low-fidelity model, or a model-of-a-model, and compromises a limited degree of accuracy for great reduction of computational expenditure. It does this by mapping inputs to outputs and learning the functional relationship between them rather than attempting to produce a coarse version of the intermediary physics. These methods have had success in heavy ion physics [28; 29; 30; 31; 39; 40; 41; 42; 43; 44]. Given a set of training points, there are infinitely many functions that can describe the points. Gaussian processes (GPs), the surrogate models used in this study, assign a probability to each of these functions, meaning that the output is a probability distribution of the characterization of the data. Conveniently, this also allows one to determine the relative confidence in the prediction. The only assumptions by the GP are that it assumes the function is continuous and smoothly varying with respect to the length scales of the observations. ### Transfer learning A surrogate modeling technique only recently considered in the context of heavy ion collisions is transfer learning [45; 46]. This learns about a "task" of interest (the target task) by using information from related tasks (source tasks). In heavy ion collisions, inductive transfer learning - where the source and target have the same input domain - can be readily deployed. This allows for transfer learning between models of viscous corrections at particlization that do not introduce additional parametric flexibility [46]. Efficient transfer was also found to be possible for collisions of slightly different nuclei at different collision energies [46]. Transfer learning is performed by first having a trained surrogate model for a source task. Then, the discrepancy between the source and the target is found and is encapsulated in a discrepancy function. The advantage of transfer learning is to use comparably little new training information about the target to learn the discrepancy function. More formally, if \(f_{S}(x)\) is the source and \(f_{T}(x)\) the target, one can propose a simple relationship \[f_{T}(x)=\rho f_{S}(x)+\delta(x) \tag{2}\] where \(\rho\) is a linear correlation between the source and target estimated via the maximum likelihood and \(\delta(x)\) is the discrepancy between the source and target models. \(f_{S}(x)\) and \(\delta(x)\) are considered to be independent Gaussian processes. This is derived from multifidelity emulation, where the source is a computationally-inexpensive low-fidelity model and the target is a computationally-expensive high-fidelity model [47]. This vastly reduces the computational cost of training new models that are similar to an already-trained source. In the case of (linear) viscous corrections, particularly those between Grad's 14-moment viscous corrections [48; 49] and the Chapman-Enskog viscous corrections [30; 50], an order of magnitude fewer training points are required to reach a specified accuracy when using transfer learning as opposed to training a new surrogate model [46]. 
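A minimal sketch of the transfer-learning construction of Eq. (2) is given below, using scikit-learn Gaussian processes. The source and target functions, the numbers of design points, and the simple least-squares estimate of \(\rho\) are illustrative assumptions rather than the emulators or the maximum-likelihood estimator actually used in this analysis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical stand-ins: a cheap, well-sampled "source" task and a related "target" task.
def f_source(x):
    return np.sin(3 * x)

def f_target(x):
    return 0.8 * np.sin(3 * x) + 0.3 * x   # correlated source plus a smooth discrepancy

kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)

# 1) Train the source emulator f_S on many (cheap) design points.
x_s = np.linspace(0, 2, 25).reshape(-1, 1)
gp_source = GaussianProcessRegressor(kernel=kernel, alpha=1e-8).fit(x_s, f_source(x_s.ravel()))

# 2) Use only a few (expensive) target points to estimate rho and the discrepancy delta(x).
x_t = np.linspace(0, 2, 6).reshape(-1, 1)
y_t = f_target(x_t.ravel())
y_s_at_t = gp_source.predict(x_t)
rho = float(np.sum(y_s_at_t * y_t) / np.sum(y_s_at_t ** 2))   # simple least-squares correlation
gp_delta = GaussianProcessRegressor(kernel=kernel, alpha=1e-8).fit(x_t, y_t - rho * y_s_at_t)

# 3) Transfer-learned prediction: f_T(x) ~ rho * f_S(x) + delta(x), as in Eq. (2).
x_new = np.linspace(0, 2, 5).reshape(-1, 1)
pred = rho * gp_source.predict(x_new) + gp_delta.predict(x_new)
print("rho =", round(rho, 3))
print("prediction vs truth:", np.round(pred, 3), np.round(f_target(x_new.ravel()), 3))
```

The key point is visible even in this toy: the discrepancy Gaussian process is trained on far fewer points than the source emulator, which is where the computational savings originate.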
This work is the first time transfer learning methods will be used for Bayesian inference in heavy ion collisions. We use transfer learning to implement a second viscous correction model with Grad's 14-moment viscous corrections as the source \(f_{S}\) and Chapman-Enskog relaxation time approximation (RTA) viscous corrections as the target \(f_{T}\). ## III Physical models As all of the individual elements of our hybrid modeling exist in the literature and have been used extensively in a variety of applications, only a brief summary is provided here and the reader will be referred to the appropriate references. ### Pre-equilibrium - IP-Glasma The very first instants of the heavy-ion collisions considered in this work are modeled by IP-Glasma, an approach which supplements the Color-Glass Condensate (CGC) [51]. More specifically, the CGC action can be written as [52] \[S_{CGC}=\int d^{4}x\left(-\frac{1}{4}F^{a}_{\mu\nu}F^{a\,\mu\nu}+J^{a\,\mu}A^{ a}_{\mu}\right) \tag{3}\] where \(F^{a\,\mu\nu}\) is the non-Abelian field strength tensor with color index \(a\), and \(J^{a\,\mu}\) is the current representing the hard partons that source soft gluons. The CGC can be viewed as an effective field theory representation of QCD. Its implementation here follows the IP-Glasma model of initial conditions [7; 36; 53], where the IP-Sat approach [54; 55] is used to determine the fluctuating initial color configuration in the two highly energetic approaching nuclei. These color charges then act as sources for the small \(x\) soft gluon fields, which have a large occupation number and therefore can be treated classically. Their evolution obeys the Yang-Mills equation: \[[D_{\nu},F^{\mu\nu}]^{a}=J^{a\,\mu} \tag{4}\] with \(D^{a}_{\mu}=\partial_{\mu}-igA_{\mu}t^{a}\), and \(t^{a}\) the color SU(3) matrices. The color current \(J^{a\,\mu}=\delta^{\mu\pm}\rho^{a}_{A(B))}(x^{\mp},\mathbf{x}_{\perp})\) is generated by nucleus A (B) moving along the light-cone direction \(x^{+}(x^{-})\), and \(\rho^{a}\) represents the color charge distribution extracted from IP-Sat. The Glasma distributions resulting from solving the Classical Yang-Mills equations event-by-event then serve as an input to fluid dynamics, at proper time \(\tau_{0}\). For the purpose of this work, an important parameter of IP-Glasma is \(\mu_{Q_{s}}\), the constant of proportionality relating the color charge per unit transverse area \(g^{2}\mu(x,\mathbf{b}_{\perp})\) to the [56] squared saturation scale \(Q_{s}^{2}\). This is one of the parameters considered in this study. ### Viscous hydrodynamics - MUSIC Hydrodynamics is an effective theory of long-wavelength modes. Practically, this means that the evolution of a collection of differential elements can be described not by tracking microscopic particles, but considering their long-wavelength (or spatially-coarse) collective motion. Analogously, to model a hurricane or storm front, it is not necessary (or even relevant) to model the behavior of every constituent water droplet or molecule of air. Instead, the collective dynamics at a much larger scale reveal the physics of interest. 
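Purely as a schematic illustration of the role of \(\mu_{Q_{s}}\), and not a substitute for the actual IP-Sat/IP-Glasma machinery, the sketch below samples Gaussian color charge densities on a toy transverse lattice whose local variance is set by \(g^{2}\mu\propto Q_{s}\). The lattice, the \(Q_{s}\) profile, the direction of the proportionality, and the normalization are assumptions; conventions differ between implementations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy transverse lattice and a hypothetical saturation-scale profile (illustrative numbers).
N, a = 64, 0.1                                       # lattice size and spacing
x = (np.arange(N) - N / 2) * a
X, Y = np.meshgrid(x, x, indexing="ij")
Qs = 1.5 * np.exp(-(X**2 + Y**2) / (2 * 2.0**2))     # toy Q_s(x_perp) profile

mu_Qs = 0.8                                          # assumed convention: g^2 mu = Q_s / mu_Qs
g2mu = Qs / mu_Qs

# Gaussian color charges rho^a(x_perp), a = 1..8, with local variance (g^2 mu)^2 / a^2
# (a discretized delta-function correlator; the overall normalization varies between codes).
rho = rng.normal(size=(8, N, N)) * g2mu / a

print("rms color charge at the center:", float(rho[:, N // 2, N // 2].std()))
```

In IP-Glasma these charges source the classical gluon fields evolved with the Yang-Mills equation (4); here they only serve to show how \(\mu_{Q_{s}}\) rescales the fluctuating initial color charge density.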
Most of the modern approaches to relativistic fluid dynamics perform a gradient expansion of hydrodynamic equations up to second order, following original work by Muller, Israel, and Stewart [57; 58] (MIS), who added transport coefficients in the form of shear and bulk relaxation times that characterize the timescale on which shear stress tensor and bulk pressure approach their first-order solutions, ensuring causal solutions under a range of conditions 1. Footnote 1: Recent works have further investigated the constraints imposed by causality on relativistic fluid dynamics [59; 60]. In this study, second-order transient relativistic hydrodynamics is used to describe the plasma, specifically the DNMR formulation [61] with both shear and bulk viscosity as implemented numerically in MUSIC (_MUSCI for Ion Collisions_) [21; 22; 23]. The equation for the conservation of energy and momentum is coupled with relaxation equations for the shear tensor and the bulk pressure, with parametrized shear and bulk viscosities (discussed below) and second-order transport coefficients related to the first-order ones [62]. The transport coefficients of interest here, \(\eta/s\) (the shear viscosity over the entropy density) and \(\zeta/s\) (the bulk viscosity over the entropy density), characterize the first-order deviation from ideal fluid dynamics. The Equation of State (EoS) is where the specific material properties of QCD matter inform the hydrodynamic stage and as a result, must be constructed to be consistent with the model choices. The EoS at high temperature is matched to lattice calculations [63]. At low temperature, the EoS matches that of the particle list used in the hadronic transport (to be discussed in a later section), which ensures that the EoS is continuous across the transition between the two stages. The matching between the high and low temperature results must be done in a manner consistent with what is currently known about QCD: it must have a smooth crossover between degrees of freedom rather than a sharp phase transition at vanishing baryochemical potential.2 Attempts to constrain the equation of state directly from hadronic observables have shown promise, but as of yet still have significant remaining uncertainty [70; 71]. Active learning techniques are also being applied to the efforts to characterize the equation of state [72]. Footnote 2: Discussions of non-zero baryochemical potential are beyond the scope of this work, but are a vibrant field which features a search for a possible QCD critical point [64; 65; 66; 67; 68; 69]. The equation of state used in this work smoothly connects the HotQCD calculation [73] at high temperatures to a list of stable resonances at low temperatures, and matches that of Ref. [30] and the code that produced it is publicly available with the default parameters [74]. ### Particlization - iS3D To particlize the hydrodynamic medium, one defines a surface at constant temperature, energy density, or entropy - these choices are equivalent in the case of zero baryochemical potential, which this study strictly respects. This temperature is the switching, or particlization, temperature. Once this surface has been drawn, particles can be sampled stochastically, respecting energy and momentum conservation _on ensemble average_. This means that the sampled distribution converges to the true distribution of particles, momenta, etc. it is useful to _over_sample this surface. 
This is either done a fixed number of times (typically 100 to 300 times), or until a sufficient number of particles has been sampled. The way the sampling is performed is via the Cooper-Frye prescription [75], implemented in iS3D [76]. Given an isothermal (or isentropic, etc.) hypersurface \(\Sigma\) with normal vector \(\sigma_{\mu}(x)\), the invariant momentum spectra of a particle species \(i\) with degeneracy \(g_{i}\) is \[E\frac{dN_{i}}{d^{3}p}=\frac{g_{i}}{(2\pi)^{3}}\int_{\Sigma}f_{i}(x,p)p_{\mu} d\sigma^{\mu}(x) \tag{5}\] where \(f_{i}(x,p)\) is the phase-space distribution, and \(g_{i}\) is a degeneracy factor. This distribution function reproduces the energy-momentum tensor of hydrodynamics at the particlization surface, \[T^{\mu\nu}(x)=\sum_{i}\frac{g_{i}}{(2\pi)^{3}}\int\frac{p^{\mu}p^{\nu}f_{i}(x, p)}{E}d^{3}p. \tag{6}\] Here, \(f_{i}(x,p)\) is species-specific, representing either Bose-Einstein or Fermi-Dirac statistics. The out-of-equilibrium nature of the system generates interesting physics, but presents significant challenges. If at the time of particlization the hydrodynamic medium were in equilibrium, the choice of the distribution function would simply be the equilibrium form, and the rest frame velocity and temperature would be fixed by the hydrodynamic velocity and the energy density in the local rest frame. However, the medium is generally not in equilibrium and consistency between the kinetic description of particles and viscous hydrodynamics must be attempted. The existence of shear and bulk stress contributions - \(\pi^{\mu\nu}\) and \(\Pi\) - produce deviations of the microscopic distributions and yields from the equilibrium ones. As mentioned earlier, this study will exclusively consider Grad's 14-moment approximation and the linear Chapman-Enskog expansion in the relaxation time approximation. The distribution function for a fluid out of local equilibrium may be separated as \[f_{i}(x,p)=f_{eq,i}(x,p)+\delta f_{i}(x,p) \tag{7}\] where \(f_{eq,i}(x,p)\) is the equilibrium distribution function (Bose-Einstein or Fermi-Dirac for different particle species) and \(\delta f_{i}(x,p)\) is the non-equilibrium correction. Unfortunately, the separation of the distribution function into the equilibrium contribution and a viscous correction, despite the constraints from matching to hydrodynamics, does not fully specify the momentum-dependence of \(\delta f_{i}(x,p)\). This means that the choice of correction remains a modeling choice with inherent ambiguity that can have a notable impact on hadronic observables [77]. To constrain further, the reasonable assumption is made that hydrodynamics and relativistic kinetic theory are simultaneously applicable at the transition between them. Linearized viscous corrections linearize the correction \(\delta f_{i}\) in the shear stress tensor, bulk viscous pressure, and baryon diffusion current. In this study, baryon diffusion is not considered. In linearized viscous corrections, the expansion coefficients are adjusted to exactly reproduce \(T^{\mu\nu}\). Grad's 14-moment approximation expands the correction \(\delta f_{i}(x,p)\) in momentum moments of the distribution function [48], only truncating at the level with terms involving \(p^{\mu}\) and \(p^{\mu}p^{\nu}\), _i.e._ at hydrodynamic order. The Chapman-Enskog expansion is a gradient expansion around \(f_{eq,i}\). The relaxation time approximation (RTA) is used for the collision term of the Boltzmann equation. 
Expanding \(f_{i}\) into its equilibrium component and correction and assuming hydrodynamic gradients are small in comparison to the relaxation time, a first order gradient correction for the thermal distribution may be derived [78].
### Hadron Cascade - SMASH
Once hadrons have been sampled from a hydrodynamic hypersurface, they can be evolved using kinetic theory via the SMASH transport code [27]. The particles interact with each other, scattering, decaying, and forming resonances. These are computed in SMASH using measured particle properties and channels [79] via a tower of coupled Boltzmann equations, \[p^{\mu}\partial_{\mu}f_{i}(x,p)=C[f_{i}] \tag{8}\] where \(i\) is an index over species. Once again, \(f_{i}(x,p)\) is species-specific, representing Bose-Einstein statistics for particles with integer spin and Fermi-Dirac statistics for particles with half-integer spin. The list of species is given in the SMASH documentation.3 This work uses SMASH Version 1.8.
Footnote 3: Available at smash-transport.github.io
## IV Prior specification and experimental design
### Priors
The prior distribution, or state of knowledge, for each parameter must be justified for each study. However, the general form of the prior deserves some attention. Most Bayesian inference studies in heavy ion collisions to date have used uniform priors. Existing guidance from Bayesian practitioners in the statistics community suggests using uniform priors with sharp cutoffs only if that is an accurate reflection of the underlying constraint and not as a general non-informative choice. Additionally, priors may be chosen with features such as boundary-avoidance or invariance under reparametrization [80]. Another important consideration is to interrogate what "weakly-" or "non-informative" means in the absence of explicit reference to the likelihood. If the dominant constraint comes from the prior, then the prior is informative. Conversely, if the likelihood is the dominant source of constraint, then the prior is less informative. In order to bias the priors as little as possible and to move smoothly beyond the uniform distribution, this study uses the symmetric Generalized Normal distribution with varying location \(\mu\), scale \(\alpha\), and shape parameter \(\beta\). The shape parameter \(\beta\) controls the tails of the distribution. In the limit \(\beta\to\infty\), the distribution becomes the uniform distribution. When \(\beta=2\), the distribution is Gaussian, while when \(\beta=1\), the distribution is Laplacian. This provides a flat plateau with smoothly decaying tails, interpolating between the current practice (effectively \(\beta=\infty\)) and priors more reflective of the underlying physics. This distribution has support on the whole real line and can be shifted. Additionally, the Half Generalized Normal distribution exists for instances where a sharp cutoff is reasonable, e.g. positive specific viscosity for non-decrease of entropy. Quantities such as the probability density function (PDF) and cumulative distribution function (CDF) are well defined, as is the entropy of the distribution. The probability density function of the Generalized Normal distribution is \[p(x;\mu,\alpha,\beta)=\frac{\beta}{2\alpha\Gamma(1/\beta)}\,e^{-\left(|x-\mu|/\alpha\right)^{\beta}} \tag{9}\] where \(\Gamma\) is the Gamma function.
### Parameters
In this section, the physical meaning of the free parameters investigated is described.
The specific choices for individual priors are explicitly specified, but the general form of the prior remains the same: a Generalized Normal distribution with a specified shape parameter \(\beta\) and a central 99% interval. Rather than specify values of the location and scale, the central 99% interval is chosen and the parameters that produce this interval are found through numerical optimization. This is more interpretable, as it specifies a 99% degree of belief that the parameters are within a certain range and is directly comparable to the 100% central interval used to characterize the uniform distribution. In this study, only parameters in IP-Glasma and MUSIC are varied. The choice of viscous correction is in effect a parameter in the Cooper-Frye particlization sampling (as implemented in iS3D), but is fixed for each calculation. The parameters in IP-Glasma are mostly fixed via the IP-SAT model's comparison to deep inelastic scattering experiments. Two parameters in IP-Glasma can be considered poorly constrained and are thus included in the Bayesian study: (i) the proportionality between the saturation scale and color charge densities, and (ii) the onset of hydrodynamics. In this study the strong coupling has been fixed to \(g=2\), a value compatible with the bulk of heavy-ion phenomenology at the energies of the LHC [81]. Each parameter in the initial stage model is now described in more detail and is given a shorthand notation.
1. \(\mu_{Q_{s}}\): Multiplier from the saturation scale to the color charge density profile (\(Q_{s}\propto g^{2}\mu\)). In the CGC, these quantities are proportional, but an _a priori_ constraint on this proportionality is not known from theory.
2. \(\tau_{0}\): Proper time of the transition between IP-Glasma and hydrodynamics. In IP-Glasma, the Glasma phase stabilizes within approximately 0.2 fm while flow continues to build, as shown in Fig. 1, reproduced from [82]. The onset time of hydrodynamics is not known with certainty, but estimates have been guided by the fact that, parametrically, gluon saturation should be attained for momentum scales smaller than \(Q_{s}\) [83], which corresponds to a time scale \(\sim 1/Q_{s}\). Practically, IP-Glasma initial states with a proper time span \(0.2<\tau<0.4\) fm have been used [81; 84; 37]. Recent studies with freestreaming [42; 25; 30] extract a longer time to the onset of hydrodynamics than those typically used with IP-Glasma. This work will allow for switching times up to \(\sim 1.2\) fm, informed by these longer hydrodynamic onset times and by studies of the approach to hydrodynamics [85], to determine if such long onset times are favored when using a pre-equilibrium model with microscopic dynamics.
In the relativistic hydrodynamic phase, the temperature dependence of both the shear and bulk viscosity is varied. Because of the parametric flexibility in the viscosity, these parameters dominate the analysis. One more parameter is the particlization temperature between hydrodynamics and hadronic transport. It has been proposed to avoid using a parametrization, which correlates values of the viscosity at different temperatures [43], an idea that has recently been explored in other contexts in heavy ion collisions [86]. This is beyond the scope of this work. This work uses the viscosities as parametrized in [30; 29] but widens the prior ranges.
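As one illustration of how a central 99% interval is converted into Generalized Normal hyperparameters, the sketch below uses `scipy.stats.gennorm`, which implements the density of Eq. (9). For the symmetric case the scale follows directly from a standardized quantile, so the numerical optimization mentioned above reduces to a one-line inversion; the function name and the example interval are illustrative.

```python
from scipy.stats import gennorm

def gennorm_from_central_interval(lo, hi, beta, mass=0.99):
    """Return (mu, alpha) such that a symmetric Generalized Normal with
    shape beta places the central `mass` of probability on [lo, hi]."""
    mu = 0.5 * (lo + hi)
    # standardized quantile q with P(|X| < q) = mass for loc=0, scale=1
    q = gennorm.ppf(0.5 + 0.5 * mass, beta)
    alpha = (hi - mu) / q
    return mu, alpha

# Example: the tau_0 row of Table 1 (0.20 to 1.20 fm, beta = 20)
mu, alpha = gennorm_from_central_interval(0.20, 1.20, beta=20)
prior = gennorm(20, loc=mu, scale=alpha)
print(prior.ppf(0.005), prior.ppf(0.995))   # recovers ~0.20 and ~1.20
```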
The specific shear viscosity is parametrized as \[\eta/s(T)=(\eta/s)_{kink}+a_{\eta,low}\,(T-T_{\eta,kink})\,\Theta(T_{\eta,kink}-T)+a_{\eta,high}\,(T-T_{\eta,kink})\,\Theta(T-T_{\eta,kink}) \tag{10}\] where the function has four parameters: \((\eta/s)_{kink}\), \(a_{\eta,low}\), \(a_{\eta,high}\) and \(T_{\eta,kink}\). These control the value of \(\eta/s\) at the kink, the slopes below and above the kink, and the temperature of the kink. In practice, the parametrized value can become negative, so the value used is \(\max(0,\eta/s)\). Generally, strong coupling implies low viscosities, and the strongest coupling should be in the deconfinement region (below it, quarks are confined, and above it asymptotic freedom reduces the strength of the interaction). This minimum in shear viscosity has been observed for a large number of systems [87; 11; 28]. For QCD, direct calculations and experimental extractions currently produce a variety of results [88; 89] with variable temperature dependence. The specific bulk viscosity is parametrized as the probability density function of a skewed Cauchy distribution, \[\zeta/s(T)=\frac{(\zeta/s)_{\rm max}\Lambda^{2}}{\Lambda^{2}+\left(T-T_{\zeta,c}\right)^{2}},\qquad\Lambda=w_{\zeta}\left[1+\lambda_{\zeta}\operatorname{sign}\left(T-T_{\zeta,c}\right)\right] \tag{11}\] where the function again has four parameters: the maximum of the bulk viscosity \((\zeta/s)_{max}\), the temperature at which the bulk viscosity is maximum \(T_{\zeta,c}\), the width \(w_{\zeta}\) of the bulk viscosity peak and the skewness \(\lambda_{\zeta}\). This parametrization, as highlighted in [30], is based on the expectation that the specific bulk viscosity for QCD matter reaches a peak near the deconfinement transition; this is related to the trace anomaly of QCD or a corresponding dip in the speed of sound in-medium [89; 90; 17; 91; 92]. At high temperature, QCD becomes increasingly conformal and the specific bulk viscosity is expected to smoothly approach zero [93].
Figure 1: Longitudinal and transverse pressure, scaled by the energy density, as a function of proper time in (2+1)D IP-Glasma. Adapted from [82].
The full list of the parameters varied in the relativistic hydrodynamic phase is as follows:
1. \((\eta/s)_{kink}\): The value of \(\eta/s\) at the kink temperature.
2. \(T_{\eta,kink}\): The temperature at which \(\eta/s\) changes slope.
3. \(a_{\eta,low}\): The slope of \(\eta/s\) below the kink temperature. This is broadly expected to be negative or 0, but has not yet been constrained conclusively by model-to-data comparison.
4. \(a_{\eta,high}\): The slope of \(\eta/s\) above the kink temperature. This is anticipated to be positive definite, but has not yet been constrained conclusively by model-to-data comparison. A theoretical exception to this expectation can be found in the NJL model for SU(3) [94].
5. \((\zeta/s)_{max}\): The maximum of \(\zeta/s\).
6. \(T_{\zeta,c}\): The temperature of the maximum of \(\zeta/s\).
7. \(w_{\zeta}\): The width of the peak in \(\zeta/s\).
8. \(\lambda_{\zeta}\): The asymmetry of the peak in \(\zeta/s\).
9. \(T_{sw}\): The particlization or switching temperature. A surface at constant temperature is drawn (assuming no baryochemical potential) from which hadrons are sampled with the Cooper-Frye formula, implemented in the iS3D code. The individual hadrons are then described with hadronic transport (SMASH).
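A direct numerical transcription of Eqs. (10) and (11) is sketched below; the function names and the example parameter values are illustrative shorthand for the symbols defined in the list above.

```python
import numpy as np

def eta_over_s(T, eta_kink, T_eta_kink, a_low, a_high):
    """Specific shear viscosity of Eq. (10), floored at zero as in the text."""
    slope = np.where(T < T_eta_kink, a_low, a_high)
    return np.maximum(0.0, eta_kink + slope * (T - T_eta_kink))

def zeta_over_s(T, zeta_max, T_zeta_c, w_zeta, lambda_zeta):
    """Specific bulk viscosity of Eq. (11): a skewed Cauchy-like peak."""
    Lambda = w_zeta * (1.0 + lambda_zeta * np.sign(T - T_zeta_c))
    return zeta_max * Lambda**2 / (Lambda**2 + (T - T_zeta_c)**2)

# Example evaluation on a temperature grid (GeV); values purely illustrative
T = np.linspace(0.135, 0.40, 200)
eta = eta_over_s(T, eta_kink=0.10, T_eta_kink=0.22, a_low=-0.5, a_high=0.3)
zeta = zeta_over_s(T, zeta_max=0.10, T_zeta_c=0.20, w_zeta=0.05, lambda_zeta=-0.2)
```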
As stated earlier, the parameters related to IP-Glasma are \(\mu_{Q_{s}}\) and \(\tau_{0}\), the latter defining the boundary with the hydrodynamics phase. With other parameters held fixed, \(\mu_{Q_{s}}\) was varied and the final multiplicity dependence was observed and used to determine a broad range for this parameter. Allowing for an approximate factor of two in the prior yields the 99% prior range seen in Table 1. The prior for \(\tau_{0}\) is extended to times considered late by most applications, to ensure those values are properly explored. The remaining parameters whose priors must be motivated are those of the hydrodynamic stage: the 8 parameters of the specific shear and bulk viscosity as well as the particlization temperature. For the parameters of the specific shear viscosity, the parametrizations presented earlier and shown in Fig. 2 are used. The priors for the parametrization of the specific shear viscosity were widened compared to those used in a previous study [30], and both signs of the slope below and above the kink were explored. The shape of the specific bulk viscosity allowed for a peaked distribution, with variable width, asymmetry, and normalization. The form of the priors for each parameter is again the symmetric Generalized Normal distribution, or the half symmetric Generalized Normal distribution if the quantity is commensurate with a sharp cutoff (e.g. is required to be positive definite). The full set of parameter priors, with the central 99% range and the Generalized Normal distribution shape parameter \(\beta\), is collected in Table 1.
Figure 2: The parametrization of the viscosities.
### Maximum Projection Designs
Previous studies using uniform priors have sampled the allowed parameter space using maximin Latin hypercube sampling (LHS) techniques, which maximize the minimum Euclidean distance between points. Latin hypercubes are designed to provide uniform coverage when projected into 1 dimension, while the maximin algorithm helps select points that give a fairly reasonable coverage of the volume. An issue that may arise in surrogate modeling is that not all parameters are equally impactful; some may even have little impact on the final result. As a result, there is a projection of the full design space that impacts the outputs, called the "active subspace". It is not possible to know the active subspace ahead of time, but it is possible to construct a space filling design that maximizes _all_ arbitrary projections of the space to lower dimensions. This is the idea behind the Maximum Projection (MaxPro) design strategy [95]. Specifically, this study will utilize a MaxPro Latin Hypercube Design. The sampling must also be made commensurate with the parameter ranges and priors used. This is accomplished by sampling designs on a unit hypercube with the relevant number of dimensions. The priors are chosen and the percent point function (or quantile) can be straightforwardly calculated. The sample location on each dimension of the unit hypercube corresponds to a percentile of the prior range in each dimension. This ensures uniform coverage of the probability volume by weighting by the prior density. This deformation technique is shown for a simple 2D example in Fig. 3 and its success has already been demonstrated [96]. In this study, 350 design points were used for the primary choice of model, which uses Grad's viscous corrections at particlization.
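The deformation of a space-filling design by the prior percent point function, illustrated in Fig. 3, can be sketched as follows. A plain Latin hypercube from `scipy.stats.qmc` stands in for the MaxPro design (which is generated with dedicated software), and the two priors shown use illustrative location and scale values rather than the optimized hyperparameters of Table 1.

```python
import numpy as np
from scipy.stats import gennorm, qmc

# Illustrative priors for two dimensions (frozen Generalized Normals)
priors = [gennorm(10, loc=0.725, scale=0.17),   # e.g. mu_{Q_s}
          gennorm(20, loc=0.70,  scale=0.49)]   # e.g. tau_0 [fm]

# Space-filling design on the unit hypercube; the study uses a MaxPro LHS,
# an ordinary LHS is shown here as a stand-in
unit_design = qmc.LatinHypercube(d=len(priors), seed=1).random(n=350)

# Each unit-cube coordinate is read as a percentile of the corresponding prior
design = np.column_stack([prior.ppf(unit_design[:, j])
                          for j, prior in enumerate(priors)])
```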
An additional 50 design points, which also maximize the MaxPro metric, were generated for model calculations with Chapman-Enskog viscous corrections. ## V Model validation It is important to investigate which parameters are both reliably constrained using the underlying hybrid model and are reliably emulated by the Gaussian process surrogate model. This step, revisited at the outset of each study, must be performed to ensure that predictions made by the Gaussian process emulators are sensible and will provide physical - rather than spurious - constraint. ### Forward model validation The physical observables we shall consider are divided into two classes that we label "first generation observables" and "next generation observables". This distinction is somewhat arbitrary but receives some support from chronology. The first generation observables broadly describe large-scale features of the fireball and add four-particle radial Fourier coefficients to the set of observables used in a previous Bayes study [29; 30] with the exception of correlated momentum fluctuations. More specifically, the quantities in this class are * \(dN_{\rm ch}/d\eta\): The number of charged hadrons per unit pseudorapidity. Measurements are from the ALICE Collaboration [97]. * \(dN_{i}/dy\), \(i\in\{\)\(\pi\), p, K, etc.\(\}\): Identified charged hadrons per unit rapidity. Measurements are from the ALICE Collaboration [98]. * \(dE_{\rm T}/d\eta\): Transverse energy, defined as \(E_{\rm T}=\sqrt{m^{2}+p_{\rm T}^{2}}\), per unit rapidity. Measurements are from the ALICE Collaboration [99]. * \(\langle p_{\rm T}\rangle_{i}\), \(i\in\{\pi\), p, K\(\}\): Mean transverse momenta of identified hadrons. Measurements are from the ALICE Collaboration [98]. * \(v_{n}\{2\}\): Two-particle radial Fourier coefficients. Measurements are from the ALICE Collaboration [100]. * \(v_{n}\{4\}\): Four-particle radial Fourier coefficients. Measurements are from the ALICE Collaboration [101]. The "next generation observables" explore correlations between geometric features or momentum fluctuations and decompositions of observables into a linear and nonlinear response of the medium. Again, more specifically: Figure 3: Deformation of a 2 dimensional Maximum Projection design on the unit hypercube centred at 0 according to a standard symmetric Generalized Normal distribution with \(\beta=10\). The points of the centred unit hypercube are highlighted with a square box and are shown in blue, while points shown in orange have been deformed as described. 
\begin{table}
\begin{tabular}{l c c c c}
Parameter & \(0.5^{th}\) percentile & \(99.5^{th}\) percentile & \(\beta\) & Distribution \\ \hline
\(\mu_{Q_{s}}\) & 0.55 & 0.90 & 10 & Generalized Normal \\ \hline
\(\tau_{0}\) [fm] & 0.20 & 1.20 & 20 & Generalized Normal \\ \hline
\(T_{\eta,kink}\) [GeV] & 0.120 & 0.320 & 20 & Generalized Normal \\ \hline
\(a_{\eta,low}\) [GeV\({}^{-1}\)] & -2.10 & 1.20 & 20 & Generalized Normal \\ \hline
\(a_{\eta,high}\) [GeV\({}^{-1}\)] & -1.20 & 2.10 & 20 & Generalized Normal \\ \hline
\((\eta/s)_{kink}\) & 0.00 & 0.30 & 10 & Half Generalized Normal \\ \hline
\((\zeta/s)_{max}\) & 0.00 & 0.30 & 10 & Half Generalized Normal \\ \hline
\(T_{\zeta,c}\) [GeV] & 0.100 & 0.350 & 10 & Generalized Normal \\ \hline
\(w_{\zeta}\) [GeV] & 0.02 & 0.18 & 30 & Generalized Normal \\ \hline
\(\lambda_{\zeta}\) & -1.0 & 1.0 & 20 & Generalized Normal \\ \hline
\(T_{sw}\) [GeV] & 0.135 & 0.180 & 10 & Generalized Normal \\ \hline
\end{tabular}
\end{table}
Table 1: Prior hyperparameters and distributions for each parameter varied.
Figure 4: Calculations at each design point forming the prior predictive distribution for each observable. Points are experimental data.
Figure 5: Emulated vs. computed for all observables considered. Successful emulation is clustered around \(y=x\), shown as a dashed line. Error in the x-direction is emulator uncertainty while error in the y direction is uncertainty from the hybrid model.
* Two- and three-plane Scalar Product Event Plane Correlators: Correlations between expansion coefficients \(v_{n}\) reveal patterns of fluctuations in the initial state and non-linear effects in hydrodynamics. Measurements, as well as detailed definitions, are from the ATLAS Collaboration [102]. These patterns are coupled, and reproduction of them in parametric models has been shown to be highly model-dependent [103]. The ALICE Collaboration measures similar quantities, which are also used, statistics allowing [104].
* \(\chi_{n,mk}\): Nonlinear response coefficients that quantify mixing between higher- and lower-order modes. These decompose higher order \(v_{n}\) into a linear component from the corresponding position space energy density Fourier coefficients (\(\epsilon_{n}\)) and a non-linear component from lower modes. For example, \(v_{5}=v_{5}^{\rm L}+\chi_{5,32}\,v_{3}v_{2}\). Measurements and more details may be found in [104].
* Linear and non-linear flow modes: these quantify the linear and non-linear response of the flow to collision geometry, similar to the event plane correlators and \(\chi_{n,mk}\) above [104].
* \(\delta p_{\rm T}/\langle p_{\rm T}\rangle\): Correlated transverse momentum fluctuations. This quantifies the correlations between deviations from the mean transverse momentum. If the deviations are uncorrelated over all events, this quantity is 0 [105].
The purpose of using these carefully chosen observables is to efficiently constrain the properties of strongly-interacting matter. For example, the multiplicities constrain the overall energy of the system, the radial Fourier coefficients constrain the momentum-space geometry of the hydrodynamic stage, and next-generation observables couple various aspects of the medium evolution.
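For reference, the two-particle coefficients \(v_{n}\{2\}\) listed among the first generation observables can be computed from final-state particles with the standard Q-vector (two-particle cumulant) construction. The sketch below assumes a hypothetical list of per-event azimuthal angle arrays and omits the particle weights and pseudorapidity-gap choices used in the actual measurements.

```python
import numpy as np

def vn2(events_phi, n):
    """v_n{2} from the two-particle cumulant:
    c_n{2} = (|Q_n|^2 - M) / (M (M - 1)) per event, with
    v_n{2} = sqrt(<c_n{2}>) averaged over events."""
    cn2 = []
    for phi in events_phi:             # phi: azimuthal angles of one event
        M = len(phi)
        if M < 2:
            continue
        Qn = np.sum(np.exp(1j * n * np.asarray(phi)))
        cn2.append((np.abs(Qn)**2 - M) / (M * (M - 1)))
    return np.sqrt(np.mean(cn2))
```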
The set of observables that are reliably calculated and distinguishable from statistical fluctuations are again all of what we will call "first generation observables"; the nonlinear response coefficients \(\chi_{4,22}\), \(\chi_{5,23}\), \(\chi_{6,222}\), and \(\chi_{6,33}\); the linear and nonlinear flow modes \(v_{4}^{L}\), \(v_{4}(\Psi_{2})\), \(v_{5}(\Psi_{23})\), \(v_{6}(\Psi_{2})\), \(v_{6}(\Psi_{3})\); and the event plane correlations \(\rho_{422}\), \(\langle\cos(4(\Phi_{2}-\Phi_{4}))\rangle\), \(\langle\cos(6(\Phi_{2}-\Phi_{3}))\rangle\), \(\langle\cos(6(\Phi_{2}-\Phi_{6}))\rangle\), \(\langle\cos(4(\Phi_{3}-\Phi_{6}))\rangle\), \(\langle\cos(2\Phi_{2}+3\Phi_{3}-5\Phi_{5})\rangle\), \(\langle\cos(2\Phi_{2}+4\Phi_{4}-6\Phi_{6})\rangle\), and \(\langle\cos(2\Phi_{2}-6\Phi_{3}+4\Phi_{4})\rangle\). The calculation of these observables at each design point are shown in Fig. 4. Principal component analysis (PCA) is now performed. In a space defined by the observables, where each dimension corresponds to a particular observed quantity, it is possible to identify correlations. Principal component analysis is a simple technique to "rotate" in observable space into a linear combination of the original axes such that every dimension of the data is linearly independent. This rotation is also invertible, meaning that predictions can be made for the transformed space and inverted back to the observable space. This is useful as it is no longer necessary to interpolate between hundreds of dimensions in the observable space, but rather only interpolate in a \(\mathcal{O}(10)\) dimensional space, which is much more feasible. Another way to think of this rotation is by a decomposition of the data in question to its eigenvalues and eigenvectors. The eigenvalues are the fraction of the total variance in the data described by each eigenvector. 30 principal components explain 90.642% of the variance in the calculations. Finally, emulators are trained on the principal components vs. the parameters and predictions for observables can be made. The emulator predictions at validation points vs. the computed results are shown in Fig. 5. Observables that are only loosely clustered along the \(y=x\) lines in Fig. 5 are not kept for the final analysis and observables that are extremely uncertain are also not included. This constitutes "forward model validation." Given a known set of inputs, the predictions are compared to model calculations and observables the surrogate model predicts poorly are inappropriate for inclusion in a physics study. Finally, the \(\langle\cos(4(\Phi_{2}-\Phi_{4}))\rangle\) event plane correlator is not included as it quantifies the same correlation as the \(\rho_{422}\) correlator and has an overall bias. The finalized set of observables for testing self-consistency and comparison to data is comprised of the first generation observables; the flow modes \(v_{4}^{L}\), \(v_{4}(\Psi_{2})\), \(v_{5}(\Psi_{23})\), \(v_{6}(\Psi_{2})\); and the plane correlations \(\rho_{422}\), \(\langle\cos(2\Phi_{2}+3\Phi_{3}-5\Phi_{5})\rangle\), and \(\langle\cos(2\Phi_{2}+4\Phi_{4}-6\Phi_{6})\rangle\). While not included in the Bayesian calibration, the excluded observables remain excellent candidates for predictions with higher-statistics calculations to test the posterior state of knowledge. 
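The dimensional reduction described above amounts to a standardization of the observables followed by a principal component decomposition; a minimal version with `scikit-learn` is sketched below, where `Y` is a hypothetical (design points × observables) matrix of model outputs.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Y: hypothetical (n_design, n_observables) matrix of model calculations;
# the placeholder shape mirrors the dimensions quoted in the text
Y = np.random.rand(350, 161)

scaler = StandardScaler().fit(Y)
pca = PCA(n_components=30, whiten=True).fit(scaler.transform(Y))

Z = pca.transform(scaler.transform(Y))        # principal components
print(pca.explained_variance_ratio_.sum())    # fraction of variance retained

# The rotation is invertible: predictions made in PC space can be mapped
# back to the original observable space
Y_back = scaler.inverse_transform(pca.inverse_transform(Z))
```

Gaussian process emulators are then trained on each retained principal component as a function of the model parameters.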
Once these observables have been selected, the principal component analysis and Gaussian process emulation is repeated and are found to be sufficiently reliable for performing self-consistency tests and comparisons to data. Further details of the principal component analysis for the final observable set are shown in Fig. 6, where the relationship between the first three principal components (PCs) are shown. The first few principal components contain the majority of the variance of the data and it can be clearly seen that the first three PCs relate clearly to the observables, further supporting the idea that they are successfully reducing the dimensionality of the data with minimal loss of underlying signal. With the final observable set, 30 principal components explain 97.94% of the variance in the data. The full set of principal components to explain the total variance in the data consists of 161 PCs, meaning that the remaining 131 principal components represent 2.06% of the variance in the data, which is almost certainly dominated by noise in the underlying calculations. Note that the presence of exclusively linear correlations between observables must be (and has been) investigated for the final set of chosen observables, but is sufficiently large (a 334 x 334 matrix of plots to show pairwise combinations of every observable in every centrality) as to not fit in this work. #### iv.1.1 Transfer learning for Chapman-Enskog \(\delta f\) Viscous corrections at particlization are an important source of uncontrolled theoretical uncertainty to quantify. An extremely computationally-efficient way to control the uncertainty is using transfer learning. This uses information learned from a source system - in this study, the already validated Grad viscous correction - to learn about a similar target system, the Chapman-Enskog RTA \(\delta f\). By construction, these are both linearized viscous corrections and are designed to be small corrections to the equilibrium distribution function. This is a prime opportunity to use transfer learning to enable Bayesian inference for the first time in heavy ion collisions. Transfer learning is implemented using _emukit_ and _GPy_'s [106; 107] multifidelity emulation framework and follows the proof-of-concept in [46]. We build on this proof of concept by additionally incorporating principal component analysis and evaluating the covariance matrix necessary for evaluating the likelihood function, thereby enabling the use of transfer learning in full-scale Bayesian inference studies. The information contained in the principal component analysis for the Grad viscous corrections (see Fig. 6) is exploited so that the transfer learning can take place on the principal components. The Grad PCA, trained on a large number of design points, can be understood to perform a critical covariance-revealing and noise-filtering function. By acting as a rotation in the observable space, the true underlying signal is contained in the first \(N\) PCs and noise fluctuations are reduced. This reveals mutual information between observables, _e.g._ that one can be fairly confident of \(dN_{ch}/d\eta\) in the 30-40% bin given its value in the 0-5% bin. It also means that observables that require higher statistics to calculate reliably, such as \(\delta p_{T}/\langle p_{T}\rangle\), become correlated with observables that do not, resulting in noise reduction and more successful surrogate modeling. 
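The transfer learning itself is implemented with the emukit/GPy multifidelity framework cited above. As a rough illustration of the underlying idea only, the sketch below builds a linear multi-fidelity (autoregressive) model by hand with two scikit-learn Gaussian processes: the target (Chapman-Enskog) response is modeled as a scaling of the source (Grad) emulator plus a learned discrepancy. All names and the scalar-output simplification are assumptions; the emulators in the study act on the shared principal components rather than on a single output.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def fit_transfer_emulator(X_src, y_src, X_tgt, y_tgt):
    """X_src/y_src: many source-model designs and one PC of their outputs;
    X_tgt/y_tgt: few target-model designs and the same PC of their outputs."""
    kernel = ConstantKernel() * RBF(length_scale=np.ones(X_src.shape[1]))
    gp_src = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp_src.fit(X_src, y_src)

    # Autoregressive step: y_tgt ~ rho * gp_src(X_tgt) + delta(X_tgt)
    mu_src_at_tgt = gp_src.predict(X_tgt)
    rho = np.polyfit(mu_src_at_tgt, y_tgt, 1)[0]   # crude scale estimate
    gp_delta = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp_delta.fit(X_tgt, y_tgt - rho * mu_src_at_tgt)

    def predict(X):
        return rho * gp_src.predict(X) + gp_delta.predict(X)
    return predict
```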
Additionally, by training the transfer learning emulator on the same principal components as the source emulator, the comparison between the two is put on an even footing. A second improvement to the transfer learning is using "transformed parameters", introduced and used in [29; 30; 108]. Although the parametrization of the specific shear and bulk viscosity may appear intuitive and concise, it can present challenges to nonparametric models such as Gaussian processes, since the relationship between the observables and these parameters can be highly non-linear and non-uniform. However, observables are often more straightforwardly-dependent on the value of the specific shear and bulk at a given temperature. For any one set of parameters in Eqs. 10 and 11 there exists one and only one set of values of \(\eta/s\) and \(\zeta/s\) at a set of temperatures, and a one-to-one mapping takes place. Thus, no information is gained or lost by performing this transformation. By using the transformed observables, the transfer learning emulator's mean squared error was reduced by a factor between 2 and 20 for every observable considered as well as corresponding improvement in the distance between the coefficient of determination \(R^{2}\) and its maximum value of one. Finally, software changes were made to make it indistinguishable from the original Emulator object and therefore compatible with existing MCMC software and ready for use. This software, as well as the general improvements to the heavy-ion collisions Bayesian software implemented for this study, will be made available on GitHub [109]. The transfer learning emulator validation begins with comparing emulated predictions to computed values at validation points not used in training, shown in Fig. 7. All the observables considered for the study with Grad viscous corrections are well predicted by the transfer Figure 6: Observable relation to the first three principal components. learning model, in some cases even better than the source emulator trained on the full design. Uncertainties are often larger in the transfer learning model than in the Grad emulator, but this does not interfere significantly with the quality of predictions and is consistent with having two Gaussian Processes, each with their own variance, rather than just one. Predictions by the transfer learning emulator are broadly consistent with the true values and the emulator uncertainty is well-balanced with the computed uncertainty in the most statistics-hungry observables. Were one source of uncertainty systematically larger than the other, this would suggest imbalance between the number of design points and the number of model runs at each design point [110], which must be judged by the most statistics-hungry calculations. In this case, there are the correlated momentum fluctuations and event plane correlators: \(\delta p_{T}/\langle p_{T}\rangle\), \(\rho_{422}\), and \(\langle\cos(2\Phi_{2}+4\Phi_{4}+6\Phi_{6})\rangle_{SP}\). Further worth highlighting is what appears to be a slight emulator bias in the three-plane correlators in Fig. 5 is resolved in the transfer learning emulation, suggesting yet-more accurate predictions. ### Inverse model validation Once again, the model is tested for self-consistency with pseudodata generated by the underlying multistage model at known points in the parameter space that were not used in training the surrogate model. 
The surrogate model is then used for inference with pseudodata and the resulting posterior is investigated to determine how well it recovers the underlying truth. Because a particular parametrization has been chosen for the specific shear and bulk viscosity, the test for self-consistency is best presented as, for example, \(\eta/s\) vs. temperature. After all, despite the motivation for the parametrization, the physics is contained in the temperature dependence of the viscosity, not a particular representation. It is cumbersome to show this result for all validation points, but care is taken to show a representative sample of validation points in this section. The parameters related to the hydrodynamic viscosities are shown separately from those not related to viscosity, the former shown as \(\eta/s\) or \(\zeta/s\) vs. temperature. No discernible covariance is seen between the two groups of parameters.
Figure 7: Transfer learning emulated vs. computed for all observables considered. Validation points are shown with a consistent color to identify correlations between points. The diagonal dashed line is located at \(y=x\), and denotes perfect prediction.
While no covariances are seen, when the model is pushed to the edges of the prior region, the distribution can become bi-modal. Examples are shown in Figs. 8-10. What is important to inspect is whether the posterior consistently contains the known truth. For example, does the true value fall within the 90% credible interval (C.I.) approximately 90% of the time? If so, then it is plausible that, provided with a 90% credible interval, a gambler would break exactly even assuming they were presented with fair odds by the bookmaker. This is clearly the case for the results shown in Figs. 8 and 10. The sample validation points chosen for these figures additionally demonstrate the resolution of a large, relatively flat bulk viscosity (Fig. 10) and a bulk viscosity with a comparatively sudden peak at high temperature (Fig. 8), in addition to a variety of \(\eta/s\). All are recovered well and within the 90% credible interval, although one needs to consider this in tandem with Figs. 5 and 7 to be confident in closure performance. The posteriors in Fig. 9 demonstrate a strong bimodality for Grad viscous corrections and bias in C.-E. \(\delta f\) and, while the truths are partially recovered, the posteriors seem at odds with physical intuition and are not in particularly good agreement with each other, such as in \(\tau_{0}\). This occurs because the true value of the bulk viscosity peak is _below_ the particlization temperature and a bimodality develops in \(\zeta/s\) for Grad \(\delta f\), while the C.-E. \(\delta f\) attempts to compensate and does not resolve the second \(\zeta/s\) mode and poorly resolves \(\eta/s\). For each peak of \(\zeta/s\), a different value of the switching time between IP-Glasma and MUSIC is preferred as the model is pushed into a corner, causing bimodality in the posterior of the initial condition and particlization parameters. The observable that couples these quantities is \(\delta p_{T}/\langle p_{T}\rangle\), whose pseudodata is noisier than the experimental data, further exacerbating the issue. This is an example of an interpretable failure at an edge case of the parameter space. It is intuitive that the model struggles to reproduce true values of hydrodynamic quantities that are located outside the hydrodynamic evolution.
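The coverage question posed above (does the truth fall inside the 90% credible interval roughly 90% of the time?) can be checked mechanically across the validation points; the sketch below assumes hypothetical arrays of posterior samples and known truths for one parameter.

```python
import numpy as np

def empirical_coverage(posterior_samples, truths, level=0.90):
    """Fraction of validation points whose true parameter value lies inside
    the central `level` credible interval of the corresponding posterior.
    posterior_samples: list of 1D arrays of posterior draws (one per point);
    truths: array of the known values used to generate the pseudodata."""
    lo_q, hi_q = 0.5 * (1 - level), 0.5 * (1 + level)
    hits = 0
    for samples, truth in zip(posterior_samples, truths):
        lo, hi = np.quantile(samples, [lo_q, hi_q])
        hits += (lo <= truth <= hi)
    return hits / len(truths)
```

A well-calibrated closure test returns a coverage fraction close to the nominal level for every parameter.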
Joint priors (_i.e._ requiring the bulk peak temperature to be greater than particlization) have not yet been developed for heavy ion collision studies and doing so is beyond the scope of this work. Note as well that this is a particular feature of the multi-modal bulk viscosity as the true value of \(T_{sw}\) in Fig. 8 is close to the edge but can still be well Figure 8: Posterior distributions of non-viscous (top) and viscous (bottom) parameters for a sample validation point. The true values are highlighted in black (top). The quoted values are the median and 95% C.I. Figure 9: Posterior distributions of non-viscous (top) and viscous (bottom) parameters for a sample validation point. The true values are highlighted in black (top). This is an important example of interpretable failure. The quoted values are the median and 95% C.I. constrained. Nonetheless, the ability to interpret these failures of the modeling workflow further strengthens the results derived from this study. A reassuring feature of the inferential framework is that all of the closure points reproduce the pseudodata well, as exemplified in Fig. 11. As can be seen, the emulator is not overfitting by going through every potentially noisy data point, but is instead robust to statistical fluctuations in the underlying data. This further suggests that the model is behaving well and is well-conditioned for the problem at hand while also not exhibiting strong bias. An example of low-bias can also be seen in the marginal distributions for \(\mu_{Q_{s}}\) in Figs. 8-10 - the truth is not always exactly located at the peak of the marginal distribution, but instead the peaks are distributed around the true value. Additionally, the two \(\delta f\) models are differentiable and provide further evidence that the transfer learning model is not simply reproducing the source model's results. An exciting feature in these closure tests in comparison to previous studies is the constraint on \(\eta/s\) and \(\zeta/s\) at higher temperatures. In previous studies, constraint was limited to the low temperature regions and the model was insensitive to the high-temperature (or early-time) behavior of the fireball evolution unless the temperature dependence was explicitly specified by the parametrization [29, 25, 30, 31, 42]. In these closure test, for the first time, constraint on the viscosity can be achieved even at high temperature. This raises the exciting prospect that the viscosity of strongly-interacting matter in heavy ion collisions may be constrained to an unprecedented precision without sacrificing accuracy. ## VI Inference with LHC data ### Grad and C-E posteriors Now that the model is known to behave in accordance with expectations for test points and failures are interpretable, the validation pseudodata is exchanged for real experimental data. The previous section has confidently established that the Bayesian parameter estimation produces reasonable results for known inputs, leading to the belief that this should plausibly reveal the underlying properties of experimentally-produced quark-gluon plasma in heavy ion collisions. The repeated validation, observable selection, closure testing, and sanity checks of the surrogate modeling and inference have established beyond a reasonable doubt that the models are reliable and well-conditioned for the problem at hand. The calculations at the design points form the prior predictive distribution and were shown in Fig. 4 for a superset of observables. 
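Before examining how well these calculations cover the data, it is worth making the structure of the inference step explicit. The sketch below shows a generic Gaussian log-likelihood built from emulator predictions and a combined experimental-plus-emulator covariance, added to the log-prior; the `emulator` object, its `predict` signature, and the use of frozen scipy prior objects are illustrative stand-ins, and the parallel-tempering sampler used in the study is not reproduced here.

```python
import numpy as np

def log_posterior(theta, emulator, y_exp, cov_exp, priors):
    """Generic Gaussian log-likelihood plus log-prior.
    emulator.predict(theta) is assumed (hypothetically) to return the mean
    prediction and the emulator covariance in observable space; priors is a
    list of frozen scipy.stats distributions, one per parameter."""
    log_prior = sum(p.logpdf(t) for p, t in zip(priors, theta))
    if not np.isfinite(log_prior):
        return -np.inf
    y_pred, cov_emu = emulator.predict(theta)
    residual = y_exp - y_pred
    cov = cov_exp + cov_emu              # experimental + emulator uncertainty
    _, logdet = np.linalg.slogdet(cov)
    log_like = -0.5 * (residual @ np.linalg.solve(cov, residual)
                       + logdet + len(residual) * np.log(2.0 * np.pi))
    return log_prior + log_like
```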
These calculations cover the experimental results well, although correlations between calculations are difficult to discern and likely introduce some tension. The MCMC is again performed using a parallel tempering algorithm. The above closure test and the below comparison to data are performed using Grad's 14 Figure 11: Posterior predictive distributions with Grad viscous corrections for the posterior shown in Fig. 10 with pseudodata used for comparison shown as data points. Figure 10: Posterior distributions of non-viscous (top) and viscous (bottom) parameters for a sample validation point. The true values are highlighted in black. The quoted values are the median and 95% C.I. moment viscous corrections and while the above closure tests were performed with 10,000 MCMC steps with 10 walkers per dimension and 10 rungs in the parallel tempering temperature ladder, the below comparison to data is performed with 20,000 MCMC steps with 50 walkers per dimension and 20 rungs in the parallel tempering ladder for improved sampling resolution. The trace, moving average, and autocorrelation of the final MCMC chain is shown in Fig. 12 for three sample walkers. It is important to note that these walkers have clearly thermalized, as the trace exhibits no discernible autocorrelation and are thus sampling from the target distribution. With confidence in the MCMC, it is finally time to look at the posterior distribution after comparison with data. The non-viscous parameter posterior for both viscous corrections is shown in Fig. 13, the viscous posterior for both viscous corrections is shown in Fig. 14, and the marginal and joint marginal distributions of the 11-dimensional posterior are shown in Fig. 15. The non-viscous parameters demonstrate clear constraint, particularly in the case of the normalization \(\mu_{Q_{s}}\). The switching time between IP-Glasma and MUSIC is well-localized to early times \(\tau_{0}\lesssim 0.7\) fm, which is in accordance with previous experience and appears not to favor very late hydrodynamic onset times. The particlization temperature \(T_{sw}\) is also well-constrained within the prior region. A recent estimate of the crossover temperature from lattice QCD places it at \(T_{c}=155\pm 1.5\) MeV [111], precisely in the region of highest posterior density for the particlization temperature. The constraint of the particlization temperature is particularly interesting as the chemistry of the hydrodynamic medium is identical to that of [30], which required a much lower particlization temperature with the same viscous correction. This also provides a limit on the lifetime over which the viscosity can act by reducing the lifetime of the fireball, placing limits on the viscous contribution. As demonstrated in the closure tests, the viscosity is not required to be small in this region and so has the potential to be influenced by large viscous corrections. However, this is seen to not be the case: in the temperature region probed by particlization - approximately bounded by 0.14 and 0.18 GeV - the data itself prefers the specific bulk viscosity to be small. In testing for self-consistency, it was found that the model can recover large viscosity at particlization (Fig. 10), meaning that the demand for small viscous corrections is an authentic feature of the data. In Fig. 14, the temperature-dependent specific bulk viscosity \(\zeta/s\) demonstrates a clear peak and the 99% C.I. 
is inconsistent with 0 below \(T\approx 0.34\) GeV for Grad viscous corrections while for Chapman-Enskog, it is inconsistent with zero over the entire range shown. Randomly Figure 12: MCMC trace, moving average, and autocorrelation from comparison to experimental data with Grad viscous corrections. The C.-E. MCMC behavior is comparable. Figure 13: Posterior distributions of non-viscous parameters from comparison to experimental data with Grad \(\delta f\) (blue, lower triangle of the sub-figure matrix) and Chapman-Enskog \(\delta f\) (red, upper triangle). The quoted values along the diagonal are the median and 95% C.I. of the 1-dimensional marginal distribution. drawn example samples from the Grad posterior are shown in Fig. 16, demonstrating the diversity of choices that are compatible with data. The constraint certainly weakens at high temperature, but the peaked specific bulk viscosity is well-constrained at low and intermediate temperatures. This is the first time a large, nonzero, peaked specific bulk viscosity has been recovered from data. A peaked result is consistent with expectations from previously-used bulk viscosity motivated by purely physical considerations, demonstrating phenomenological self-consistency between purely theoretical considerations and model-to-data comparison, although the extracted peak is (mostly) constrained to be at higher temperatures. The peak of the specific bulk viscosity shifts slightly between the two viscous correction models, but the posteriors are broadly consistent with each other, particularly the 60% credible intervals. Both viscous correction models strongly indicate a peaked nonzero bulk viscosity throughout the hydrodynamic evolution. An unexpected feature of the viscous posterior is a slight preference for a negatively-sloped specific shear viscosity at higher temperatures. This is driven in part by peripheral \(v_{3}\{2\}\), a fluctuation-driven quantity, and central \(v_{4}\{2\}\). As higher temperatures correspond to earlier times in the fireball evolution, this decreasing high-temperature \(\eta/s\) dissipates initial-state fluctuations more slowly. However - and importantly - the high-temperature \(\eta/s\) posterior is still statistically compatible with both a flat line through the 99% credible interval and the AdS/CFT-derived value of \(1/4\pi\)[112]. This is a consideration worth investigating in more depth. Note that a decreasing specific shear viscosity is also the result of a Bayesian analysis with parametric initial conditions which allows for a varying nucleon size [113]. To ensure the quality of the fit and to identify tension in the model, one can inspect the posterior predictive distribution (Fig. 17) and the ratio of the posterior predictive distribution to experimental data (Fig. 18). It is clear from the posterior predictive distributions that the model fits the data well but exhibits tension, seen in the transverse energy and the three-plane correlators. The tension involving \(dE_{T}/d\eta\) is not new to this work and was also seen in [43]. This suggests that it is a feature independent of the pre-equilibrium stage and potentially a feature of either the specifics of the hadronic chemistry or - less likely - a feature of 2+1D hydrodynamics. As the transverse energy captures correlations between particle multiplicity and transverse momentum, the chemical explanation is more plausible. The difficulty reproducing the three-plane correlators is also not new, but the postdictions shown in Fig. 
17 are consistent with past tension. Of note is that the C.-E. \(\delta f\) is closer to reproducing these correlations than the Grad viscous corrections. Insight can be gained by investigating the sensitivity of these observables to various parameters. These comparisons are shown in Appendix A. The dominant sensitivities are to normalization and the shear viscosity kink temperature, similar to the anisotropic flow that naturally influence the correlations. The other potential underlying cause of difficulty in matching these observables is geometric - the observables match as well as they can, but the prior predictive distributions do not cover the data. With the geometry in IP-Glasma fixed by nuclear configurations and deep inelastic scattering, insufficient freedom remains. Before leaving this to future analysis, it must be noted that \(\delta p_{T}/\langle p_{T}\rangle\) is also at the edge of the prior predictive region. If the three-plane correlators and the \(p_{T}\) fluctuations are correlated, this has potential to reveal further insight. The correlation between these observables at mid-centrality is shown in Fig. 19 and reveals that these observables are uncorrelated, suggesting that their tension is independent. A future analysis should attempt to address this by revisiting the constraint from deep inelastic scattering simultaneously with observables from heavy ion collisions. The posterior predictive distribution for the correlated \(p_{T}\) fluctuations produces the most accurate postdiction of any IP-Glasma calculation and yields the correct centrality dependence, a feature not seen in other models. Investigating this sensitivity, the overall magnitude is reduced by a larger \((\zeta/s)_{max}\) and constraints \(\tau_{0}\) to early times. This suggests yet further that the bulk viscosity must be further investigated for a narrower, taller peak to better reproduce experimental results. This is beyond the scope of this work.4 Footnote 4: This narrower, taller peak is difficult to resolve without reparametrization of the width of the bulk viscosity or carefully constructing a scale-invariant prior. This too is beyond the scope of this work, but should be strongly considered in future studies. The success of the model with respect to every other observable must be highlighted: nearly every experimental measurement in nearly every observable is consistent with the posterior predictive distribution shown in Fig. 17. This was by no means guaranteed. Bayesian studies in heavy ion physics have broadly exhibited success with parametric models and fewer observables. To Figure 14: Viscous posterior with Grad viscous corrections (blue) and Chapman-Enskog viscous corrections (red) from comparison to experimental data. have a pre-equilibrium stage with microscopic physics produce such results is a thorough and non-trivial validation of the theory and implementation of IP-Glasma. This represents a step forward in rigorously constructing a hybrid model with each stage containing microscopic physics and testing it via comparison to data. ### Post-dictions and predictions with maximum a posteriori parameters Scientific models can be evaluated by how well they can describe experimental measurements in systematic model-to-data comparison, as performed up to this point, but also by how well they predict quantities to which they were not explicitly tuned. 
A model that can only describe Figure 15: 11-dimensional posterior showing marginal and joint marginal distributions with Grad viscous corrections (blue, lower triangle) and Chapman-Enskog viscous corrections (red, upper triangle) from comparison to experimental data. Values along the diagonal are the median and 95% C.I. of the 1-dimensional marginal distribution. quantities to which it is systematically compared is less useful than a model that, once compared to a carefully-selected set, makes accurate predictions. The Bayesian inference performed in this section was performed using a surrogate model trained at a large number of design points, not the underlying model itself. As a result, before moving on to predictions, it is important to explore the veracity of the MAP points in Table 3. To do this, the model is run as before, but with 6000 collision events from \(0-13\) fm rather than 2500. This increase in statistics allows for higher precision results. First, the veracity of the MAP points is determined via postdiction, in which the underlying computationally-expensive multistage model is compared to quantities used in the inference above. In the following figures, the Grad and C.-E. MAP are shown in blue and red, respectively. The MAP with temperature-dependent \(\eta/s\) is shown as a solid line while constant \(\eta/s\) is shown as a dashed line. Shaded regions denote uncertainty. The charged hadron multiplicity, Fig. 20, compares very favorably with the MAP calculations within the experimental uncertainty for all viscous correction models. A variety of identified particle multiplicities and transverse energy per rapidity slice, Fig. 21, also compare very well, albeit the proton and kaons are overestimated while the pions are underestimated. This balancing act combined with the overall charged hadron multiplicity shows that aspects of the hadron chemistry are imbalanced. As discussed previously, details such as chemical freezeout (c.f. [114; 115]) are not included in this study and are likely to particularly influence higher-mass particles, particularly kaons. The overestimation of the number of higher-mass particles in turn results in an overestimation of transverse energy. Nonetheless, the differences between the MAP calculations imply an influence of viscous corrections on the hadronic chemistry. The mean transverse momentum of identified particles, Fig. 22, further reveals the success of the model-to-data comparison while demonstrating how overestimation of multiplicity combined with good estimation of the transverse momentum results in overestimation of transverse energy. The \(\langle p_{T}\rangle\) shows less tension in the chemical makeup than previous results with the same hydrodynamic equation of state, revealing the role of bulk viscosity and a physically-motivated pre-equilibrium model with microscopic dynamics. The primary difference between Grad and C.-E. MAP calculations is in enhanced proton \(\langle p_{T}\rangle\), in which the C.-E. MAP better reproduces the experimental results. The two-particle integrated \(v_{n}\) further reveal good, albeit not perfect, reproduction of experimental results in Fig. 23. Notably, \(v_{2}\{2\}\) and \(v_{4}\{2\}\) are well described, particularly in central collisions, while \(v_{3}\{2\}\) is underestimated. The underprediction of \(v_{3}\{2\}\) is a feature of nearly every study and remains an object of continuing study. 
Peripheral \(v_{2}\{2\}\) reveal that the MAP temperature dependence of the Grad shear viscosity results in an overestimate, while the constant shear more closely reproduces the experimental centrality dependence as do the C.-E. MAP calculations. For all \(v_{n}\{2\}\), the MAP prediction of this study performs better than the previous state-of-the-art and the tension revealed here produces useful insight both into temperature-dependent \(\eta/s\) and remaining progress required in describing the geometric fluctuations that drive \(v_{3}\{2\}\). The four-particle integrated \(v_{2}\) is shown in Fig. 24, showing agreement with data until the most peripheral bin where it is overestimated, consistent with the two-particle \(v_{2}\) in Fig. 23, suggesting that these observables capture broadly similar physics and are similarly-well described by the model, although less tension is observed in \(v_{2}\{4\}\) compared to \(v_{2}\{2\}\). The correlated momentum fluctuations \(\delta p_{T}/\langle p_{T}\rangle\), also denoted \(\sqrt{C_{m}}/M\) or \(\sqrt{C_{m}}/\langle p_{T}\rangle\), in Fig. 25 are the first calculations to successfully describe this observable from a model with an IP-Glasma pre-equilibrium state and this description is consistent. The only prior work showing this calculation matches less well and does so by underestimating charged hadron multiplicity [116], while this study is able to simultaneously describe both quantities with a variety of different viscosities and viscous corrections. These fluctuations are also sensitive to the temperature dependence of the specific shear viscosity, where the constant \(\eta/s\) systematically overestimates the data while correctly reproducing the centrality dependence (itself not seen in either other calculations with IP-Glasma or in the previous state-of-the-art), while the temperature-dependent \(\eta/s\) better reproduces the data beginning in mid-central collisions. The C.-E. MAP reproduces the fluctuations more closely, save for the \(\eta/s(T)\) calculation in the most central bin, which is likely the impact of statistical fluctuations. The decomposition of higher order \(v_{n}\) further reveals the ability to simultaneously describe flow observables in Fig. 26. For every quantity other than central \(v_{4}^{L}\), both models produce successful predictions of the experimental data, with the temperature-dependent \(\eta/s\) again overpredicting peripheral flow as seen in \(v_{4}(\Psi_{2})\). Although is Figure 16: Samples from the viscous posterior for Grad viscous corrections after comparison to experimental data. often consistent with the data within uncertainty, \(v_{4}^{L}\) is overpredicted by the C.-E. MAP calculations. Nonetheless, this broad reproduction of the experimental flow decomposition suggests that the momentum-space geometry of the hybrid model successfully reproduces the physical picture in heavy ion collisions. The simultaneous reproduction of flow decomposition and event plane correlation constrains both the initial state geometry and the hydrodynamic evolution. In Fig. 27, the correlators are also well-described by the postdictions and are consistent with experimental uncertainty, save for central \(\langle\cos(2\Phi_{2}+4\Phi_{4}-6\Phi_{6})\rangle\). The purely-even correlations are particularly well described and primarily relate the conversion of event planes of initial state geometry to momentum space via hydrodynamics. 
The mixed even-odd plane correlations reveal that the fluctuation structure is well described and correlates properly with even planes. This postdiction is also well in line with the posterior predictive distributions, further supporting the accuracy of the surrogate modeling. The postdictions show that MAP parameter values are able to successfully describe the observables used in inference with never-before-seen accuracy for a multistage model with an IP-Glasma pre-equilibrium stage. This alone is a resounding success of the Bayesian inference in this study and conclusively demonstrates the performance of the Gaussian Process emulators as well as the study design. Tension is seen in the hadron chemistry, impacting the transverse energy, as well as in some of the description of the flow harmonics, notably \(v_{2}\{2\}\) and \(v_{3}\{2\}\). However, the decomposition of higher order flow is successful and the overwhelming majority of observables are well-described while the same tension is seen in \(v_{2}\{4\}\), ensuring this is effect is not a result of two-particle correlations. In the case of \(\delta p_{T}/\langle p_{T}\rangle\), successful Figure 17: Posterior predictive distribution with Grad viscous corrections (blue) and C.-E. viscous corrections (red) after comparison to data. description is shown for the first time. The impact of viscous corrections is minimal, showing that the different posteriors are accurately accounting for differences in the underlying model calculations. #### iv.2.1 Predictions Having established the power of the surrogate modeling and demonstrated successful description of a wide range of observables, it is time to turn to predictions of quantities not included in the calibration. Here, "predictions" is used to highlight that these observables were not used in systematic comparisons. As a result, the model is blind to these observables beyond information contained in other quantities. If models are differentiable at this stage, perhaps it can shed light on model quality not revealed in the more limited model-to-data comparison. In the following comparisons, centrality bins are chosen to match experimental results and predictions for bins not shown are simply due to dominance by theoretical uncertainty from a small number of events per bin. The comparisons begin with measures of event plane correlation from ALICE in Fig. 28 and ATLAS in Fig. 29. In both cases, the model predictions are very well-aligned with experimental results. Both \(\rho_{532}\) and \(\rho_{633}\) are accurately predicted within experimental uncertainty, while \(\rho_{6222}\) is accurately predicted below 30% centrality. With respect to the ALICE measurements, the MAP calculations are broadly indistinguishable. 
A similarly indistinguishable picture is painted by comparison to ATLAS measurements, where the measured event plane correlators are again broadly reproduced; the predictions with this initial state are better suited to collisions below 30% despite successful comparison to observables across the whole centrality range. These predictions outperform previous predictions made by a hybrid model with IP-Glasma [84]. A motivation for the use of IP-Glasma as a pre-equilibrium model was its success in simultaneous description of next generation observables, particularly both the event plane correlations and nonlinear response coefficients. With demonstrated success in prediction of event plane correlations not used in model-to-data comparison, predictions for nonlinear response coefficients are shown in Figs. 30 and 31. These broadly describe the experimental results within experimental uncertainty, with slight overestimation in \(\chi_{5,23}\) between 20 and 40% centrality and peripheral \(\chi_{4,22}\). In this case, the model with \(\eta/s(T)\) slightly outperforms predictions with constant \(\eta/s\), although they are often consistent within standard error.
Figure 19: Correlations between posterior predictive distributions for selected observables for central collisions. Dashed lines denote the central experimental result and x- and y-axis units are the experimental uncertainty for the respective observables. Grad viscous corrections are in blue while Chapman-Enskog viscous corrections are shown in red.
This demonstrates that a multistage model with an IP-Glasma pre-equilibrium stage is able to produce simultaneous accurate predictions of the event plane correlations and hydrodynamic response with an initial geometry broadly fixed by low-energy nuclear correlations. This strongly suggests that the hydrodynamic phase is accurately described as there is no geometric flexibility to exploit and the hydrodynamic response to geometry matches that seen in experiment.
Figure 20: Postdictions of charged hadron multiplicity at Maximum a Posteriori. Figure 21: Postdictions of identified hadron multiplicity at Maximum a Posteriori. Figure 23: Postdiction of \(v_{n}\{2\}\) at Maximum a Posteriori. Figure 24: Postdiction of \(v_{2}\{4\}\) at Maximum a Posteriori. Figure 25: Postdiction of \(\delta p_{T}/\langle p_{T}\rangle\) at Maximum a Posteriori. Figure 26: Postdiction of the decomposition of \(v_{n}\) at Maximum a Posteriori. Figure 27: Postdiction of event plane correlations at Maximum a Posteriori. Data and calculations are shifted for clarity.
The centrality dependence is also often accurately captured, such as in \(\chi_{5,23}\), which was not the case in previous calculations. Predictions for the final category of observables used in the analysis are shown in Fig. 32 for the linear and nonlinear flow decomposition. These predictions are accurate and are clearly consistent with experimental results within uncertainties, save for \(30-40\%\) \(v_{5}^{L}\). This demonstrates the continuing success of the hybrid model with IP-Glasma as it is able to both describe and predict a wide range of observables. In the \(v_{5}^{L}\) predictions, the constant \(\eta/s\) prediction is more consistent with the experimental measurement, further supporting an inconclusive preference for one model over the other as the quality of predictions depends on which observable is considered. The final \(p_{T}\)-integrated prediction is made for the modified Pearson correlation between \(v_{2}^{2}\) and \(p_{T}\), shown in Fig. 33. As no experimental results at this energy are available, preliminary results for a higher Pb-Pb collision energy system (\(\sqrt{s_{NN}}=5.02\) TeV) are used [117] for comparison. The predictions made at \(\sqrt{s_{NN}}=2.76\) TeV describe the higher-energy data and its centrality dependence well, which is not seen in T\({}_{R}\)ENTo-based hybrid model predictions and has been shown to be sensitive to nucleon size [118]. The current study has not utilized sub-nucleonic degrees of freedom and has used a nucleon size of 4 GeV\({}^{-2}\). There is no significant difference seen between predictions with different viscous corrections or between \(\eta/s\) and \(\eta/s(T)\). Even with variation of the nucleon width in previous calculations of this quantity with IP-Glasma and T\({}_{R}\)ENTo-based hybrid models, successful prediction of the value and centrality dependence has proved elusive. Hybrid models with T\({}_{R}\)ENTo + freestreaming initial states, as well as previous calculations with IP-Glasma, have sign changes as they become increasingly peripheral. This feature is not seen in the data, nor in this prediction. Based on the prediction in Fig.
33, there is no anticipated collision-energy dependence of this correlation and the IP-Glasma initial state at maximum a posteriori is able to successfully describe this observable. The lack of collision-energy dependence is supported by the comparison of Pb-Pb data at \(\sqrt{s_{NN}}=5.02\) TeV to Xe-Xe data at \(\sqrt{s_{NN}}=5.44\) TeV from ALICE [117].
Figure 28: Prediction of ALICE event plane correlations at Maximum a Posteriori. Data and calculations are shifted for clarity. Figure 29: Prediction of ATLAS event plane correlations at Maximum a Posteriori. Data and calculations are shifted for clarity. Figure 30: Prediction of ALICE nonlinear response coefficients at Maximum a Posteriori. Data and calculations are shifted for clarity. Figure 31: Prediction of the ALICE \(\chi_{6222}\) nonlinear response coefficients at Maximum a Posteriori. Data and calculations are shifted for clarity.
Of note is that it appears to not yield further constraint on the temperature dependence of \(\eta/s\). Nonetheless, comparing it directly to the previous state-of-the-art Bayesian study using a T\({}_{R}\)ENTo + freestreaming initial state, it appears that the microscopic physics of the IP-Glasma pre-equilibrium stage plays an important role. This represents a true prediction as data at \(\sqrt{s_{NN}}=2.76\) TeV has yet to be published. Up to this point, only \(p_{T}\)-integrated observables have been considered. Differential observables also exist and provide interesting and discriminating probes of the soft sector. However, the boundary between the soft sector and the hard sector (such as jets and jet-medium interactions) is unclear. By considering the integrated quantities up to now, the sensitivity of the inference to the precise location of this boundary is reduced and predictions can be made. This sensitivity is reduced because integrated observables are weighted by the multiplicity, which drops exponentially. By considering each differential \(p_{T}\) bin, this exponentially-decreasing weighting would be removed and each bin would be treated on an equal footing, in turn giving the bins on the boundary of the soft and hard sectors a higher proportional weighting. The first differential observable investigated is the differential charged hadron \(v_{n}\{2\}\), with predictions shown in Fig. 34 compared to experimental measurements from ALICE [100]. Tension is clearly present in reproducing the spectra, with predictions from integrated observables often undershooting at lower transverse momentum and overshooting at higher momenta. Nonetheless, the majority of predictions are consistent with experimental measurements for the first time or the distance from the prediction to measurement has been greatly reduced from the previous IP-Glasma state-of-the-art [84]. The greatest tension is observed in the differential \(v_{2}\{2\}\) in the \(0-5\%\) and \(30-40\%\) centrality bins and low-\(p_{T}\) \(v_{3}\{2\}\) in more peripheral collisions. This low-momentum region is expected to be the region best described by hydrodynamics, suggesting that relevant physics remains missing from the hybrid model. As \(v_{3}\{2\}\) is primarily fluctuation driven, this suggests that fluctuation structure is missing. The underestimate of \(v_{2}\{2\}\) in contrast suggests that a geometric aspect is not included or an aspect of the conversion between position-space and momentum-space geometry remains incomplete.
This is not necessarily a concern for the validity of the hydrodynamic description, as the higher-order differential \(v_{n}\) are well-described, but instead suggests that additional physics may be at play. Recent works including the differential momentum spectra suggest that their inclusion in systematic model-to-data comparison can yield insight, but various analysis errors and inclusion of momentum bins in regions where unincluded physics is relevant hinder the interpretation of results [31, 25]. The posterior predictive distribution, rather than single MAP predictions, may provide more insight into the present apparent mismatch of the model predictions and data. The light hadron multiplicity spectra, shown in Fig. 35 for selected central and mid-central centrality bins, paint a complementary picture to that of the integrated multiplicity in Fig. 21. The integrated proton and kaon multiplicity were overestimated, while the pion multiplicity was slightly underestimated; the same is found here. The momentum dependence of the spectra, however, remains well-predicted until the higher momentum region (\(p_{T}>1.75\) GeV), where mini-jets, jet showers, and other hard-sector considerations begin to gain relevance. Beginning in this region, all the identified light hadrons are underpredicted. To include these additional effects is a matter of ongoing theoretical effort and is beyond the scope of this investigation. Postdictions and predictions using four MAP calculations have been shown, comparing Grad and Chapman-Enskog viscous corrections with and without temperature-dependent shear viscosity. The inconclusive preference between viscous correction models is consistent when comparing MAP parameter sets, as is the inconclusive preference for or against temperature-dependent \(\eta/s\), in keeping with the Bayesian model comparison.
Figure 33: Prediction of correlation between \(v_{2}^{2}\) and \(p_{T}\) at Maximum a Posteriori, compared to data from a higher-energy collision. Note that data and the JETSCAPE prediction are at \(\sqrt{s_{NN}}=5.02\) TeV while the MAP predictions are at \(\sqrt{s_{NN}}=2.76\) TeV. Figure 34: Prediction of differential \(v_{n}\{2\}\) at Maximum a Posteriori for the \(0-5\%\) centrality bin (upper panel) and the \(30-40\%\) centrality bin (lower panel).
## VII Bayesian model selection
Bayesian model comparison can be used to determine if the data exhibits a preference for one model or another, if additional complexity is justified by the data, or even if the model can differentiate between pseudodata and experimental data. This is extremely valuable as it does not attempt to falsify a model, but rather puts it to a binary test to determine which model is the most useful in describing the data. To test self-consistency, as always, the first use of Bayesian Model Comparison is to determine if the model can differentiate between the pseudodata used for the previous self-consistency testing and experimental data. This hypothesizes the following scenario: a "true" model underlies the experimental data just as a known model underlies the pseudodata generated to test self-consistency. A distinct model is never expected to systematically defeat the true underlying model and, if it did, would be a sign of systematic bias. As a result, the Bayes evidence for the pseudodata is expected to be greater than the Bayes evidence for the experimental data and strong preference is expected from the Bayes factor.
This is found when comparing the model estimate of the Bayes evidence for pseudodata and true data using Grad viscous corrections: the \(\ln B\) Bayes Factor (\(\ln B\)) determining which data the model is best suited to ranges from \(118.7\pm 3.1\) to \(147.9\pm 2.3\) in favor of the pseudodata, corresponding to odds of around \(2\times 10^{56}:1\) to \(10^{62}:1\) differentiating the two data sources. Comparable, albeit slightly reduced preference is found using Chapman-Enskog viscous corrections (\(\ln B\sim 90\pm 5\)). This is an overwhelming validation of the model's ability to differentiate the data and demonstrates the self-consistency of the Bayesian model selection. It remains a sobering revelation of just how much information is not yet captured by the model. The self-consistent Bayesian model comparison can now be used to determine if the model exhibits a preference for a variety of features. For example, it can be used to test if the model demands a temperature-dependent shear viscosity by fixing the high and low temperature slopes to 0 and fixing the kink temperature to any value in the prior range as it is meaningless with no change in slope. Performing this comparison yields \(\ln B=0.2\pm 2.4\) in favor of temperature-dependent shear viscosity with Grad viscous corrections and \(\ln B=1\pm 4\) for Chapman-Enskog viscous corrections. On the Jeffreys' scale (Table 2), which provides an odds-based scale for Bayesian model comparison, this is consistent with no preference between the two models, suggesting that the evidence is still inconclusive _in favor or against_ temperature-dependent \(\eta/s\) given the data considered in this study. This suggests that further, and higher-resolution, studies are required to conclusively demonstrate the temperature dependence (or lack thereof) of \(\eta/s\) in heavy ion collisions. The lack of such preference for or against \(\eta/s(T)\) is not surprising. Hybrid models with IP-Glasma have demonstrated considerable success in describing experimental results using a constant specific shear viscosity and the viscous posteriors in this study are themselves consistent with a constant value. In the study requiring a constant \(\eta/s\), the result is well-constrained - \(\eta/s=0.137^{+0.025}_{-0.028}\) for Grad viscous corrections and \(\eta/s=0.125^{+0.021}_{-0.022}\) for Chapman-Enskog viscous corrections, where the uncertainty denotes the 95% C.I. - and with minimal covariance. By inspection, it is apparent that this is entirely consistent with the \(\eta/s(T)\) posteriors in Fig. 13 and nearly spans the full width at the narrowest point. As many Bayesian works require \(\eta/s(T)\) to strictly increase or be constant above a fixed kink temperature, this is also a useful comparison and is performed with only Grad viscous corrections as both models are consistently in agreement. To do this, \(a_{\eta,low}\) is fixed to zero as it is in those studies and \(T_{\eta,kink}\) is fixed to 0.154 GeV. Finally, \(a_{\eta,high}\)'s prior range is reduced to require it to be positive definite. Comparing the evidence for this configuration to the full study produces \(\ln B=3.8\pm 2.6\) in favor of the full study allowing for a negatively-sloped \(\eta/s(T)\). 
This corresponds to moderate-to-strong evidence on the Jeffreys' Scale in Table 2.
\begin{table} \begin{tabular}{c c c c} \hline \(|\ln B_{01}|\) & Odds & Probability & Strength of evidence \\ \hline \(<1.0\) & \(\leq 3:1\) & \(<0.750\) & Inconclusive \\ 1.0 & \(\sim 3:1\) & 0.750 & Weak evidence \\ 2.5 & \(\sim 12:1\) & 0.923 & Moderate evidence \\ 5.0 & \(\sim 150:1\) & 0.993 & Strong evidence \\ \hline \end{tabular} \end{table} Table 2: The Jeffreys’ Scale, reproduced from [119].
Figure 35: Prediction of differential light hadron multiplicity spectra at Maximum a Posteriori for the \(0-5\%\) centrality bin (upper panel) and the \(30-40\%\) centrality bin (lower panel).
Comparing the requirement of a positive-definite slope for \(\eta/s(T)\) to a constant \(\eta/s\), the Bayes factor is \(\ln B=3.6\pm 2.6\) in favor of the constant specific shear viscosity. Because the Bayes factor penalizes complexity, the additional complexity is not justified by the data. Next generation observables are employed in this study in the hope of determining the features of \(\eta/s\) and \(\zeta/s\) with greater accuracy and precision. Some studies use next generation correlations that require much greater computational expenditure to attempt to find this constraint, but suffer from parametric initial conditions [120]. It is clear from these Bayesian model comparisons that success in learning the physical specific viscosity of strongly-interacting matter will only come from combining realistic initial conditions and well-chosen observables. Promising candidates for increased constraint are \(v_{n}-p_{T}\) correlations, which are not readily calculable at the precision of this study, but further couple pre-equilibrium geometry to the hydrodynamic evolution [118; 121]. This is investigated later in this work as a prediction made at Maximum a Posteriori. Recent Bayesian works with a T\({}_{R}\)ENTo + freestreaming initial state have been finding success with smaller and smaller specific bulk viscosity [25; 31; 120], contrasting with prior studies demonstrating the need for \(\zeta/s\) to reproduce hadronic observables. By fixing \((\zeta/s)_{max}\) to zero and holding the other parameters fixed to arbitrary values as they no longer have any impact, it is straightforward to assess the demand for nonzero \(\zeta/s\). This comparison results in \(\ln B=34.4\pm 2.4\) in favor of non-zero \(\zeta/s\) when using Grad viscous corrections, corresponding to odds of \(\sim 8\times 10^{14}:1\). With Chapman-Enskog viscous corrections, this preference for the inclusion of bulk viscosity increases to \(\ln B=61\pm 5\), conclusively demonstrating that bulk viscosity is strongly justified when using a physically-motivated pre-equilibrium stage no matter the viscous corrections at particlization. The physical impacts of the lack of bulk viscosity arise in enhancement of the identified particle \(\langle p_{T}\rangle\) and the momentum fluctuations \(\delta p_{T}/\langle p_{T}\rangle\) with simultaneous suppression of \(v_{3}\{2\}\) and the three-plane correlators. The particlization temperature is also forced to the highest possible temperature allowed in the prior while the switching time to MUSIC is required to be as short as possible. This arises from a need to preserve as many initial-state fluctuations as possible as they must reproduce fluctuation-driven final-state observables. The high particlization temperature additionally preserves fluctuations by allowing for less viscous dissipation in the hydrodynamic phase.
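As a quick aid for reading the \(\ln B\) values quoted in this section, the short sketch below converts a log-Bayes factor into the odds and probability columns of the Jeffreys' scale in Table 2; it is a plain numerical convenience, not part of the analysis pipeline.

```python
import math

def bayes_factor_summary(ln_B: float):
    """Convert a log-Bayes factor ln B into odds and the probability of the favored model."""
    odds = math.exp(abs(ln_B))          # odds in favor of the preferred model
    probability = odds / (1.0 + odds)   # assuming equal prior odds for the two models
    return odds, probability

# Examples matching the rows of the Jeffreys' scale (Table 2) and the bulk-viscosity comparison
for ln_B in (1.0, 2.5, 5.0, 34.4):
    odds, prob = bayes_factor_summary(ln_B)
    print(f"ln B = {ln_B:5.1f}  ->  odds ~ {odds:.3g}:1,  probability ~ {prob:.3f}")
```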
Comparing the relative likelihood of the viscous correction models is a useful way to assess model applicability and begin to quantify the uncertainty introduced by the choice of viscous correction. Comparing Grad and Chapman-Enskog viscous corrections to data with none of the parameters held fixed, the relative preference for the Grad over the Chapman-Enskog RTA viscous corrections is \(\ln B=-0.1\pm 3.1\) in imperceptibly-slight favor of Grad viscous correction, although this should be interpreted as the models being indistinguishable in this analysis. This indistinguishable nature of the viscous correction models deserves further study. The posteriors, as shown previously, are quite similar but not identical, but are equally well-suited to experimental measurements. As a result, the viscous corrections chosen in a study are an important source of theoretical uncertainty to quantify and not doing so results in an artificially precise posterior. Progress in adding additional constraining observables must not neglect quantification of uncertainty as a parallel goal lest analyses fall into the trap of the bias-variance tradeoff. The goal is not to constrain these quantities the most precisely, but to do so both accurately _and_ precisely. By not including sources of theoretical uncertainties, an analysis focuses on the latter and sacrifices the former. A natural question to ask is why is the model preference between the Grad and Chapman-Enskog viscous corrections indeterminate in this study where, in the only other application of Bayesian model comparison, it was strongly in favor of the Grad \(\delta f\)[29; 30]? The answer is essentially two-fold: first, the viscosity in the previous study were larger at particlization and as a result, enhanced the effect of the corrections; and second, the inclusion of dynamics in the pre-equilibrium stage means that observables are less sensitive to the hydrodynamic viscosity. As the viscosity is not wholly responsible for introducing momentum-state anisotropy, for example, the viscous corrections at particlization impact the observables less, in turn resulting in less model preference between modeling choices which should be small effects by construction. The lack of preference between the two models of viscous corrections is in accordance with prior theoretical expectations. The most likely value in the 11-dimensional parameter space is the Maximum a Posteriori estimate, determined by numerical optimization on the MCMC chain. As the Bayesian model comparison exhibits no preference for or against temperature-dependent specific shear viscosity, estimates of the MAP are provided for both temperature dependence and a lack thereof in Table 3. A variety of interesting features arise in Table 3. First, the lattice QCD estimate of crossover temperature - \(T_{c}=155\pm 1.5\) MeV - is consistent with both Grad MAP estimates of the particlization temperature using Grad viscous corrections, with the MAP estimate from constant \(\eta/s\) nearly identical to the central lattice estimate. Using Chapman-Enskog viscous corrections results in a slightly lower estimate of the particlization temperature, but still close to the estimated crossover temperature, suggesting that the hadrons may behave hydrodynamically for a brief period after recombination. Next, the switching time \(\tau_{0}\) is consistent with IP-Glasma's pressures having come to a steady state (see Fig. 
1) and with sufficient time for the build-up of pre-equilibrium dynamics that was hypothesized to be of critical importance in describing the strongly-interacting medium. The parameter \(\mu_{Q_{s}}\) relating the saturation scale \(Q_{s}\) to the color charge density profile has a posterior distribution shown in Fig. 15 corresponding to a MAP estimate reported in Table 3. Note the posterior distribution obtained here for \(\mu_{Q_{s}}\) overlaps that reported in [122] for a fixed number of hot spots. The value of the specific shear viscosity is broadly consistent with other Bayesian results and the constant \(\eta/s\) is very close to past "chi-by-eye" fits of 0.13. The bulk viscosity maximum and width are consistent with a large, peaked bulk viscosity, further supporting a consistent picture between theoretical expectations and prior modeling success. The asymmetry of the bulk viscosity is of interest as it suggests a bulk viscosity peaked at high temperature and slowly decreasing as it approaches the particlization temperature, where it is well-constrained by the data to be small. While the MAP estimates for the bulk viscosity differ in their parameters between \(\eta/s\) and \(\eta/s(T)\), the actual value at any temperature differs by a maximum of \(\sim\)10% below the region where it nears the lower peak location at \(T\approx 0.28\) GeV. The MAP estimates are used to make predictions of observables not used in the model-to-data comparison. Strictly speaking, this is due to computational limitation: the most appropriate comparison is a full posterior predictive distribution with perhaps a surrogate model trained on a reasonable quantity of high-statistics calculations. At the same time, the MAP estimates are the recommended parameters for use in other studies, such as hard sector studies of jet-medium interactions or photon/dilepton calculations, and therefore represent a faithful picture of how the model will be used in practice. ### Bayesian Model Averaging In Bayesian model comparison, the question under investigation is "which model is best suited to the data?" This informs which model to use and how best to use it. A related question is "given two models, how does one best estimate the truth?" For this, Bayesian model averaging (BMA) is employed. In Bayesian model averaging, two posteriors are combined using a weighted average in which the weights are the Bayes evidence [123]. In a simplified example, if two models are equally likely, then the truth is most likely to be in the region where the model posteriors overlap. This is formalized as \[p_{BMA}(x|y)\propto\sum_{i}p_{i}(y)p_{i}(x|y) \tag{12}\] for models indexed \(i\). BMA was first used in heavy ion collisions to perform model averaging of the transport coefficients and later for model averaging of non-viscous parameters [43; 29]. The BMA viscous posteriors are shown in Fig. 36 along with the Kullback-Leibler divergence, which quantifies the distance between two distributions and is used here to calculate the information gained from the prior to the BMA posterior [124]. The BMA posterior for non-viscous parameters is shown in Fig. 37. The BMA viscous posterior clearly demonstrates the value of accounting for the uncertainty due to viscous corrections at particlization by showing the state of knowledge by considering both simultaneously. 
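To make Eq. (12) and the information gain quoted in bits concrete, the sketch below forms a model-averaged posterior from two sets of MCMC samples weighted by their Bayes evidences, and estimates the prior-to-posterior Kullback-Leibler divergence for one parameter. The variable names, binning, and resampling strategy are illustrative assumptions, not the analysis code used in this study.

```python
import numpy as np

def bma_resample(samples_a, samples_b, ln_Z_a, ln_Z_b, n_draws=10_000, rng=None):
    """Draw from p_BMA(x|y) proportional to sum_i p_i(y) p_i(x|y) for two models (Eq. 12)."""
    rng = rng or np.random.default_rng(0)
    ln_Z = np.array([ln_Z_a, ln_Z_b])
    w = np.exp(ln_Z - ln_Z.max())
    w /= w.sum()                                   # evidence weights p_i(y), normalized
    pick_b = rng.random(n_draws) < w[1]            # which model each BMA draw comes from
    draws = np.empty((n_draws, samples_a.shape[1]))
    draws[~pick_b] = samples_a[rng.integers(len(samples_a), size=(~pick_b).sum())]
    draws[pick_b] = samples_b[rng.integers(len(samples_b), size=pick_b.sum())]
    return draws

def kl_bits(posterior_1d, prior_1d, n_bins=40):
    """Histogram estimate of D_KL(posterior || prior) in bits for one parameter."""
    lo = min(prior_1d.min(), posterior_1d.min())
    hi = max(prior_1d.max(), posterior_1d.max())
    bins = np.linspace(lo, hi, n_bins + 1)
    p, _ = np.histogram(posterior_1d, bins=bins)
    q, _ = np.histogram(prior_1d, bins=bins)
    p = p / p.sum()
    q = q / q.sum()
    mask = (p > 0) & (q > 0)                        # skip empty bins to avoid log(0)
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Toy example with two hypothetical one-parameter chains of nearly equal evidence
rng = np.random.default_rng(1)
grad_chain = rng.normal(0.136, 0.015, size=(50_000, 1))
ce_chain = rng.normal(0.125, 0.012, size=(50_000, 1))
bma = bma_resample(grad_chain, ce_chain, ln_Z_a=0.0, ln_Z_b=-0.1)
prior = rng.uniform(0.01, 0.30, size=200_000)
print(f"BMA median = {np.median(bma):.3f}, information gain ~ {kl_bits(bma[:, 0], prior):.2f} bits")
```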
The two models contribute their constraint throughout the temperature evolution of both \(\zeta/s\) and \(\eta/s\), although the impact is clearer in the specific bulk viscosity due to the differences in constraint between the two models. Particularly of interest is that BMA leverages the information content of both models to produce a more-constrained 60% C.I. than either model independently, demonstrating how to address the bias-variance tradeoff with multiple models in a rigorous way. The KL Divergence in Fig. 36 is also of note: information on the viscosity is gained by comparing to data over the entire temperature region considered, decreasing at higher temperatures that are probed more briefly and earlier in the collision evolution. This is consistent with the only other study to investigate this, but has substantially increased the amount of learning from prior to posterior. As the hydrodynamic, particlization, and hadronic cascade stages were intentionally chosen to be identical, this difference can be ascribed to microscopic physics in the pre-equilibrium evolution of the plasma.
\begin{table} \begin{tabular}{l c c c c} Parameter & Grad \(\delta f\), \(\eta/s\) & Grad \(\delta f\), \(\eta/s(T)\) & C.–E. \(\delta f\), \(\eta/s\) & C.–E. \(\delta f\), \(\eta/s(T)\) \\ \hline \(\mu_{Q_{s}}\) & 0.72341 & 0.70808 & 0.72654 & 0.70858 \\ \hline \(\tau_{0}\) [fm] & 0.52127 & 0.51291 & 0.40142 & 0.55159 \\ \hline \(T_{\eta,\rm kink}\) [GeV] & 0.150 & 0.22333 & 0.150 & 0.21123 \\ \hline \(a_{\eta,\rm low}\) [GeV\({}^{-1}\)] & 0.000 & -0.16259 & 0.000 & 0.65272 \\ \hline \(a_{\eta,\rm high}\) [GeV\({}^{-1}\)] & 0.000 & -0.80217 & 0.000 & -0.89472 \\ \hline \((\eta/s)_{\rm kink}\) & 0.13577 & 0.13944 & 0.12504 & 0.14888 \\ \hline \((\zeta/s)_{max}\) & 0.28158 & 0.22085 & 0.17391 & 0.20117 \\ \hline \(T_{\zeta,c}\) [GeV] & 0.31111 & 0.29198 & 0.2706 & 0.25455 \\ \hline \(w_{\zeta}\) [GeV] & 0.02878 & 0.03625 & 0.05255 & 0.04506 \\ \hline \(\lambda_{\zeta}\) & -0.96971 & -0.56235 & -0.14178 & 0.06408 \\ \hline \(T_{\rm sw}\) [GeV] & 0.15552 & 0.15429 & 0.15069 & 0.1513 \\ \hline \end{tabular} \end{table} Table 3: Maximum a Posteriori estimates with Grad’s 14-moment and Chapman-Enskog RTA viscous corrections. Estimates with (denoted \(\eta/s(T)\)) and without (denoted \(\eta/s\)) temperature-dependent specific shear viscosity are reported.
This difference in dynamics is most pronounced at early times in the evolution, roughly corresponding to higher temperatures, where the increased constraint is found. The KL Divergence is also non-monotonic for the BMA posterior, which is commensurate with increased constraint around the peak of \(\zeta/s\) and the kink of \(\eta/s\) as well as constraint of \(\zeta/s\) near particlization. This demonstrates model sensitivity to key phenomenological features and less constraint otherwise. The non-viscous BMA posterior demonstrates this as well and it can be seen in the joint marginal distributions that the BMA posterior is in between those of the two underlying models. This incorporates this source of modeling uncertainty and is the most precise and accurate physical understanding of these quantities yet. The non-viscous posteriors for the Grad and C.-E. viscous correction models are quite similar, demonstrating that these are robust to a modeling choice intended to be a small correction.
The largest difference between the two models is in the particlization temperature, which is robustly accounted for in the BMA posterior, and the median value remains consistent between the models.
## VIII Summary and conclusion
This study has implemented rigorous Bayesian model-to-data comparison with IP-Glasma for the first time, incorporated transfer learning for the first time, and has demonstrated ongoing inconclusive preference between viscous correction models and between temperature-dependent and temperature-independent shear viscosity. A large number of postdictions and predictions are shown at maximum a posteriori and should be considered the new state-of-the-art theoretical result to which future measurements and calculations should be compared. The posterior distributions are the main results of this study and are our current best estimate of the properties of strongly-interacting matter in ultra-relativistic heavy ion collisions.
Figure 36: Bayes Model Averaged viscous posterior shown with Grad 90% credible interval (blue) and Chapman-Enskog 90% credible interval (red) and the Kullback-Leibler Divergence quantifying information gain from the priors to the BMA posterior in bits (bottom panels). Figure 37: Bayes Model Averaged posterior for non-viscous parameters (orange) shown with Grad (blue) and Chapman-Enskog (red). The lowest contour shown is the \(5^{th}\) percentile.
From a performance point of view, the improved sampling procedure (the ordered Maximum Projection Latin Hypercube, compared with the maximin Latin Hypercube) resulted in a more rapid sampling that covered the design space coupled with an increased fidelity of the surrogate modeling. This large scale simulation involved varying parameters of IP-Glasma, the transport coefficients of the relativistic fluid dynamics phase, and the particlization temperature. A hybrid model with an IP-Glasma initial state was constructed, and closure tests were able to recover input parameters of IP-Glasma, demonstrating self-consistency. This important step has shown that the sensitivity of the chosen final state hadronic observables to the pre-equilibrium phase was sufficient to reliably extract accurate information. The self-consistency of the subsequent phases and elements of the hybrid model had been established in earlier studies, but this work establishes the relevance of IP-Glasma for a large scale statistical study. We have used Bayesian model comparison and Bayesian model averaging to establish the most likely values of the physical quantities included in this work. Special emphasis was put on the specific shear and bulk viscosity coefficients. The temperature dependence of the specific shear viscosity remains statistically indeterminate - i.e. statistically consistent with being flat. Note that several previous calculations with viscous hydrodynamics following IP-Glasma have produced successful phenomenology with a constant specific shear viscosity [7]. On the other hand, the specific bulk viscosity, \(\zeta/s\), was found to be somewhat larger than found in similar previous studies [29; 30; 31; 25], and peaked during the hydrodynamics phase. Importantly, it is strongly inconsistent with being zero. It is clear that the theoretical effort in the field is moving closer to true _ab initio_ modelling of relativistic heavy-ion collisions.
What this work also makes clear is that the physical quantities deduced from the analysis of the final states are influenced by the physics of the very early stages of the hadronic reaction. This is true in the case of hadrons, as emphasized in this study, as it is for electromagnetic variables [125; 126; 127; 35]. As is often the case in fields with a plenitude of data, Bayesian model averaging remains the current state-of-the-art in heavy-ion collisions for leveraging the information in multiple models to best constrain the physical understanding of strongly-interacting matter without over-fitting. This is only the second study in this field, following [29] and elaborated on in [43], to utilize BMA for improving uncertainty quantification, and it has further demonstrated its importance. Further sources of unquantified uncertainty still exist in heavy ion collisions, usually at the interface between models at each stage in the evolution of the fireball, but how to incorporate such interface effects in BMA is not yet clear. A strong focus in studying the strongly-interacting matter produced in heavy ion collisions has been to improve the precision of the models; it should be emphasized that the pursuit of arbitrary precision without accounting for sources of uncertainty using techniques such as BMA is a perilous path: it does not fully leverage the information available, and could lead to bias. Simultaneous consideration of observables and uncertainty quantification is required for reliable inference of the physical properties of strongly-interacting matter. ###### Acknowledgements. We are happy to acknowledge useful exchanges with members of the JETSCAPE Collaboration. In addition, we are grateful for useful discussions with S. Bass, D. Everett, D. Liyanage, S. McDonald, N. Miro-Fortier, and M. Singh. This work was funded in part by the Natural Sciences and Engineering Research Council of Canada, in part by the U.S. Department of Energy Grant no. DE-FG-02-05ER41367, and in part by Vanderbilt University. Computations were made on the Beluga, Cedar, Graham, Niagara, and Narval supercomputers managed by Calcul Quebec, SciNet, WestGrid, and other members of the Digital Research Alliance of Canada.
2309.02887
A deep Natural Language Inference predictor without language-specific training data
In this paper we present a technique of NLP to tackle the problem of inference relation (NLI) between pairs of sentences in a target language of choice without a language-specific training dataset. We exploit a generic translation dataset, manually translated, along with two instances of the same pre-trained model - the first to generate sentence embeddings for the source language, and the second fine-tuned over the target language to mimic the first. This technique is known as Knowledge Distillation. The model has been evaluated over machine translated Stanford NLI test dataset, machine translated Multi-Genre NLI test dataset, and manually translated RTE3-ITA test dataset. We also test the proposed architecture over different tasks to empirically demonstrate the generality of the NLI task. The model has been evaluated over the native Italian ABSITA dataset, on the tasks of Sentiment Analysis, Aspect-Based Sentiment Analysis, and Topic Recognition. We emphasise the generality and exploitability of the Knowledge Distillation technique that outperforms other methodologies based on machine translation, even though the former was not directly trained on the data it was tested over.
Lorenzo Corradi, Alessandro Manenti, Francesca Del Bonifro, Francesco Setti, Dario Del Sorbo
2023-09-06T10:20:59Z
http://arxiv.org/abs/2309.02887v1
# A deep Natural Language Inference predictor without language-specific training data ###### Abstract In this paper we present a technique of NLP to tackle the problem of inference relation (NLI) between pairs of sentences in a target language of choice without a language-specific training dataset. We exploit a generic translation dataset, manually translated, along with two instances of the same pre-trained model -- the first to generate sentence embeddings for the source language, and the second fine-tuned over the target language to mimic the first. This technique is known as Knowledge Distillation. The model has been evaluated over machine translated Stanford NLI test dataset, machine translated Multi-Genre NLI test dataset, and manually translated RTE3-ITA test dataset. We also test the proposed architecture over different tasks to empirically demonstrate the generality of the NLI task. The model has been evaluated over the native Italian ABSITA dataset, on the tasks of Sentiment Analysis, Aspect-Based Sentiment Analysis, and Topic Recognition. We emphasise the generality and exploitability of the Knowledge Distillation technique that outperforms other methodologies based on machine translation, even though the former was not directly trained on the data it was tested over. Keywords:Natural Language Inference Knowledge Distillation Domain adaptation ## 1 Introduction Natural Language Processing (NLP) has gained huge improvements and importance in the last years. It has many different applications as it helps in many ways human language productions understanding and analysis in an automated manner. Natural Language Inference (NLI) is one of these applications: it is the task of determining the inference relation between two short texts written in natural language, usually defined as _premise_ and _hypothesis_[4, 21]. This implies the extraction of the meaning of the two texts and then evaluating if the _Premise_ (P) entails the _Hypothesis_ (H) (_entailment_ situation), if the _premise_ and the _hypothesis_ are in contradiction between each other (_contradiction_ situation), or if none of these two situations happen and there is no inference relation among the two texts (_neutral_ situation). This is a challenging task that requires understanding the nuances of language and context, as well as the ability to reason and make logical implications. The relevance of this task can be easily understood by highlighting some of its possible applications. Common tasks based on NLI are Aspect-Based Sentiment Analysis (ABSA), Sentiment Analysis (SA), and Topic Recognition (TR) described in Sec.4. All these tasks, when approached with NLI strategy, are tackled by comparing an input text (e.g., "We really enjoyed the food, it is tasty and cheap, the staff was very nice and kind. However the restaurant is very hard to reach.") and an hypothesis about the input text (e.g., "The position of the restaurant is difficult to be reached") and predicting if the input text either entails, contradicts, or is not related to the hypothesis ("Entailment" is the correct prediction in the previous example). A common problem for many NLP tasks is the fact that the developed models usually require a big amount of natural language productions data, and usually they are made available in the English language. 
There are many languages that are underrepresented in these NLP dataset contexts, which makes interesting tools hard to develop in those languages; this needs to be addressed to make such advances available for under-represented languages too. Data scarcity may be tackled with different strategies, and this work describes some of them in relation to the NLI task in the Italian language. The goal of this research is to build a model with the following traits: (a) it can perform the NLI task in a specific language; (b) it is based on the sentence embedding operation, such as in [16]; (c) it is able to understand another language, in this case Italian; (d) it is general and does not require any re-training for each specific industrial task (ABSA, SA, TR). The Research Question (RQ) that drives our effort is: **RQ: Is it possible to build an NLI model with acceptable performance on NLI-related downstream tasks in the Italian language, compliant with constraints (a), (b), (c), (d), without requiring a language-specific dataset?** The main difference among the proposed models to achieve these aims is the training approach: one is based on Knowledge Distillation (KD) [17], a technique which aims to transfer knowledge from a _Teacher_ model (an English-based NLI model) to a _Student_ model (that will handle the Italian language). The other model includes a step for the dataset translation from English to Italian by means of a Machine Translation model [18]. The approach exploiting KD has been demonstrated to have NLI capability in the target language, namely Italian, without being exposed to an NLI training dataset in Italian. This model has been named **I-SPIn** (**I**talian-**S**entence **P**air **I**nference) and is available at this link along with all instructions for usage. The remainder of this paper is structured as follows: in Sec. 2 we report the literature and datasets discussion, Sec. 3 describes the two implemented approaches, Sec. 4 reports the settings and results of the performed experiments, and Sec. 5 reports the discussion and conclusions about this work.
## 2 Related Work
Common approaches to tackle NLI include Neural Networks, such as Recurrent Neural Networks, or Transformer-based methods [6, 5]. [6] presents an architecture based on learning _Hypothesis_ and _Premise_ in a dependent way, using a bidirectional LSTM and an Attention mechanism [1] to extract the text pair representation needed for the final classification. The obtained results on the SNLI [4] validation set give an 89% accuracy. [5] describes the language representation model BERT, built on Transformers [20, 22] and pre-trained in a bidirectional way, with the aim of serving as a pre-trained model that can be fine-tuned on several different tasks, including NLI. BERT is fine-tuned and tested on the MNLI dataset [21] and reaches around 86% accuracy. Recent research [15, 23] demonstrates that Transformer models [20] are more suited for the NLI task, consistently surpassing neural models [22]. All of these high-performance approaches mainly hold for the English language, as it is the language with the highest data availability. Some multi-language NLI approaches are proposed in [24], where cross-lingual training, multilingual training, and meta learning are attempted using a dataset extracted from the Open Multilingual WordNet. The best model turned out to be the one exploiting meta learning, reaching 76% accuracy on the True/False classification task of text pairs for the Italian language.
[10] represents another work on multi-language NLI, where the Excitement Open Platform is presented as open-source software for experimenting with NLI-related tasks. It has many linguistic and entailment components based on transformations between _Premise_ (P) and _Hypothesis_ (H), edit distance algorithms, and a classification using features extracted from P and H. The Italian language is tested on a manually translated RTE-3 dataset [7] and the best model has 63% accuracy. In the context of an Italian Textual Entailment competition, the task of Recognizing Textual Entailment (RTE) is proposed. It is similar to the NLI task but it only contains two Yes/No Entailment classes. The competition's winning model is described in [3]; it is based on the open-source software EDITS, which relies on edit distance, and reaches 71% accuracy on the EVALITA 2009 dataset, which is extracted from Wikipedia. [13] presents a model based on translation. The input texts can be in any language and are translated into English using a standalone machine translation system. The authors show that machine translation can be used to successfully perform NLI-related tasks, even when P and H are provided in different languages. For Italian, it uses Bing translation and it is tested on the EVALITA 2009 dataset, reaching 66% accuracy.
#### 2.0.1 Datasets
The datasets used in this work are described in this paragraph and examples can be found in Appendix 0.A. The Stanford NLI (SNLI) [4] corpus is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels "entailment", "contradiction", and "neutral", supporting the task of NLI. The SNLI dataset presents the canonical dataset split -- consisting of train, validation, and test sets. The Multi-Genre NLI (MNLI) [21] corpus is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus is modeled on the SNLI corpus, but differs in that it covers a range of _genres_ of spoken and written text. The training set is composed of sentences from the following genres: "Telephone", "Fiction", "Government", "Slate" and "Travel". The MNLI dataset supports a distinctive cross-genre generalisation evaluation. There is a matched validation set, which is derived from the same sources as the training set, and a mismatched validation set, which does not closely resemble any genres seen at training time. The RTE datasets (RTE3-ITA and RTE2009) are English-native NLI datasets, manually translated by a community of researchers. The Italian version RTE3-ITA refers to the third refinement of this dataset3. Instead, RTE2009 was submitted for the EVALITA 2009 Italian campaign [2]4. These datasets are only used for testing, since they contain too few observations to be suitable for training. The RTE3-ITA dataset contains 1600 observations, whereas RTE-2009 contains 800 observations. Unlike classical NLI, these datasets present only two labels: "Entailment" and "No-Entailment". Footnote 3: The validation and test datasets can be downloaded at this link. TED2020 [14] is a generic translation dataset. The English-Italian option has been selected for training among more than a hundred possible languages. The dataset consists of more than 400k parallel sentences. The transcripts have been translated by a community of volunteers. This dataset is used to make a model understand different languages [17], starting from a language known to the model.
## 3 Method
Three different architectures will be detailed throughout the section.
In Sec. 3 the objective is to obtain a model that is able to perform NLI in English. Starting from this model, we propose two parallel approaches to perform NLI in the target language. One is detailed in Sec. 3, and the other is detailed in Sec. 3. Both approaches attempt a domain adaptation and generalisation in the target language -- namely, Italian -- while lacking a language-specific dataset. The models' parameters were selected among a few different possibilities suggested by online informal documentation and literature. No cross-validation or grid-search analyses have been performed, due to computational constraints. Therefore, no guarantees on the optimality of the parameters can be made. To reduce computational complexity during the inference phase for the models described in Sec. 3 and Sec. 3, we recommend splitting the model to obtain independent instances of encoder and classifier. The proposed methodology is the following: first, transform all the sentence pairs into vectorial form with the encoder; in a second phase, the classifier will receive the embeddings to return an inference relation.
#### 3.2.1 NLI training in the source language
The proposed solution makes use of a Transformer [20]. The Transformer has lately become the state-of-the-art architecture for NLP, as detailed in Sec. 2. The first step of our methodology is to retrieve a sentence encoder model, based on Transformers. This sentence encoder model is already fine-tuned for general purposes over different languages. The encoder of choice to transform sentences into vectors was Sentence-BERT [16]. It is a fine-tuning of BERT [5], a word-embedding Transformer model, tailored for the task of sentence embedding. It has the ability to perform sentence embedding faster than BERT, as detailed in [16]5, by means of a Siamese training approach [19]. Referring to this model with the term Sentence-BERT is inappropriate, since it has been fine-tuned on RoBERTa [9], a larger counterpart of BERT. Hence, the name Sentence-RoBERTa would be more appropriate. In this paper we will adopt the name Sentence-BERT to refer to any Siamese structure accepting a sentence pair as input, including the instance of Sentence-RoBERTa to be fine-tuned. Since Transformers are computationally expensive to train from scratch, we decided to test a multilingual version of Sentence-BERT and fine-tune it on SNLI and MNLI merged together to create a single NLI dataset. After a fine-tuning session over the merged NLI dataset, the result is a model based on Transformers that can proficiently address the NLI task -- only in English though, despite being originally trained on multiple languages. More information about this work is available at [11]. Footnote 5: Sentence-BERT was downloaded from this link. The output of the fine-tuned Sentence-BERT is composed of an embedding pair, containing a vectorial representation of the premise and the hypothesis. Note that the sentence encoder model is invoked two separate times for this operation, for complexity optimisation reasons. The Sentence-BERT output embeddings have been further transformed to maximise and emphasise the relevant information for our task. In detail, the following operations have been applied (a minimal sketch is given right after this list):
* Element-wise product. Captures similarity of the two embeddings, and highlights components of the embeddings that are more relevant than others.
* Difference. Asymmetric operation; captures the direction of implication. We want the hypothesis to imply the premise, and not vice-versa.
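The following minimal sketch shows how such a feature vector can be assembled from the premise and hypothesis embeddings before classification; the tensor names, embedding dimension, and batch size are illustrative assumptions rather than the exact implementation.

```python
import torch

def nli_features(premise_emb: torch.Tensor, hypothesis_emb: torch.Tensor) -> torch.Tensor:
    """Combine two sentence embeddings of shape (batch, dim) into one feature vector.

    The element-wise product captures similarity between the embeddings,
    the difference captures the (asymmetric) direction of implication,
    and the two are concatenated for the downstream classifier.
    """
    product = premise_emb * hypothesis_emb      # element-wise product
    difference = premise_emb - hypothesis_emb   # asymmetric difference
    return torch.cat([product, difference], dim=-1)

# Example: a batch of 2 sentence pairs with 768-dimensional embeddings
u = torch.randn(2, 768)
v = torch.randn(2, 768)
features = nli_features(u, v)   # shape (2, 1536), fed to the feed-forward classifier
```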
The two transformed embeddings were concatenated and passed as input to a fully-connected Feed Forward architecture of six (6) layers, detailed in Appendix 0.B, with three (3) outputs ("Entailment", "Neutral", "Contradiction"), to predict the probability of the sentence pair belonging to each NLI class. Finally, a softmax function was applied to the three-dimensional vector to obtain the class probabilities (Fig. 1).
Figure 1: Model structure. Two sentences are transformed into embeddings. The embeddings are compared with a classifier to get the prediction for the sentence pair.
Execution-wise, the NLI fine-tuning task on a Tesla P100-PCIE-16GB GPU was completed in approximately six (6) hours on the merged NLI training dataset composed of an ensemble of the SNLI and MNLI datasets, accounting for more than 1M observations. The main parameters can be found in Appendix 0.B. In our work, we want to enable a multilingual Transformers-based model, previously fine-tuned for a specific task only in one specific language, to proficiently address that specific task in another language.
#### 3.0.1 Knowledge Distillation in the target language
The second step of our methodology is to employ a training strategy that does not require language-specific NLI training data; we selected the Knowledge Distillation (KD) [17] approach. KD was born as a model compression technique [8], where knowledge is transferred from the teacher model to the student by minimizing a loss function, in which the target is the distribution of class probabilities predicted by the teacher model. KD is a powerful technique since it can be used for a variety of tasks. In our experiments, we employed KD to perform NLI in the target language, with the objective of forcing a translated sentence to have the same embedding -- i.e. location in the vector space -- as the original sentence. The soft targets of the teacher model constitute the labels to be compared with the predictions returned by the student model. The task at hand may fall within the domain adaptation problem sphere. We require a teacher model (encoder) \(T\), that maps sentences in the source language to a vectorial representation. Further, we need parallel (translated) sentences \(D=((source_{1},target_{1}),...,(source_{n},target_{n}))\) with \(source_{j}\) being a sentence in the source language and \(target_{j}\) being a sentence in the target language. We train a student encoder model \(S\) such that \(T(source_{j})\approx S(target_{j})\). For a given mini-batch \(B\), we minimise the Mean Squared Error loss function: \[MSE(S,T,B)=\frac{1}{|B|}\sum_{j\in B}\left(T(source_{j})-S(target_{j})\right)^{2} \tag{1}\] Two instances of the encoder described in Sec. 3 have been taken for the experiment. One acts as teacher encoder model \(T\), the other as a student encoder model \(S\). The application of KD has the objective to share the domain knowledge of the teacher encoder model with the student encoder model, and at the same time learn a new vectorial representation for the target language. A schematic representation is provided in Fig. 2. The obtained NLI classifier, able to understand Italian, accepts a sentence pair to output an NLI label. Execution-wise, the KD task on a Tesla P100-PCIE-16GB GPU was completed in approximately five (5) hours on the TED2020 (English-Italian) dataset consisting of more than 400k parallel sentences. The main parameters can be found in Appendix 0.B.
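A minimal sketch of this distillation step, under the assumption that the teacher and student encoders are callables returning fixed-size embedding tensors, is shown below; the optimiser settings and batch handling are illustrative and not the actual training configuration.

```python
import torch
from torch import nn

def distill_step(teacher: nn.Module, student: nn.Module,
                 source_batch, target_batch,
                 optimizer: torch.optim.Optimizer) -> float:
    """One Knowledge Distillation step: make S(target) mimic T(source), as in Eq. (1)."""
    teacher.eval()
    with torch.no_grad():                      # teacher embeddings are fixed targets
        teacher_emb = teacher(source_batch)    # (batch, dim) embeddings of English sentences
    student_emb = student(target_batch)        # (batch, dim) embeddings of Italian sentences
    loss = nn.functional.mse_loss(student_emb, teacher_emb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this setup the student can be initialised from the same multilingual checkpoint as the teacher, so the procedure only has to pull target-language sentences towards the embedding space already shaped by the NLI fine-tuning.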
#### 3.0.2 Machine Translation in the target language
As an alternative method for our second step, we employ a Large Language Model named No Language Left Behind (NLLB) [18] to address the lack of language-specific NLI training data. To the best of our knowledge, it was not possible to find a comprehensive NLI dataset in Italian. The RTE3-ITA and RTE-2009 datasets, both detailed in Sec. 2, together present about 2500 observations, too few to train a Deep Learning model. Therefore, the dataset used to fine-tune this architecture is the same as in Sec. 3, with an alteration: we perform a translation of the dataset. In fact, the simplest way to perform NLI in a language other than English is to machine translate the ensemble NLI dataset, consisting of SNLI and MNLI merged together. Note that, for memory and performance optimisation, the ensemble NLI training dataset was dynamically translated during execution by invoking the NLLB model for each mini-batch. Execution-wise, this fine-tuning task over the target language on a Tesla P100-PCIE-16GB GPU was completed in approximately ten (10) hours on the translated ensemble NLI dataset, consisting of more than 1M sentence-pairs. The main parameters can be found in Appendix 0.B.
Figure 2: Knowledge Distillation. The teacher encoder model receives source sentences, the student model receives target sentences. The student encoder model is updated with new information from the teacher.
## 4 Experiments
#### 4.0.1 NLI results in the source language
The architecture discussed in Sec. 3 has been tested over the standard NLI task in English. For SNLI the accuracy reached 80.69%, while for MNLI a 77.00% accuracy is reached.
#### 4.0.2 NLI results in the target language
The architecture discussed in Sec. 3 -- that is the main focus of this paper -- has been tested over the standard NLI task in Italian, and compared with the alternative architecture based on Machine Translation. The underlying model, an open-source machine translation model developed by Facebook, named No Language Left Behind [18], was also exploited to obtain a comprehensive Italian NLI dataset, suitable for testing. Results for the SNLI and the MNLI test sets (both translated in Italian) are detailed in Tab. 1. SNLI results in Tab. 1 are not far from the theoretical accuracy cap these models have -- presented above for NLI in the source language. This could be interpreted as a success for the training of both architectures. The Min F1-Score metric captures the most misclassified class. The Neutral class, in general, has been the most challenging to classify, as translation biases may slightly change the connotation of a sentence. Note that this test is biased towards the Machine Translation-based architecture. Remember that this architecture has been fine-tuned over the translated NLI dataset in the target language; the KD-based architecture, instead, had never seen the NLI dataset in the target language. This suggests that the KD-based architecture may have relevant learning capabilities over this task. Differently from the SNLI dataset, we briefly remark that the MNLI datasets are divided into genres, and support a distinctive cross-genre generalisation evaluation by means of the mismatched validation set. A higher accuracy on the mismatched validation set corresponds to a better generalisation of the model. In the same way as before, also for this test the Machine Translation-based architecture had an objective advantage, by being trained on the same dataset it was tested over.
Nonetheless, the KD-based architecture performed better in this test. This dataset tests a model's generalisation capability and its ability to understand a wide range of contexts, as it contains multiple genres. This could be a motivation to consider the KD-based architecture the more powerful of the two. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Dataset & Task & Acc. & Min F1 & Macro-Avg F1 \\ \hline SNLI (IT) & NLI & 74.21\% (-1.83\%) & 67.19\% (-4.34\%) & 74.08\% (-4.94\%) \\ \hline MNLI-Mismatch (IT) & NLI & 72.74\% (+**1.09\%**) & 64.53\% (+**0.55\%**) & 72.78\% (+**1.37\%**) \\ \hline RTE3-ITA & RTE & 67.50\% (+**4.75\%**) & 60.12\% (+**5.55\%**) & 66.35\% (+**4.85\%**) \\ \hline RTE-2009 & RTE & 59.00\% (-0.75\%) & 31.09\% (-2.65\%) & 50.96\% (-1.46\%) \\ \hline \end{tabular} \end{table} Table 1: NLI (IT) results. In addition to the tests above, the architecture has been tested over the RTE datasets. We briefly recall that, unlike classical NLI, these datasets present only two labels: "Entailment" and "No-Entailment". Both our models produce three labels -- "Entailment", "Neutral", "Contradiction" -- as they were trained on SNLI and MNLI. The two-label mapping for this task maps both _Neutral_ and _Contradiction_ to _No-Entailment_, as this maximises the accuracy on the validation set. Results for the RTE3-ITA and RTE-2009 test sets are reported in Tab. 1 too. The performance difference between the two architectures may be explained by the difference in quality of the target language the two architectures have been exposed to during training. In fact, the Machine Translation-based architecture has been trained on a machine translated dataset, whereas the KD-based architecture was trained on a manually translated dataset. This supposition can be made because this dataset has been manually translated into Italian, and hence presents a better language quality than the NLI datasets translated into the target language. #### 4.2.1 ABSA results Aspect-Based Sentiment Analysis at EVALITA (ABSITA), detailed in [2], is an ABSA dataset. It contains Italian hotel reviews that may touch on different topics (such as price, location, cleanliness, etc.), with a sentiment associated with each topic (knowing that sentiments for different topics may be contrasting). By choosing arbitrary NLI hypotheses, this dataset may emulate a total of three (3) different tasks, namely SA, TR, and ABSA. The core idea behind this setting is the desire to query a text -- in NLI terms, a set of premises (e.g. a set of reviews) -- in an unsupervised way, and receive specific answers from a predefined list of answers (e.g. the presence of a topic from a list of topics). In the case of open answers, a question-answering architecture would have been more suitable. _Sentiment Analysis_ (SA) is the task of recognising the overall sentiment of a sentence. As detailed above, we would like to exploit the models to apply SA in an unsupervised manner -- to do this, we fix a hypothesis arbitrarily. We assume that the hypothesis we have chosen captures the logical implication that is the core of NLI. Results for the ABSITA dataset are detailed in Tab. 2. Note that the hypothesis has been arbitrarily set to "S \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline Dataset & Balancing & Task & Acc.
& Min F1 & Macro-Avg F1 \\ \hline ABSITA & 1:1 & SA & 88.12\% (+**3.08\%**) & 86.89\% (+**3.19\%**) & 88.02\% (+**3.09\%**) \\ \hline ABSITA & 1:1 & TR & 68.09\% (-3.1\%) & 65.75\% (-3.01\%) & 67.97\% (-3.16\%) \\ \hline ABSITA & 1:7 & TR & 71.11\% (+**5.27\%**) & 37.94\% (-0.77\%) & 59.56\% (+**2.04\%**) \\ \hline ABSITA & 1:1 & ABSA & 94.03\% (+**6.24\%**) & 93.90\% (+**6.65\%**) & 94.02\% (+**6.35\%**) \\ \hline ABSITA & 1:15 & ABSA & 78.42\% (+**11.39\%**) & 37.66\% (+**8.3\%**) & 62.30\% (+**8.37\%**) \\ \hline \end{tabular} \end{table} Table 2: ABSITA results, over the Sentiment Analysis and Topic Recognition tasks satisfied"), hence "Entailment" refers to the model predicting positive sentiment. The two-label mapping for this task maps _Neutral_ to _Entailment_. _Topic Recognition_ (TR) is the task of recognising whether or not a sentence is about a given topic. As detailed above, we would like to exploit the models to apply TR in an unsupervised manner -- to do this, we fix a hypothesis arbitrarily. We assume that the hypothesis we have chosen captures the logical implication that is the core of NLI. Results for the ABSITA dataset are detailed in Tab. 2. The seven (7) in the "Balancing" column stands for the number of different topics in the dataset. The 1:1 balancing has been obtained by randomly sampling sentences from the seven (7) classes that do not compose the target. The two scenarios have been proposed to extensively test the generalisation capability of the models. Note that the hypothesis has been arbitrarily set to "Parlo di pulizia" ("I'm talking about cleanliness"), hence "Entailment" refers to the model predicting the label "cleanliness". The two-label mapping for this task maps _Neutral_ to _Entailment_. _Aspect-Based Sentiment Analysis_ (ABSA) is the task of recognising the sentiment about each sub-topic in a sentence. As detailed above, we would like to exploit the models to apply ABSA in an unsupervised manner -- to do this, we fix a hypothesis arbitrarily. We assume that the hypothesis we have chosen captures the logical implication that is the core of NLI. Results for the ABSITA dataset are detailed in Tab. 2. Note that the hypothesis has been arbitrarily set to "La camera è pulita" ("The room is clean"), hence "Entailment" refers to the model predicting positive sentiment and the "cleanliness" label. The two-label mapping for this task maps _Neutral_ to _Contradiction_. ## 5 Conclusions and Discussions To interpret the apparently decent results for NLI in the source language, listed in Sec. 4, we need to consider the fact that, during training, sentence encoders do not look at both inputs simultaneously, hence generating good but not top-tier performances. Potentially, we could have obtained slightly better results by making use of a word encoder instead of a sentence encoder, at the cost of a large computational overhead. To address various industrial tasks, we decided to prioritise scalability and responsiveness. The discussed architecture, based on KD, proved to perform better than the other architecture -- which was directly trained on machine translated NLI datasets -- despite having an objective disadvantage. We stress the fact that the proposed architecture was never directly trained over any kind of Italian NLI data. Compared to the other methodology, the KD approach presents the following advantages: 1. Easier to extend models: we just require a few samples for the new languages. 2.
Lower hardware requirements: machine translation -- which is an expensive task -- is not needed as an intermediate step. To test our models' performance on SA, TR, and ABSA, we employed arbitrary hypotheses. We tried our best to avoid any biases (e.g. hypotheses were chosen by colleagues who had never looked at the datasets), but we acknowledge that some bias may have been introduced. This is currently considered an open problem. Different architectures have been tested, showing that it is possible to obtain reasonable accuracies over different NLP tasks by fine-tuning a single architecture based on sentence embeddings over the NLI task. We showed that various NLP problems may be mapped onto an NLI task -- in this way, we empirically proved the generality of the NLI task. We would like to stress that there is no need to re-train any model to obtain the results on each specific task. Moreover, NLI models have lately found important academic usage in boosting the consistency and accuracy of NLP models without fine-tuning or re-training [12]. This is because models should demonstrate internal self-consistency, in the sense that their predictions across inputs should imply logically compatible beliefs about the world -- NLI models are trained to achieve that understanding.
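To make the hypothesis-based mapping concrete, the sketch below shows how TR and ABSA can be recast as NLI with a fixed hypothesis and the two-label mapping described above. It is illustrative only: the `predict_nli` callable stands for the trained classifier of Sec. 3, the example premise is invented, and SA uses an analogous fixed hypothesis expressing satisfaction.

```python
# Illustrative sketch of recasting downstream tasks as NLI with a fixed,
# arbitrarily chosen hypothesis, as in the ABSITA experiments.
# `predict_nli(premise, hypothesis)` stands for the trained classifier of
# Sec. 3 and is assumed to return a dict of probabilities over the three
# NLI labels ("entailment", "neutral", "contradiction").
HYPOTHESES = {
    "TR":   "Parlo di pulizia",    # "I'm talking about cleanliness"
    "ABSA": "La camera è pulita",  # "The room is clean"
}

# Two-label mapping used in the paper: for TR, "neutral" counts as
# "entailment"; for ABSA, "neutral" counts as "contradiction".
NEUTRAL_MAPPING = {"TR": "entailment", "ABSA": "contradiction"}

def classify(premise: str, task: str, predict_nli) -> str:
    probs = predict_nli(premise, HYPOTHESES[task])
    label = max(probs, key=probs.get)
    return NEUTRAL_MAPPING[task] if label == "neutral" else label

# Example (invented review): classify("La stanza era sporca", "ABSA", predict_nli)
# would ideally return "contradiction" (negative sentiment on cleanliness).
```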
2310.11026
Exploring Automatic Evaluation Methods based on a Decoder-based LLM for Text Generation
Automatic evaluation of text generation is essential for improving the accuracy of generation tasks. In light of the current trend towards increasingly larger decoder-based language models, we investigate automatic evaluation methods based on such models for text generation. This paper compares various methods, including tuning with encoder-based models and large language models under equal conditions, on two different tasks, machine translation evaluation and semantic textual similarity, in two languages, Japanese and English. Experimental results show that compared to the tuned encoder-based models, the tuned decoder-based models perform poorly. The analysis of the causes for this suggests that the decoder-based models focus on surface word sequences and do not capture meaning. It is also revealed that in-context learning of very large decoder-based models such as ChatGPT makes it difficult to identify fine-grained semantic differences.
Tomohito Kasahara, Daisuke Kawahara
2023-10-17T06:53:00Z
http://arxiv.org/abs/2310.11026v1
# Exploring Automatic Evaluation Methods ###### Abstract Automatic evaluation of text generation is essential for improving the accuracy of generation tasks. In light of the current trend towards increasingly larger decoder-based language models, we investigate automatic evaluation methods based on such models for text generation. This paper compares various methods, including tuning with encoder-based models and large language models under equal conditions, on two different tasks, machine translation evaluation and semantic textual similarity, in two languages, Japanese and English. Experimental results show that compared to the tuned encoder-based models, the tuned decoder-based models perform poorly. The analysis of the causes for this suggests that the decoder-based models focus on surface word sequences and do not capture meaning. It is also revealed that in-context learning of very large decoder-based models such as ChatGPT makes it difficult to identify fine-grained semantic differences. ## 1 Introduction Neural network-based text generation models are used in various natural language processing tasks, including machine translation, dialogue systems, and text summarization. However, the outputs from these models are open-ended, and there is no single correct answer, making the evaluation of generations difficult. Manual evaluation is often used due to its high accuracy but incurs significant temporal and financial costs. Therefore, automatic evaluation is essential for the rapid development of text generation models. Automatic evaluation methods for text generation, such as BLEU Papineni et al. (2002) and ROUGE Lin (2004), have been based mainly on surface word overlaps between the generated text and the reference text. In recent years, with the development of self-supervised models such as BERT Devlin et al. (2019) and BART Lewis et al. (2020), more accurate automatic evaluation methods have been proposed. For example, BERTScore Zhang et al. (2020) uses word embeddings obtained by these models. Such methods can be classified along two axes: whether the model used is an encoder-based, decoder-based, or encoder-decoder-based architecture of Transformer Vaswani et al. (2017), and whether tuning is performed. While encoder-based methods with tuning are reported to be highly accurate Rei et al. (2020), in-context learning without tuning is the mainstream in decoder-based methods. In recent years, self-supervised decoder-based models have become larger and larger, as seen in GPT-4 OpenAI (2023), Megatron-Turing Smith et al. (2022), and PaLM Chowdhery et al. (2022). These decoder-based self-supervised large language models are referred to as **LLMs** in this paper. However, encoder-based models have remained relatively smaller than decoder-based ones. Based on the above situation, this paper compares various methods, including tuning with encoder-based models and LLMs under equal conditions, on two different tasks, machine translation evaluation and semantic textual similarity (STS), in two languages, Japanese and English. The results revealed the following three observations. 1. When a decoder-based model is tuned, the accuracy is proportional to the model size up to a certain model size, but it reaches a ceiling. 2. Compared to tuned encoder-based models, tuned decoder-based models perform poorly. 3. In-context learning of very large decoder-based models such as ChatGPT1 makes it difficult to identify fine-grained semantic differences. 
Footnote 1: [https://openai.com/chatgpt](https://openai.com/chatgpt) The analysis of the causes for the poor performance of the tuned decoder-based models suggests that they focus on surface word sequences and do not capture meaning. Note that our study focuses on evaluation methods under the assumption that reference text is available. ## 2 Related Work Automatic evaluation of text generation mainly requires the text generated by a model and the reference text. The classic automatic evaluation metrics, such as BLEU, ROUGE, METEOR Banerjee and Lavie (2005), and CIDEr Vedantam et al. (2015), are based on the n-gram overlap between these two texts. The biggest disadvantage of these metrics is that they do not score well even when synonyms are included, as the n-grams must match exactly for a higher score. TER Snover et al. (2006) and others that base their evaluation on edit distance have similar drawbacks. METEOR aims to overcome this drawback by using a synonym dictionary, but it is unable to perform context-sensitive synonym evaluation. Using embeddings derived from self-supervised models, synonyms can be judged to be similar based on their context. BERTScore Zhang et al. (2020) is a method that embeds the generated text and the reference text respectively by an encoder-based model and calculates a score based on their similarity. BARTScore Yuan et al. (2021) and T5Score Qin et al. (2022) input the source text to the encoder and the target text to the decoder, and calculate a score based on the generation probability of the target text. GPTScore Fu et al. (2023) calculates a score based on the generation probability of the target text by applying in-context learning Brown et al. (2020) to an LLM. G-Eval Liu et al. (2023) proposes a method to have an LLM generate scores directly. In addition, Chen et al. (2023) show that directly generated scores are more accurate than generation probability-based ones when using LLMs. Other evaluation methods increase accuracy by fine-tuning a self-supervised model using datasets consisting of text pairs and their similarity labels. Models trained on translation evaluation datasets include BLEURT Sellam et al. (2020) and COMET Rei et al. (2020), while models trained on STS datasets include Sentence-BERT Reimers and Gurevych (2019). There are also methods such as SimCSE Gao et al. (2021) that learn sentence embeddings by contrastive learning on natural language inference datasets and use them to calculate text pair similarity. Most of these self-supervised methods use encoder-based models. InstructScore Xu et al. (2023) is a method of fine-tuning LLaMA Touvron et al. (2023). However, Xu et al. (2023)'s experiments did not involve tuned LLMs on the target datasets and did not compare them to encoder-based models under equal conditions. In this study, we compare LLMs, which do not have bidirectional attention but larger model size, with encoder-based models, which have bidirectional attention but smaller model size, by tuning them under equal conditions. ## 3 Experimental Setup We compare various methods for text generation evaluation, including tuned encoder-based models and LLMs on equal conditions, on two different tasks, machine translation evaluation and STS, in two languages, Japanese and English. ### Datasets #### 3.1.1 Datasets in English For the experiments in English, we use WMT20 Mathur et al. (2020) and WMT21 Freitag et al. (2021) as the translation evaluation datasets, and STS-B Cer et al. (2017) and SICK Marelli et al. 
(2014) as the datasets for STS. WMT20 and WMT21 include human-translated texts, machine-translated texts, and their evaluation labels of Direct Assessment (DA) and Multidimensional Quality Metrics (MQM). In our experiments, we adopted the MQM labels that were evaluated by experts and native speakers. Since only the Chinese-to-English translation task is labeled with MQM, we use its datasets (WMT20 MQM and WMT21 MQM). STS and SICK consist of sentence pairs and their similarity labels. Note that for WMT20 and WMT21, the datasets were not pre-separated into train, valid, and test, and we randomly split these datasets with a ratio of 8:1:1. #### 3.1.2 Datasets in Japanese The datasets used in the experiments in Japanese are the WMT20 English to Japanese translation task (WMT20 en-ja) and JSTS included in the Japanese General Language Understanding Evaluation (JGLUE) Kurihara et al. (2022) benchmark. The WMT20 dataset includes human-translated texts, machine-translated texts, and their evaluation labels (Direct Assessment). JSTS is an STS dataset for Japanese, consisting of sentence pairs and their similarity labels. Note that WMT20 en-ja was randomly split at a ratio of train:valid:test=8:1:1 as in the English datasets. ### Tuning of LLMs For the method by LLM tuning, we performed LoRA-tuning of LLMs using datasets of text pairs and their evaluation or similarity labels. We chose LoRA-tuning because it can achieve competitive accuracy with fine-tuning at a lower cost (Hu et al., 2021). #### 3.2.1 Architecture and Input-Output Relationships The architecture and input-output relationship of the LLM's tuning are shown in Figure 1. Given a text pair as an input to the model, their similarity value is returned as an output. The following procedure is used to calculate the similarity. 1. Feed each text of a text pair into an LLM. 2. Obtain the embedding corresponding to the token at the end of each text (the preceding token of the EOS token). 3. Calculate the cosine similarity between the two embeddings. 4. Pass the cosine similarity to a 1-layer FNN and regard its output as the similarity of the text pair. The FNN layer is used to convert the cosine similarity values into a label distribution of the dataset. Based on the results of our preliminary experiments, we decided to use the embedding of the token at the end of a text instead of the special EOS token. #### 3.2.2 Training Method The gold labels (similarity values) in the dataset are normalized between 0 and 1 in advance. We calculate the similarity of a text pair using the procedure described in Section 3.2.1. Next, only the parameters newly added to the model (including the parameters of the FNN) are updated based on the mean squared error between the predictions and the gold labels. Furthermore, the initial values of the FNN are set to 1 for weight and 0 for bias. We employ LoRA-tuning as the tuning method of the LLM for its high performance. For experiments in English, we use the Cerebras \begin{table} \begin{tabular}{l|l|c|c||c|c|c|c} \hline Method & Model & Architecture & Size & WMT20 & WMT21 & STS-B & SICK \\ \hline \hline \multicolumn{2}{l|}{**No-Tuning Methods**} & & & & & & & \\ \hline BLEU & - & - & - & 0.109 & 0.120 & 0.244 & 0.354 \\ Edit Distance & - & - & - & 0.345 & 0.340 & 0.089 & 0.278 \\ BERTScore & RoBERTa-large & Encoder & 355M & 0.306 & 0.294 & 0.405 & 0.455 \\ BARTScore CNN+Para & BART-large & Enc-Dec & 406M & 0.225 & 0.219 & 0.475 & 0.505 \\ OpenAI Embeddings & text-embedding-ada-002 & Encoder &? 
& 0.184 & 0.181 & 0.655 & 0.627 \\ ChatGPT Zero-Shot & gpt-3.5-turbo & Decoder &? & 0.113 & 0.097 & 0.669 & 0.622 \\ ChatGPT Few-Shot & gpt-3.5-turbo & Decoder &? & 0.175 & 0.136 & 0.618 & 0.656 \\ \hline \multicolumn{2}{l|}{**Tuning Methods (Not Target Dataset)**} & & & & & & \\ \hline BLEURT-20 & RemBERT & Encoder & 576M & 0.345 & 0.323 & 0.620 & 0.574 \\ InstructScore & LLaMA & Decoder & 6.7B & 0.439 & 0.345 & 0.471 & 0.526 \\ \hline \multicolumn{2}{l|}{**Tuning Methods (Target Dataset)**} & & & & & & \\ \hline COMET (WMT21 MQM) & XLM-RoBERTa-large & Encoder & 560M & 0.506 & 0.362 & **–** & **–** \\ RoBERTa Fine-Tuning & RoBERTa-large & Encoder & 355M & **0.699** & **0.391** & **0.737** & **0.658** \\ & & 111M & 0.589 & 0.362 & 0.540 & 0.425 \\ & & 256M & 0.634 & 0.378 & 0.585 & 0.462 \\ LLM LoRA-Tuning & Cerebras-GPT & Decoder & 590M & 0.654 & 0.371 & 0.616 & 0.486 \\ & & 1.3B & 0.663 & 0.383 & 0.625 & 0.483 \\ & & 2.7B & 0.671 & 0.377 & 0.661 & 0.512 \\ & & 6.7B & 0.665 & 0.370 & 0.681 & 0.530 \\ \hline \hline \end{tabular} \end{table} Table 1: Kendall’s correlation coefficients between the predictions by the automatic evaluation metrics and the labels in the experiments in English. Figure 1: The architecture and input-output overview of the LLM’s tuning. GPT models2 with parameter sizes ranging from 111M to 6.7B. These models are tuned on WMT20 MQM for the translation evaluation task and on STS-B for the STS tasks, respectively. In other words, the models trained with WMT20 MQM are evaluated on WMT20 MQM and WMT21 MQM, and the models trained with STS-B are evaluated on STS-B and SICK. Footnote 2: [https://huggingface.co/cerebas](https://huggingface.co/cerebas) For experiments in Japanese, we use the GPT-2 and GPT-NeoX models developed by rinna3, ranging from the 37M model to the 3.6B model. We trained models on each of the two datasets in Section 3.1.2. Footnote 3: [https://huggingface.co/rinna](https://huggingface.co/rinna) ### Baselines For comparison, we adopt the following baselines: BLEU, character edit distance, fine-tuned RoBERTa-large (Liu et al., 2019), BERTScore4, BARTScore5, OpenAI Embeddings (Neelakantan et al., 2022), in-context learning of ChatGPT (gpt-3.5-turbo), BLEURT6, COMET7 and InstructScore8. For fine-tuned RoBERTa, as described in Section 3.2.2, we trained models on WMT20 MQM and STS-B for the English experiments and on the two datasets shown in Section 3.1.2 for the Japanese experiments, respectively. For BERTScore, the training data is used to select the best output layer to obtain the embeddings. For OpenAI Embeddings, the scores are the cosine similarity of the obtained embeddings. The prompt used in ChatGPT's in-context learning is shown in Appendix A. We also had a preliminary experiment with in-context learning of Cerebras-GPT as well as ChatGPT, but were unable to generate scores successfully. It is assumed that the model size of few billion is too small for in-context learning. We do not tune BLEURT, but instead use BLEURT-20 (Pu et al., 2021), which is trained in multiple languages. For COMET, we use the model trained on WMT21 MQM. We do not apply COMET to the STS datasets because COMET is a metric for automatic translation evaluation and requires three inputs: pre-translated text, human-translated text, and machine-translated text. Our hyperparameters for training are shown in Appendix B. 
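As an illustration of the tuning architecture described in Section 3.2.1, the following PyTorch sketch computes a similarity score from last-token embeddings of a decoder LM, cosine similarity, and a 1-layer FNN trained with MSE against the normalized gold labels. It is not the exact training code; the last-token indexing and the module names are assumptions.

```python
import torch
import torch.nn as nn

class SimilarityScorer(nn.Module):
    """Sketch of the scoring head of Section 3.2.1: last-token embeddings
    from a (LoRA-tuned) decoder LM, cosine similarity, then a 1-layer FNN."""

    def __init__(self, lm):
        super().__init__()
        self.lm = lm                      # e.g. a Cerebras-GPT model wrapped with LoRA
        self.fnn = nn.Linear(1, 1)
        nn.init.ones_(self.fnn.weight)    # weight initialised to 1 (Section 3.2.2)
        nn.init.zeros_(self.fnn.bias)     # bias initialised to 0

    def embed(self, input_ids, attention_mask):
        out = self.lm(input_ids=input_ids, attention_mask=attention_mask,
                      output_hidden_states=True)
        hidden = out.hidden_states[-1]                       # (batch, seq, dim)
        # Position of the last text token of each sequence; the paper uses the
        # token preceding EOS, so exact indexing depends on how EOS/padding is
        # handled by the tokenizer.
        last = attention_mask.sum(dim=1) - 1
        return hidden[torch.arange(hidden.size(0)), last]

    def forward(self, ids_a, mask_a, ids_b, mask_b):
        ea = self.embed(ids_a, mask_a)
        eb = self.embed(ids_b, mask_b)
        cos = nn.functional.cosine_similarity(ea, eb, dim=-1)
        return self.fnn(cos.unsqueeze(-1)).squeeze(-1)       # predicted similarity

# Training: minimise nn.functional.mse_loss(scorer(...), gold) with gold labels
# normalised to [0, 1]; only the LoRA parameters and the FNN are updated.
```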
Footnote 4: [https://github.com/Tiitiger/bert_score](https://github.com/Tiitiger/bert_score) Footnote 5: [https://github.com/neulab/BARTScore](https://github.com/neulab/BARTScore) Footnote 6: [https://github.com/google-research/bluetur](https://github.com/google-research/bluetur) Footnote 7: [https://unbabel.github.io/COMET](https://unbabel.github.io/COMET) Footnote 8: [https://github.com/xu1998hz/SEScore3](https://github.com/xu1998hz/SEScore3) Note that BARTScore, COMET, and InstructScore, only support English and hence are not used for experiments in Japanese. ## 4 Experimental Results and Analysis ### Main Results Kendall's correlation coefficients between the predictions by the automatic evaluation metrics and the gold labels in English and Japanese are shown in Tables 1 and 2, respectively. For all datasets in both languages, RoBERTa-large with fine-tuning achieved the highest accuracy. For LoRA-tuned LLMs, there is a tendency for the accuracy to be proportional to the model size up to a certain model size, but it reaches a ceiling. Also, even models with overwhelmingly larger parameter sizes than \begin{table} \begin{tabular}{l|l|c|c||c|c} \hline \hline Method & Model & Architecture & Size & WMT20 & JSTS \\ \hline \hline \multicolumn{6}{l}{**No-Tuning Methods**} \\ \hline BLEU & - & - & - & 0.226 & 0.353 \\ Edit Distance & - & - & - & 0.242 & 0.321 \\ BERTScore & Waseda RoBERTa-large & Encoder & 337M & 0.319 & 0.558 \\ OpenAI Embeddings & text-embedding-ada-002 & Encoder &? & 0.237 & 0.611 \\ ChatGPT Zero-Shot & gpt-3.5-turbo & Decoder &? & 0.187 & 0.709 \\ ChatGPT Few-Shot & gpt-3.5-turbo & Decoder &? & 0.205 & 0.690 \\ \hline \multicolumn{6}{l}{**Tuning Methods (Not Target Dataset)**} \\ \hline BLEURT-20 & RemBERT & Encoder & 576M & 0.315 & 0.569 \\ \hline \multicolumn{6}{l}{**Tuning Methods (Target Dataset)**} \\ \hline RoBERTa Fine-Tuning & Waseda RoBERTa-large & Encoder & 337M & **0.396** & **0.729** \\ & & & 37M & 0.342 & 0.600 \\ & & & 110M & 0.378 & 0.644 \\ LLM LoRA-Tuning & Rinna-gpt & Decoder & 336M & **0.396** & 0.677 \\ & & & 1.3B & 0.370 & 0.659 \\ & & & 3.6B & 0.380 & 0.687 \\ \hline \hline \end{tabular} \end{table} Table 2: Kendall’s correlation coefficients between the predictions by the automatic evaluation metrics and the labels in the experiments in Japanese. RoBERTa-large showed low accuracy. For ChatGPT's in-context learning, the accuracy on the STS datasets was comparable to that of the tuning-based methods, but its accuracy on the translation evaluation datasets was low. Note that most of the p-values were very close to 0. ### Analysis of Why Tuned LLMs are Inferior From Tables 1 and 2, we observe that LoRA-tuned LLMs, which have by far a larger number of parameters than RoBERTa-large, are inferior in terms of performance. We analyze the causes of this from the experimental results in English. The most significant difference between the two models is that RoBERTa, an encoder-based model, has bidirectional attention, while an LLM has unidirectional attention. Here, we hypothesized that unidirectional attention focuses more on surface word sequences as opposed to bidirectional attention. To confirm this hypothesis, we calculated the correlations of the predictions of RoBERTa and LLMs to BLEU and character edit distance, which are the metrics based on superficial word sequences. The results are shown in Table 3. As hypothesized, the results show that the correlations to both BLEU and edit distance are stronger for LLMs than the encoder-based model. 
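The correlation analyses above can be reproduced with a few lines. The helper below is illustrative (variable names are assumptions): it computes Kendall's tau both against the gold labels, as in Tables 1 and 2, and against a surface metric such as BLEU or edit distance, as in Table 3.

```python
from scipy.stats import kendalltau

def kendall(predictions, targets):
    """Kendall's correlation coefficient between two lists of scores."""
    tau, p_value = kendalltau(predictions, targets)
    return tau, p_value

# Agreement with human judgements (Tables 1 and 2):
# tau_gold, _ = kendall(model_scores, gold_labels)

# Dependence on surface word overlap (Table 3): a higher correlation with
# BLEU or edit distance suggests the metric leans on surface word sequences.
# tau_bleu, _ = kendall(model_scores, bleu_scores)
```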
The fact that the correlation decreases as the model size increases in LLMs suggests that the larger the model size, the better the prediction is able to capture not only the surface word sequences but also the meaning of the text. However, even with a model size of 6.7B, the LLM is still not as accurate as RoBERTa. ### Analysis of the Inability of ChatGPT's In-context Learning While ChatGPT's in-context learning showed high accuracy on the STS datasets, it did not perform well on the translation evaluation datasets. We analyze the causes of this from the experimental results in English. In our experiments, the prompts were created to score on a scale of 0 to 100. However, in the output scores, there were many cases where the last digit was 0 or 5 in both zero-shot and few-shot settings. Also, as shown in Figure 2, the label distributions of the translation evaluation datasets are skewed between 0.9 and 1.0, compared to the STS datasets, which have gently sloping distributions. Therefore, most of the predictions in the translation evaluation datasets are 95, etc., and this is thought to have caused the accuracy drop. Thus, it is clear that ChatGPT's in-context learning has difficulty in identifying fine-grained semantic differences. ## 5 Conclusion In this paper, we compared various automatic evaluation methods for text generation in two languages, Japanese and English. We showed that fine-tuned encoder-based models are the strongest when training data is available, and in-context learning of ChatGPT is equally accurate when the variance of scores is large. Our analysis also revealed that tuned LLMs are less accurate than tuned encoder \begin{table} \begin{tabular}{l|c||c|c|c|c|c|c|c|c} \hline Model & Size & \multicolumn{4}{c}{BLEU} & \multicolumn{4}{c}{Edit Distance} \\ & & WMT20 & WMT21 & STS-B & SICK & WMT20 & WMT21 & STS-B & SICK \\ \hline \hline RoBERTa-large & 355M & 0.126 & 0.127 & 0.237 & 0.363 & 0.394 & 0.511 & 0.046 & 0.262 \\ \hline \multirow{4}{*}{Cerebras-GPT} & 111M & 0.213 & 0.212 & 0.281 & 0.553 & 0.491 & 0.612 & 0.077 & 0.491 \\ & 256M & 0.192 & 0.216 & 0.292 & 0.553 & 0.455 & 0.615 & 0.081 & 0.486 \\ \cline{1-1} & 590M & 0.187 & 0.211 & 0.268 & 0.559 & 0.432 & 0.583 & 0.087 & 0.487 \\ \cline{1-1} & 1.3B & 0.175 & 0.225 & 0.277 & 0.545 & 0.425 & 0.574 & 0.096 & 0.483 \\ \cline{1-1} & 2.7B & 0.178 & 0.231 & 0.263 & 0.549 & 0.428 & 0.567 & 0.058 & 0.478 \\ \cline{1-1} & 6.7B & 0.181 & 0.205 & 0.259 & 0.552 & 0.441 & 0.522 & 0.068 & 0.472 \\ \hline \end{tabular} \end{table} Table 3: Kendall’s correlations between the metrics based on superficial word sequences and the predictions by models with tuning in the experiments in English. Figure 2: Label distribution of the test datasets used in the English experiments. -based models because of their focus on surface word sequences. ## Limitations Our experiments assume the presence of a training dataset. If no dataset for training exists, refer to the results without the **Tuning Method (Target Dataset)** to compare the metrics in Tables 1 and 2. ## Acknowledgements This work was supported by a joint research grant from LINE Corporation.
2310.16749
DISCO: A Large Scale Human Annotated Corpus for Disfluency Correction in Indo-European Languages
Disfluency correction (DC) is the process of removing disfluent elements like fillers, repetitions and corrections from spoken utterances to create readable and interpretable text. DC is a vital post-processing step applied to Automatic Speech Recognition (ASR) outputs, before subsequent processing by downstream language understanding tasks. Existing DC research has primarily focused on English due to the unavailability of large-scale open-source datasets. Towards the goal of multilingual disfluency correction, we present a high-quality human-annotated DC corpus covering four important Indo-European languages: English, Hindi, German and French. We provide extensive analysis of results of state-of-the-art DC models across all four languages obtaining F1 scores of 97.55 (English), 94.29 (Hindi), 95.89 (German) and 92.97 (French). To demonstrate the benefits of DC on downstream tasks, we show that DC leads to 5.65 points increase in BLEU scores on average when used in conjunction with a state-of-the-art Machine Translation (MT) system. We release code to run our experiments along with our annotated dataset here.
Vineet Bhat, Preethi Jyothi, Pushpak Bhattacharyya
2023-10-25T16:32:02Z
http://arxiv.org/abs/2310.16749v1
# DISCO: A Large Scale Human Annotated Corpus for Disfluency Correction in Indo-European Languages ###### Abstract Disfluency correction (DC) is the process of removing disfluent elements like fillers, repetitions and corrections from spoken utterances to create readable and interpretable text. DC is a vital post-processing step applied to Automatic Speech Recognition (ASR) outputs, before subsequent processing by downstream language understanding tasks. Existing DC research has primarily focused on English due to the availability of large-scale open-source datasets. Towards the goal of multilingual disfluency correction, we present a high-quality human-annotated DC corpus covering four important Indo-European languages: English, Hindi1, German and French. We provide extensive analysis of results of state-of-the-art DC models across all four languages obtaining F1 scores of 97.55 (English), 94.29 (Hindi), 95.89 (German) and 92.97 (French). To demonstrate the benefits of DC on downstream tasks, we show that DC leads to 5.65 points increase in BLEU scores on average when used in conjunction with a state-of-the-art Machine Translation (MT) system. We release code to run our experiments along with our annotated dataset here2. Footnote 1: Version of this paper that includes Hindi samples can be found here. Footnote 2: [https://github.com/vineet2104/DISCO](https://github.com/vineet2104/DISCO) ## 1 Introduction Humans often think and speak simultaneously in conversations, introducing erroneous words in utterances Gupta et al. (2021). These words do not contribute to semantics of a sentence and hence can be removed to create fluent and easy-to-interpret utterances. Disfluency Correction (DC) is defined as the removal of such disfluent elements from spoken utterances Shriberg (1994). **Motivation:** Apart from making sentences readable and interpretable, DC also helps downstream natural language processing tasks like Machine Translation (MT) Rao et al. (2007); Wang et al. (2010). Removing disfluencies shortens sentences, making it easier for automatic MT systems to translate these utterances. Moreover, the removed erroneous words are not translated which makes the output translation fluent containing all semantics from the source sentence. Table 1 illustrates examples where Google MT produces disfluent and difficult-to-read English translations of disfluent sentences in 3 languages - Hindi, German and French, establishing the need for DC. Previous work in DC has leveraged variety of machine learning models for removing disfluent utterances from text Ostendorf and Hahn (2013); Rasooli and Tetreault (2015); Zayats et al. (2016). However, data in DC is scarce, limiting the use of large transformer models. Switchboard Godfrey et al. (1992), the most extensively available open-source DC corpus, contains English spoken utterances with only 5.9% disfluent words in the entire dataset Charniak and Johnson (2001). Synthetic Data Generation (SDG) has emerged as a viable solution to the data scarcity problem Passali et al. (2022); Kundu et al. (2022). However, SDG can be challenging as it needs expert grammatical knowledge and the data created can often fail to mimic complex disfluencies encountered in real-life dialogues Gupta et al. (2021). 
\begin{table} \begin{tabular}{l l} \hline \hline **Disfluent Sentence** & **Google MT output** \\ \hline je veux je veux euh enregistrer une une euh video sur instagram & I want I want uh record a uh video on instagram \\ \hline \hline \end{tabular} \end{table} Table 1: Examples of disfluent sentences and their Google MT outputs. Hence there is a dire need to develop DC datasets with utterances from real-life conversational situations. Existing datasets have focused on increasing the available data in English. This paper presents a high-quality DC corpus in English and widely spoken languages like Hindi, German and French. Our dataset significantly expands the available data in English and Hindi. To the best of our knowledge, we are the first to create an open-source DC corpus for German and French3. Our contributions are: Footnote 3: Although Cho et al. (2014) annotated the KIT lecture corpus (Stuker et al., 2012) for disfluencies in German, their data is not shared publicly. 1. A human-labeled dataset of 12K+ disfluent-fluent text utterance pairs in 4 languages: English, Hindi, German and French, with extensive data analysis (Section 3.4). 2. Experimenting with various state-of-the-art techniques for DC, ranging from traditional ML models to large transformers (Table 5). Our best models (fine-tuned multilingual transformers) achieve an F1 score of 97.55 (English), 94.29 (Hindi), 95.89 (German) and 92.97 (French). Our results in English and Hindi are competitive with other approaches, but we do not report direct improvement due to the different testing datasets used. 3. Improving the BLEU score of a state-of-the-art MT system by 5.65 points in the Hindi-English and German-English language pairs after automatic disfluency removal from source sentences (Table 10). Similar analyses for other language pairs are part of our future work. ## 2 Related Work The study of disfluencies as a spoken language phenomenon was first proposed in Shriberg (1994). DC has been established as a vital post-processing task for ASR transcripts Rao et al. (2007); Wang et al. (2010). Although earlier DC systems were based on translation methods (Honal and Schultz, 2003), current research covers two additional methods: parsing-based and sequence tagging-based techniques. Translation-based methods use a noisy channel approach towards DC, hypothesizing that disfluent sentences are fluent sentences with noisy elements (Jamshid Lou and Johnson, 2017; Johnson and Charniak, 2004; Zwarts and Johnson, 2011). \begin{table} \begin{tabular}{p{142.3pt} p{142.3pt}} \hline **Disfluency Type** & **Description** \\ \hline \hline Filler & Words like _uhh_, _err_, _uhmm_ that are often uttered to retain turn of speaking. Each language has a different set of filler words commonly uttered. \\ Repetition & Consists of words or phrases that are repeated in conversational speech \\ Correction & Disfluencies that consist of words incorrectly spoken and immediately corrected with a fluent phrase \\ False Start & Examples where the speaker changes their chain-of-thought mid-sentence to utter a completely different fluent phrase \\ Fluent & Examples which do not contain any disfluent words or phrases \\ \hline \end{tabular} \end{table} Table 2: Types of sentences observed in the DISCO corpus. All disfluencies are marked in red; EN-English, DE-German, FR-French, HI-Hindi. Examples in languages other than English, with their corresponding gloss and transliteration, can be found in Appendix E.
Parsing-based methods use techniques such as dependency parsing to predict syntactic structure of an utterance along with disfluent elements (Rasooli and Tetreault, 2015; Honnibal and Johnson, 2014; Wu et al., 2015). Sequence tagging methods work well for disfluency removal from real-life spoken utterances, assigning disfluent/fluent label to every word in the sentence (Hough and Schlangen, 2015; Ostendorf and Hahn, 2013; Zayats et al., 2016; Chen et al., 2022). Language clues and part-of-speech tags based systems have also been explored for DC (Bove, 2008; Christodoulides et al., 2014). There is a notable gap in literature regarding real data annotation in DC, with Switchboard (Godfrey et al., 1992) and Salesky et al. (2018) being the most extensive open-source labeled datasets for English DC. Although Gupta et al. (2021) introduced a dataset for disfluencies in English question answering, they have not been annotated for disfluent words. Without labeled data, various zero-shot, few-shot, and multi-task learning techniques have been proposed, which train on multilingual data, creating and utilizing synthetically generated disfluent sentences (Wang et al., 2018; Passali et al., 2022; Kundu et al., 2022; Bhat et al., 2023). In this work, we experiment with sequence tagging methods for DC. ## 3 DISCO: A Dataset for Disfluency Correction This section analyzes the DISCO corpus, created with the help of English, Hindi, German and French language experts. DISCO contains parallel disfluent-fluent sentence pairs in the above four languages and English translations of fluent sentences in Hindi and German along with disfluency and domain labels. ### Terminology Shriberg (1994) defines disfluencies as a composition of Reparandum, Interregnum and Repair (Figure 1). _Reparandum_ refers to words erroneously uttered by the speaker. The speaker acknowledges that a previous utterance might be incorrect using _interregnum_, whereas _repair_ contains words that correct mis-spoken words. Disfluent utterances might consist of an interruption point- a spoken phenomena like speech pauses. DC removes reparandum and interregnum while retaining repair to make the output sentence more fluent. We study four types of disfluencies observed in our dataset: Filler, Repetition, Correction and False Start. Additionally, there are some fluent sentences present in our corpus. Table 2 describes each type of sentence with some real examples from the DISCO dataset. ### Data Collection Method Goel et al. (2023) released an open-source dataset containing real-life utterances of humans with AI agents for task-oriented dialogue parsing. We extract disfluent sentences and domain labels in English, Hindi, German and French from this corpus. These utterances consist of human dialogues like making notes, monitoring fitness, adding new contacts, opening apps, etc. All sentences are shared with respective language experts for fluent sentence creation and disfluency-type annotation. ### Annotation Protocol and Challenges For each language, we hired external annotators from reputed translation agencies with experience in data annotation. They were asked to create fluent sentences corresponding to disfluent utterances along with disfluency type labels. Each annotator was paid competitively based on current market standards (approximately $ 0.018 per word). 
Since we follow a sequence tagging approach towards DC, the annotators were asked to only remove disfluent words from utterances without changing word order or correcting original words/phrases. Due to budget constraints, we could not utilize the entire dataset in German and French from Goel et al. (2023). However, we carefully select sentences in these languages to sufficiently cover all disfluency types with varied length and complexity of utterances. Table 3 summarizes the total amount of data created and the amount of disfluency present in the corpus. Since for every language, only one annotator created fluent sentences and disfluency type labels, ensuring high quality data was very important. We Figure 1: A disfluent utterance in English, marked with various components of disfluencies. strongly encouraged the annotators to flag all dubious instances, after which the authors take a majority vote of retaining/removing doubtful disfluent words using high quality translation tools and subject knowledge wherever necessary. Flagged examples and our reasoning for specific annotations have been discussed in Appendix A. ### Key Statistics The DISCO corpus is carefully created to ensure healthy representation of various disfluency types and complexity of sentences. Table 4 describes average length of disfluent and fluent sentences for each language. Our analysis shows that in similar context, commands to AI agents are shorter in German and French than in English and Hindi. The standard deviation of the disfluent sentences demonstrates that the dataset also contains longer utterances, more than ten words long, in each language that are relatively difficult to correct. We showcase the distribution of disfluent sentences across disfluency types in figure 2. Our corpus also contains a good distribution of sentences across various task domains. Readers are urged to refer to Appendix B for the domain-level distribution and other important plots pertaining to the corpus. \begin{table} \begin{tabular}{l l l} \hline **Lang** & **Mean length** & **Mean length** \\ & **of disfluent sentences** & **of fluent sentences** \\ \hline En & 9.19 \(\pm\) 2.85 & 7.45 \(\pm\) 2.59 \\ Hi & 10.18 \(\pm\) 3.60 & 8.24 \(\pm\) 3.12 \\ De & 7.25 \(\pm\) 3.12 & 5.71 \(\pm\) 2.84 \\ Fr & 7.42 \(\pm\) 3.05 & 6.08 \(\pm\) 2.87 \\ \hline \end{tabular} \end{table} Table 4: Average length of disfluent and fluent utterances in the DISCO corpus for each language; English, Hi-Hindi, De-German, Fr-French. \begin{table} \begin{tabular}{l l l l} \hline **Lang** & **No. of sentence** & **No. of words** & **\% disfluent** \\ & **pairs** & & **words** \\ \hline En & 3479 & 31994 & 18.99 \\ Hi & 3180 & 32435 & 18.99 \\ De & 3096 & 22451 & 20.93 \\ Fr & 3005 & 22489 & 17.72 \\ \hline \end{tabular} \end{table} Table 3: Total count of disfluent-fluent pairs in DISCO and percentage of disfluency present; En-English, Hi-Hindi, De-German, Fr-French. Figure 2: Distribution of sentences across disfluency types for all four languages in DISCO. ### Helper Datasets We also use some helper datasets, extracting unlabeled sentences to enable few shot learning-based experiments on DISCO. **LARD:**: Contains synthetically generated English disfluent sentences using rule-based disfluency injection in fluent sentences (Passali et al., 2022). **Samanantar:**: Consists of 49.7 million parallel sentences between English and 11 Indic languages (Ramesh et al., 2021). 
Source sentences were collected across many domains such as newspapers, government public archives, Wikipedia, etc. The corpus consists of fluent sentences, and we only use Hindi sentences for our experiments. **GermEval 2014:**: Consists of 31K German fluent sentences collected from Wikipedia and various news corpora (Benikova et al., 2014). Originally used for Named Entity Recognition, we utilize unlabeled sentences from the train split. **DiaBLa:**: Released by Bawden et al. (2021), this corpus consists of 5700+ sentence pairs for English-French MT. The dataset is curated from written and informal interactions between native speakers in both languages. ## 4 Dataset Evaluation This section describes the experiments we perform to evaluate the DISCO corpus. Our evaluation strategy measures the efficacy of the corpus for robust disfluency correction in a wide variety of cases. Moreover, we also test the ability of our trained models to correct disfluencies for improving downstream machine translation. ### Data Processing All parallel sentence pairs are passed through a punctuation removal module to reduce the number of tokens for classification. As per the structure of disfluencies described in section 3.1, we consider fluent terms to always follow disfluent terms in an utterance. Disfluent utterances are marked with the positive label (1) and fluent utterances with the neutral label (0) (Kundu et al., 2022). ### Baseline Models We use a combination of smaller ML models, larger transformer models and transformers with adversarial training. All models are trained on an 80:10:10 train:valid:test split for each language. #### 4.2.1 ML Baselines Previous work has shown the efficacy of using Conditional Random Fields (CRFs) and Recurrent Neural Network (RNN) based techniques for token classification in DC (Ostendorf and Hahn, 2013; Hough and Schlangen, 2015). These models require fewer labeled data and are ideal for low-resource domain-specific training (Simpson et al., 2020). Token-level features from a powerful multilingual transformer, XLM-R (Conneau et al., 2020), were used for finetuning the CRF and RNN models. #### 4.2.2 Transformer Baselines Transformers (Vaswani et al., 2017) are large and powerful neural networks capable of learning complex text representations for many downstream NLP tasks. We experiment with three multilingual transformers: mBERT (Devlin et al., 2019), XLM-R (Conneau et al., 2020) and MuRIL (Khanuja et al., 2021). Finetuning for sequence tagging is performed by adding a classification head (on top of these transformers) that performs sub-word level binary prediction. Prediction of a word to be disfluent/fluent is the prediction of the first sub-word to be disfluent/fluent. #### 4.2.3 Transformer with Adversarial Training (Seq-GAN-BERT) In low-resource settings, adversarial training helps transformers improve the representations it learns for downstream tasks. We use the Seq-GAN-BERT model (Bhat et al., 2023), which supports adversarial training for transformers utilizing labeled and unlabeled data for token classification-based DC. Unlabeled data is used from helper datasets specified in section 3.5. We obtain the best results using MuRIL transformer as the base model in Seq-GAN-BERT. ### Experimental Setup CRF and RNN models are trained using the Flair-NLP framework (Akbik et al., 2019) till the validation cross-entropy loss saturates. We start with a learning rate of 0.1 and reduce it by half each time the model does not improve for three consecutive epochs. 
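Under the labeling convention of Section 4.1 and the annotation guideline that words are only deleted, never reordered or rewritten, token-level labels can be recovered by aligning each disfluent sentence with its fluent counterpart. The sketch below is one possible way to do this and is not necessarily the authors' script.

```python
def tag_disfluencies(disfluent: str, fluent: str):
    """Derive token labels (1 = disfluent, 0 = fluent) by aligning a
    disfluent utterance with its annotated fluent counterpart.

    Matching runs right-to-left so that, for repetitions and corrections,
    the retained (repair) words are the later ones, consistent with the
    reparandum-first structure described in Section 3.1.
    """
    dis, flu = disfluent.split(), fluent.split()
    labels = [1] * len(dis)
    j = len(flu) - 1
    for i in range(len(dis) - 1, -1, -1):
        if j >= 0 and dis[i] == flu[j]:
            labels[i] = 0
            j -= 1
    assert j < 0, "fluent sentence is not a deletion-only subsequence"
    return list(zip(dis, labels))

# Example (filler + repetition, invented utterance):
# tag_disfluencies("add a a uhh 2mile walk", "add a 2mile walk")
# -> [('add', 0), ('a', 1), ('a', 0), ('uhh', 1), ('2mile', 0), ('walk', 0)]
```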
Transformer models are trained using the popular transformers package (Wolf et al., 2020). We use a learning rate of 2e-5 and a weight decay of 0.01. All transformer models are trained for 40 epochs using the Adam optimizer (Kingma and Ba, 2014). #### 4.3.1 Hardware support All ML baselines were trained with A100 GPUs provided by Google Colab. Transformers were trained with one NVIDIA GeForce RTX P8-11GB GPU per experiment. ## 5 Results and Analysis We thoroughly analyse all experiments performed in DC. This section also discusses some case studies highlighting strengths and weaknesses of our best models. Our experiments in analyzing the impact of DC on MT provides interesting linguistic insights into the phenomenon of disfluencies. ### Disfluency Correction All results are reported using the F1 score metric [1, 16]. Combined results across all four languages are described in table 5. As the complexity of models increases, the overall accuracy also increases. Transformer architectures perform better than CRF and RNN-based models consistently. In each language, the best models produce 90+ F1 scores on blind test sets, indicating that our corpus successfully solves the data scarcity problem. As expected, F1 scores of multilingual transformers are close due to similiar range of parameters that are fine-tuned for token classification based DC. Performance across disfluency types is described in table 6. We observe that the model performs poorly for fluent sentences in English and French due to fewer samples in the test set. In Hindi and German, false starts are the most difficult disfluencies to correct. Further examination reveals that our model often under-corrects longer false starts, especially in the presence of other disfluencies like fillers. Our model performs robustly across all domain types of utterances. Readers are strongly urged to refer to Appendix C for domain-level analysis of DC results. Although existing DC datasets are of diverse domains, our experiments show that models trained on DISCO outperform test sets from other DC datasets (Appendix D). Table 7 discusses some important case studies containing inferences produced by our model on unseen test sentences. Our models accurately correct complex cases such as multiple disfluencies and change in thought/speech plan. However, it also over-corrects clarifications and treats it as a correction to remove significant meaning. We observe that multi-words, a critical linguistic phenomenon in Indian languages, are often over-corrected to simplify the text. More case studies appear in Appendix E along with gloss and English transliteration for non-roman text. ### Impact of Disfluency Correction on Downstream Machine Translation We use a strong baseline NLLB MT system [10] to compare English translations produced with and without disfluency removal (Appendix F) to understand the impact of DC models on an important downstream NLP task. The Ground Truth (GT) translations for Hindi-English and German-English were created by respective language experts. We use the sacrebleu package [20] to calculate BLEU scores between: T1 (Translations without DC) and GT; T2 (Translations with Automatic DC) and GT; and T3 (Translations with Human DC) and GT. Table 10 summarises our results in both language pairs. 
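The BLEU comparison described above can be computed with the sacrebleu package. The helper below is illustrative (the list names are assumptions), scoring each translation setup against the expert ground-truth translations.

```python
import sacrebleu

def bleu_against_gt(hypotheses, ground_truth):
    """Corpus BLEU of a list of system translations against the expert
    ground-truth (GT) translations, as used for Tables 8 and 10."""
    return sacrebleu.corpus_bleu(hypotheses, [ground_truth]).score

# T1: translations of the raw disfluent sources (no DC)
# T2: translations after automatic DC with our best model (ADC)
# T3: translations after human DC (HDC)
# for name, hyps in [("No DC", t1), ("ADC", t2), ("HDC", t3)]:
#     print(name, bleu_against_gt(hyps, gt))
```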
DC improves downstream MT for Hindi-English by \begin{table} \begin{tabular}{c c c c c} \hline \hline **Type** & **En** & **Hi** & **De** & **Fr** \\ \hline Filler & 99.78 & 100.00 & 99.37 & 98.72 \\ Repetition & 98.48 & 92.81 & 97.13 & 98.58 \\ Correction & 91.72 & 91.54 & 98.48 & 93.94 \\ False Start & 97.19 & 85.04 & 90.91 & 98.00 \\ Fluent* & 66.67 & 91.03 & 96.30 & 57.14 \\ \hline \hline \end{tabular} \end{table} Table 6: F1 scores for every disfluency type in each language using our best DC model. *We report F1 score of fluent class here because for fluent class, true positives is equal to zero. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & **En** & **Hi** & **De** & **Fr** \\ \hline CRF & 59.15 & 35.70 & 53.01 & 43.60 \\ RNN & 83.28 & 69.96 & 81.11 & 82.50 \\ mBERT & 96.94 & 88.08 & 93.70 & **92.97** \\ XLMR & 95.95 & 91.31 & **95.89** & 92.35 \\ MuRIL & 96.65 & **94.29** & 92.00 & 92.48 \\ Seq-GAN- & **97.55** & 93.71 & 88.95 & 86.23 \\ BERT & & & & \\ \hline \hline \end{tabular} \end{table} Table 5: Results in DC for each language. For Seq-GAN-BERT, we report best results with helper datasets (section 3.5): English (En): LARD, Hindi (Hi): Samanantar, German (De): GermEval 2014, French (Fr): DiaBLa. Since we are the first to create DC corpus in German and French and with existing English and Hindi datasets being vastly different in its properties and sources, we do not provide zero-shot metrics of our best models on other datasets. **6.44** points and for German-English by **4.85** points in BLEU score. We also observe that human DC outperforms automatic DC, highlighting scope of improvement of DC models. Table 8 shows that translation BLEU score improves for every disfluency type after DC. Moreover, in repetition and false starts, the automatic removal of DC slightly outperforms Human DC. The most significant improvement in BLEU score is observed in fillers, with the lowest improvement in corrections. Our models also improve the scores across all domains, as described in Appendix F. We also compared the downstream MT improvement caused by a large transformer (MuRIL) trained separately on both DISCO and (Kundu et al., 2022) for Hindi DC followed by downstream Hindi - English translation using NLLB. Table 11 highlights that MuRIL trained on DISCO leads to a **3.66** BLEU score improvement relative to the baseline. #### 5.2.1 Case Study: Examining a few translations with and without disfluency correction Table 9 discusses some interesting cases where disfluencies impact the English translation of Hindi and German sentences. Although removing disfluencies in most cases helps MT, there are few examples where DC leads to worse output. ## 6 Conclusion and Future Work This paper introduces the DISCO dataset for disfluency correction in four widely spoken languages: English, Hindi, German and French. Our work highlights the importance of large-scale projects in NLP that scale the amount of labeled data available. Spoken interactions between humans and AI agents are riddled with disfluencies. 
Eliminating disfluencies not only improves readability of utterances but \begin{table} \begin{tabular}{l l l l l} \hline \hline **Lang** & **Type** & **Disfluent Sentence** & **Prediction** & **Comments** \\ \hline En & C & is etrading i mean ameri- & is ameritrade the top trading app currently & Correctly identifies change in main content \\ & R & add a 20minute 2mile walk walk to myfitnesspal & add a 2mile walk to myfitnesspal & Correctly removes repeated word _walk_ but mistakes _2mile_ as a correction to _20minute_ \\ De & R,C & trage die ubung die & trage die laufübung ein & Model correctly detects correction of _übung_ (exercise) to _laufübung_ (running exercise) & Model fails to remove false started phrase _zeig mir wie_ as it has an independent meaning \\ Fr & F,R & envoyez mon message euuh & envoyez mon message audio & Model correctly identifies that the user repeats the phrase to denote an audio message. \\ & F & enregistrer une video hd & enregistrer une video & Model incorrectly thinks _hd_ is a avec instagram & disfluent term and not the abbreviation of High Definition \\ \hline \hline \end{tabular} \end{table} Table 7: Inference examples from DC models; En-English, De-German, Fr-French, Hi-Hindi; F-Filler, R-Repetition, C-Correction and FS-False Start. \begin{table} \begin{tabular}{l l l l l l} \hline \hline **Lang** & **Setup** & **F** & **R** & **C** & **FS** \\ **Pair** & & & & & \\ \hline Hi-En & No DC & 31.79 & 41.45 & 29.37 & 38.67 \\ & ADC & 42.34 & 47.92 & 35.94 & 45.54 \\ & HDC & 42.96 & 47.69 & 38.71 & 45.23 \\ De- & No DC & 37.95 & 39.36 & 38.95 & 57.31 \\ En & & ADC & 51.20 & 48.50 & 50.76 & 67.35 \\ & HDC & 51.20 & 48.63 & 51.32 & 68.40 \\ \hline \hline \end{tabular} \end{table} Table 8: Effect of each disfluency type and its removal on downstream MT for Hindi-English (Hi-En) and German-English (De-En) language pairs. F-Filler, R-Repetition, C-Correction and FS-False Start. also leads to better downstream translations. Our dataset, which consists of roughly 3000 parallel disfluent-fluent sentences in each language, significantly reduces the data scarcity problem in DC. This allows training of large transformer models to correct spoken disfluencies from written transcripts with high accuracy. Lack of conversational translation datasets has led to most MT systems trained on fluent text. Our experiments show that such models if used in conversational settings do not perform well. By adding a DC model in the pipeline, which is often a smaller model with an incremental increase in latency, one can improve the downstream translations outputted by an MT system that does not adjust to conversational phenomena. Moreover, our dataset in German - English and Hindi - English can also be used to finetune conversational MT models. Future work lies in experimenting with better ML models for sequence tagging-based DC supporting multilingual training. These should also incorporate linguistic features like reparamdum, interregnum and repair. Multimodal DC presents a promising direction as it has the capability of using both speech and text features for correction tasks Zhang et al. (2022). Additionally, trained DC models must be evaluated using diverse samples from various domains and dialects. Special efforts must be made to collect disfluent speech transcripts to be annotated and trained for DC in other low-resource languages. ## 7 Acknowledgements We would like to thank the anonymous reviewers and area chairs for their suggestions to strengthen the paper. 
This work was done as part of the Bahubhashak Pilot Project on Speech to Speech Machine Translation under the umbrella of National Language Technology Mission of Ministry of Electronics and IT, Govt. of India. We would also like to thank the project managers, internal and external language translators at the Computation for Indian Language Technology (CFILT) IIT Bombay. ## Limitations Our work consists of two limitations. Firstly, since our annotation process consisted of one annotator for each language, we could not report metrics such as inter-annotator agreement or Cohen's kappa to prove the validity of our dataset. However, since DC is a relatively more straightforward task and consists of only removing disfluent words from \begin{table} \begin{tabular}{c c c} \hline \hline **Setup** & **Hi-En** & **De-En** \\ \hline MT without DC & 36.19 & 37.08 \\ MT with ADC & 42.63 & 41.93 \\ MT with HDC & 43.52 & 42.10 \\ \hline \hline \end{tabular} \end{table} Table 10: Effect of DC on downstream MT for Hindi-English (Hi-En) and German-English (De-En) language pairs. ADC: Automatic Disfluency Correction, HDC: Human Disfluency Correction \begin{table} \begin{tabular}{l l l l l} \hline \hline **Lang** & **Disfluent** & **Predicted** & **Translations** & **Observations** \\ **Pair** & **Sentence** & **Fluent Sentence (ADC)** & & \\ \hline De- & 30 nein & 50 & 50 minuten & **T1:** 30 no 50 minutes start jogging & Correction from 30 to 50 \\ En & minuten & joggen & **T2:** Start running for 50 minutes & minutes is identified by \\ & joggen & & **T3:** Start jogging for 50 minutes. & DC which leads to fluent \\ & starten & & & translations T2 and T3 \\ & mache & eine & **T1:** So make a wide angle video & ADC mistakenly \\ & weitwinkel & video & **T2:** Make a video & moves both utterances of \\ & also & eine & & **T3:** Make a wide angle video & _weitwinkel_ (wide angle) \\ & weitwinkel & & & leading to downstream \\ & video & & & translation error \\ \hline \hline \end{tabular} \end{table} Table 9: Examining some examples where disfluencies impact machine translation output for German-English (De-En) language pair \begin{table} \begin{tabular}{c c c} \hline \hline **Model (Dataset)** & **BLEU Score** \\ \hline MuRIL (DISCO) & 42.63 \\ MuRIL Kundu et al. (2022) & 38.97 \\ \hline \hline \end{tabular} \end{table} Table 11: Comparing the performance of DC systems trained on different datasets on the Hindi - English DISCO MT improvement task when used with a state-of-the-art MT system (NLLB) spoken utterances, the authors were able to verify many samples as a part of their analysis. Moreover, the structure of disfluencies helps us recognize disfluency types easily. We have also provided a few flagged cases where annotators discussed their queries with us and how we resolved them. Secondly, we do not compare trained models on DISCO with other datasets due to varied domain of existing datasets. We found that existing datasets like Switchboard (Godfrey et al., 1992), LARD (Passali et al., 2022) and Kundu et al. (2022) all consisted of utterances from very diverse data sources. However we include experiments in Appendix D that highlight the robustness of models trained on DISCO. ## Ethics Statement This work publishes a large scale human annotated dataset for disfluency correction in 4 Indo-European languages - English, Hindi, German and French. We have taken all steps to ensure that the data is collected and annotated using all ethical means. The source sentences of our dataset are extracted from Goel et al. 
(2023) which release the data using the CC by 4.0 license, allowing us to remix, transform, and build upon the material for any purpose. We also follow a stringent data annotation protocol with consent from the annotators and ensuring they are aware of the risks associated with data creation. We also mention the compensation paid to them for their contribution in section 3.3. Since this project is not sponsored by a federal body, we do not use the IRB approval for our work. However, attention is paid to the quality of our dataset with flagged cases discussed extensively with annotators to ensure appropriate resolution (Appendix A). A thorough and extensive analysis of our corpus is performed, details of which are provided in section 3. All datasets used in conjunction with our corpus are open-source and cited appropriately in section 3.5. We understand that the dataset might have some mistakes, and we will continuously work on monitoring and resolving such issues once the corpus is published for open-source research. Our robust results across domains and types of sentences ensure that changes to the dataset do not pose any technical issues or risks to both developers and model users.
2307.15857
Parameter identifiability in PDE models of fluorescence recovery after photobleaching
Identifying unique parameters for mathematical models describing biological data can be challenging and often impossible. Parameter identifiability for partial differential equations models in cell biology is especially difficult given that many established _in vivo_ measurements of protein dynamics average out the spatial dimensions. Here, we are motivated by recent experiments on the binding dynamics of the RNA-binding protein PTBP3 in RNP granules of frog oocytes based on fluorescence recovery after photobleaching (FRAP) measurements. FRAP is a widely-used experimental technique for probing protein dynamics in living cells, and is often modeled using simple reaction-diffusion models of the protein dynamics. We show that current methods of structural and practical parameter identifiability provide limited insights into the identifiability of kinetic parameters for these PDE models and spatially-averaged FRAP data. We thus propose a pipeline for assessing parameter identifiability and for learning parameter combinations based on re-parameterization and profile likelihood analysis. We show that this method is able to recover parameter combinations for synthetic FRAP datasets and investigate its application to real experimental data.
Maria-Veronica Ciocanel, Lee Ding, Lucas Mastromatteo, Sarah Reichheld, Sarah Cabral, Kimberly Mowry, Bjorn Sandstede
2023-07-29T01:21:02Z
http://arxiv.org/abs/2307.15857v2
# Parameter identifiability in PDE models of fluorescence recovery after photobleaching ###### Abstract Identifying unique parameters for mathematical models describing biological data can be challenging and often impossible. Parameter identifiability for partial differential equations models in cell biology is especially difficult given that many established _in vivo_ measurements of protein dynamics average out the spatial dimensions. Here, we are motivated by recent experiments on the binding dynamics of the RNA-binding protein PTBP3 based on fluorescence recovery after photobleaching (FRAP) measurements in RNP granules of frog oocytes. We consider a simple reaction-diffusion model of the protein dynamics, and show the limitations of current methods of structural and practical parameter identifiability for this model and data. We propose a pipeline for assessing parameter identifiability and for learning parameter combinations based on re-parameterization and profile likelihoods analysis. We show that this method is able to recover parameter combinations for synthetic FRAP datasets and investigate its application to real experimental data. parameter identifiability, partial differential equations, profile likelihood, FRAP, RNA binding proteins ## 1 Introduction Many mathematical models of biological processes aim to test relevant biological mechanisms, which are characterized using parameters. Estimating the underlying parameters helps connect and validate mathematical models with existing measurements and thus provide insights into mechanistic understanding of the biological process. However, mathematical models can suffer from identifiability issues, meaning that it may not be possible to uniquely determine the model parameters from the available data. Identifiability is thus a crucial problem in parameter estimation, and various approaches from statistics, applied mathematics, and engineering have been devised to address it [1, 2, 3, 4, 5, 6]. Identifiability problems are typically categorized into structural identifiability, which involves issues arising from the model structure alone, and practical identifiability, which involves issues with parameter estimation stemming from the incorporation of real and noisy data [7]. In mathematical biology, many of these approaches have been more extensively tested and used in models of epidemic and disease treatment dynamics or in systems biology models [8]. For example, [6, 9, 10] review theoretical results and algorithms for structural and practical identifiability of linear and nonlinear ordinary differential equations (ODE) models, with applications to disease dynamics and systems biology processes. In the study of macromolecular dynamics inside cells, spatial movement - characterized by diffusion, transport, and binding dynamics - can be significant and has an impact on the parameters that describe a given model. As a result, partial differential equations (PDEs) that incorporate the dynamics of proteins as a function of time and space are often an appropriate modeling framework. However, PDEs present challenges when studying identifiability measures, since these equations have more variables, contain derivatives, and include boundary conditions [8]. Fewer studies have thus dealt with identifiability for PDE models. 
Prior work has shown that parameters in PDE systems that model data obtained from FRAP (fluorescence recovery after photobleaching) experiments in cell biology may not be identifiable, leading to large variations of predicted parameter distributions and thus in potentially spurious predictions of cellular dynamics [11]. Here, we are motivated by questions surrounding protein and RNA dynamics in _Xenopus laevis_ frog oocytes [12, 13]. Protein and RNAs organize into membraneless compartments called ribonucleoprotein (RNP) granules (also called localization bodies or L-bodies) in the developing oocytes. We consider a reaction-diffusion PDE model of FRAP microscopy experiments for the dynamics of an RNA binding protein that is enriched in these granules and investigate the limitations of existing structural and practical identifiability techniques for this model and data. We also propose a custom pipeline for extracting identifiable parameter combinations for the proposed PDE model based on time-series FRAP data. We illustrate the application of this framework for both synthetic and experimental FRAP datasets. Given additional biological information on relevant binding rate parameters, this approach may allow the inference of all individual model parameters. ## 2 Biological motivation and fluorescence microscopy data RNP granules are membraneless compartments containing RNA and other proteins, serving diverse biological functions. In developing _Xenopus laevis_ oocytes, maternal mRNAs are packaged into large RNP granules that localize to specific subcellular locations, in a process that is required for embryonic patterning [12]. The assembly of RNAs into these RNP granules (termed localization bodies or L-bodies) requires the interaction of RNAs with RNA binding proteins (RBPs). The data suggest that the protein dynamics are influenced by the strength and number of interactions of RBPs with the non-dynamic RNA in L-bodies [13]. An example of a multivalent RNA-binding protein is PTBP3, which is highly co-localized with L-bodies in _Xenopus laevis_ oocytes [13]. PTBP3 has four domains (termed RRM1, RRM2, RRM3, and RRM4) that can bind to RNA, making it an ideal model for studying the strength of interactions within L-bodies. In particular, experimental manipulations in this system can generate PTBP3 RNA-binding mutants, where the ability of one or more RNA-binding domains to bind to RNA is abolished [13]. Quantifying the binding of PTBP3, and its mutants, to RNA would therefore be useful in contributing to our understanding of how protein dynamics are regulated in L-bodies and other RNP granules. An important experimental technique for assessing protein dynamics _in vivo_ is fluorescence recovery after photobleaching (FRAP). FRAP is a well-established approach to studying the binding and diffusion of molecules in cells [14]. This microscopy experiment relies on bleaching a small region in a cell expressing a fluorescent protein or nucleic acid, and quantifying the recovery of fluorescence in that bleach spot over time. Its output therefore averages out any spatial information and provides a time-series dataset consisting of the amount of fluorescence in the bleach spot over time. In this work, we use the FRAP experimental measurements in [13] in order to determine parameter regimes of interest. These FRAP datasets consist of fluorescence recovery curves that are adjusted to correct for photobleaching during image acquisition, as we previously outlined in [15]. 
In these experiments, the fluorescence in the bleach spot (a square with side \(l=10\mu\)m) is recorded at 5-second intervals for a total of \(500\) seconds. ## 3 Mathematical modeling of FRAP ### PDE model of protein dynamics We model the dynamics of PTBP3 using a system of linear reaction-diffusion PDEs. The variables we study correspond to concentrations of PTBP3 in different dynamical states: \(f(x,y,t)\) denotes the concentration of free protein and \(c(x,y,t)\) refers to the concentration of bound complexes at location \((x,y)\) and time \(t\). We assume PTBP3 can transition between the diffusing and stationary states, so that the dynamics is described by the PDE system: \[\frac{\partial f}{\partial t} =D\Delta f-\beta_{2}f+\beta_{1}c\,,\] \[\frac{\partial c}{\partial t} =\beta_{2}f-\beta_{1}c\,, \tag{1}\] where \(D\) denotes the diffusion constant in the diffusing state, \(\beta_{1}\) is the rate of transition from the stationary to the diffusing state, and \(\beta_{2}\) is the rate of transition from the diffusing to the stationary state. This model is equivalent to the reaction-diffusion system we previously studied in [11] for non-localizing RNA dynamics and has also been previously used and analyzed in other works, including [16]. Our goal is to estimate the reaction rate parameters \(\beta_{1},\beta_{2}\) and the diffusion constant \(D\) from experimental FRAP data. A key assumption underlying this model is that the binding interactions of PTBP3 involve a single binding state. Four binding domains have been identified for PTBP3, of which two were shown to bind to the non-dynamic L-body RNA [13]. Mathematical models involving multiple independent binding sites are more challenging to evaluate due to the increased dimension of the parameter space, and generally show similar FRAP behaviors [16]. We therefore proceed with the simplifying assumption of a single binding site for the PTBP3 reaction. We comment on the limitations of this assumption in the Discussion. ### Postbleach intensity profile model To determine initial conditions for the concentrations of PTBP3 in the PDE model (1), we consider a model of the experimental FRAP postbleach intensity profiles on the focal plane of the fluorescence distribution [11]. The photobleaching process in FRAP is usually assumed to be an irreversible first-order reaction of the form \[\frac{\partial C_{b}}{\partial t}(x,y,t)=-\alpha K(x,y)C_{b}(x,y,t) \tag{2}\] for the fluorophore concentration \(C_{b}(x,y,t)\), where \(\alpha\) is a bleaching parameter and \(K(x,y)\) is the effective bleaching intensity distribution. Since the initial condition of model (1) corresponds to the spatial concentration of fluorophores at the first postbleach time, we therefore seek: \[C_{b}(x,y,0)=C_{0}e^{-\alpha K(x,y)}\,. \tag{3}\] The FRAP experiments in [13] use square bleach regions of interest (ROIs). We therefore adapt the approach in [17], which considers a rectangular FRAP bleach spot. The effective bleaching intensity distribution \(K(x,y)\) is calculated as the convolution of the bleach geometry \(B(x,y)\) and the time-averaged bleaching intensity distribution \(\langle I_{b}(x-x^{\prime},y-y^{\prime},t)\rangle\): \[K(x,y)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}B(x^{\prime},y^{\prime}) \langle I_{b}(x-x^{\prime},y-y^{\prime},t)\rangle dx^{\prime}dy^{\prime}\,. 
\tag{4}\] We assume a square photobleach area with side length \(l\) and a Gaussian photobleaching intensity distribution [17]: \[B(x,y) =\begin{cases}1,&\text{if }|x|<l/2\text{ and }|y|<l/2\\ 0,&\text{otherwise}\end{cases}\,, \tag{5}\] \[\langle I_{b}(x,y,t)\rangle =I_{0}e^{-2\frac{x^{2}+y^{2}}{r^{2}}}\,, \tag{6}\] where \(r\) is the effective radius of the distribution. We therefore obtain for the effective bleaching intensity distribution: \[K(x,y) =I_{0}\int_{-l/2}^{l/2}e^{-\frac{(x-x^{\prime})^{2}}{r^{2}}}dx^{ \prime}\int_{-l/2}^{l/2}e^{-\frac{(y-y^{\prime})^{2}}{r^{2}}}dy^{\prime}\,, \tag{7}\] \[=I_{0}\int_{(x-l/2)/r}^{(x+l/2)/r}e^{-u^{2}}du\int_{(y-l/2)/r}^{( y+l/2)/r}e^{-v^{2}}dv\,,\] (8) \[=\tilde{I}_{0}\left[\operatorname{erf}\left(\frac{x+l/2}{r} \right)-\operatorname{erf}\left(\frac{x-l/2}{r}\right)\right]\left[ \operatorname{erf}\left(\frac{y+l/2}{r}\right)-\operatorname{erf}\left(\frac{ y-l/2}{r}\right)\right]\,. \tag{9}\] Plugging this into (3) for the initial fluorophore concentration yields: \[C_{b}(x,y)=C_{0}e^{-\tilde{\alpha}\left[\mathrm{erf}\left(\frac{x+1/2}{r}\right)- \mathrm{erf}\left(\frac{x-1/2}{r}\right)\right]\left[\mathrm{erf}\left(\frac{x +1/2}{r}\right)-\mathrm{erf}\left(\frac{x-1/2}{r}\right)\right]}\,. \tag{10}\] Since the experimental postbleach profiles show some asymmetry along the two spatial dimensions (see Figure 1A), we extract fluorescence profiles \(C_{b}(x)\) and \(C_{b}(y)\) in the \(x\) and \(y\) directions from the postbleach intensity data and fit them to expressions of the form: \[C_{b}(x) =C_{x}e^{-\alpha_{x}\left[\mathrm{erf}\left(\frac{x+1/2}{r_{x}} \right)-\mathrm{erf}\left(\frac{x-1/2}{r_{x}}\right)\right]}\,, \tag{11}\] \[C_{b}(y) =C_{y}e^{-\alpha_{y}\left[\mathrm{erf}\left(\frac{x+1/2}{r_{y}} \right)-\mathrm{erf}\left(\frac{y-1/2}{r_{y}}\right)\right]}\,. \tag{12}\] In particular, we estimate the parameters \(r_{x}\) and \(\alpha_{x}\) by fitting the fluorescence profile \(C_{b}(x)\) to equation (11) and parameters \(r_{y}\) and \(\alpha_{y}\) by fitting the fluorescence profile \(C_{b}(y)\) to equation (12) using standard nonlinear least-squares estimation in Matlab using the function nlinfit. Here we use \(l=10\)\(\mu\)m, consistent with the experiments in [13]. We illustrate sample postbleach intensity profiles and the corresponding fitted curves in Figure 1B. Since the estimated \(\alpha_{x}\) and \(\alpha_{y}\) parameter values are very similar for all datasets considered, we use the following final form for the initial fluorophore concentration: \[C_{b}(x,y)\sim C_{0}e^{-\tilde{\alpha}\left[\mathrm{erf}\left(\frac{x+1/2}{r_ {x}}\right)-\mathrm{erf}\left(\frac{x-1/2}{r_{x}}\right)\right]\left[\mathrm{ erf}\left(\frac{x+1/2}{r_{y}}\right)-\mathrm{erf}\left(\frac{x-1/2}{r_{y}} \right)\right]}\,. \tag{13}\] The remaining parameter \(C_{0}\) is determined separately, together with the parameters of interest that describe the protein dynamics. Finally, the initial conditions for the model equations (1) are given by: \[f(x,y,t=0) =pC_{b}(x,y)\,,\] \[c(x,y,t=0) =(1-p)C_{b}(x,y)\,, \tag{14}\] where the initial postbleach profile \(C_{b}(x,y)\) is given in (13) and the parameter \(p\in[0,1]\) denotes the initial fraction of PTBP3 protein in the diffusing state, which we will also determine from the data as described below. As shown in [11], parameter estimation for FRAP experiments is sensitive to the initial condition given by the postbleach profile. 
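The construction of this initial condition is straightforward to script. Below is a minimal NumPy/SciPy sketch (not the MATLAB routines used in the paper) that evaluates the postbleach profile of equation (13) on a uniform grid and splits it into the diffusing and bound populations as in equation (14); the domain size, grid resolution, and the values of \(C_{0}\), \(\tilde{\alpha}\), \(r_{x}\), \(r_{y}\), and \(p\) shown are illustrative placeholders rather than fitted values.

```python
import numpy as np
from scipy.special import erf

def postbleach_profile(X, Y, C0, alpha_tilde, rx, ry, l=10.0):
    """Initial fluorophore concentration C_b(x, y) from equation (13)."""
    gx = erf((X + l / 2) / rx) - erf((X - l / 2) / rx)
    gy = erf((Y + l / 2) / ry) - erf((Y - l / 2) / ry)
    return C0 * np.exp(-alpha_tilde * gx * gy)

# Illustrative grid and parameter values (placeholders, not fitted estimates)
L_dom, N = 40.0, 128                       # domain side (microns) and grid size
x = np.linspace(-L_dom / 2, L_dom / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
Cb = postbleach_profile(X, Y, C0=1.5, alpha_tilde=1.0, rx=2.0, ry=2.5)

# Split into the diffusing and bound populations as in equation (14)
p = 0.5
f0 = p * Cb
c0 = (1.0 - p) * Cb
```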
We therefore use these data-informed initial conditions for all the studies carried out in this work. ### Deterministic parameter estimation In testing the techniques proposed here, we consider both synthetic and experimental FRAP data. The experimental fluorescence intensity data is collected in [13] at every \(5\)s intervals up to a total time of \(500\)s. We adjust the microscopy Figure 1: A) An image of the vegetal cytoplasm of a _Xenopus laevis_ oocyte expressing fluorescently-labeled PTBP3 (red) in L-bodies is shown, with a \(10\)\(\mu\)m photoleach square ROI. Yellow dashed lines show sample extraction of the fluorescence profiles \(C_{b}(x)\) and \(C_{b}(y)\) in the \(x\) and \(y\) directions from the postbleach intensity data. B) Fitted fluorescence postbleach profiles along the \(x\) and \(y\) directions. data by correcting for background fluorescence and dividing the resulting fluorescence recovery by the intensity of a neighboring ROI for each time point, as we previously described in [15]. We denote the real FRAP data by \(\mathrm{FRAP}_{\mathrm{true}}(t)\). The corresponding quantity from the FRAP model described in Section 3.1 is then denoted by \(\mathrm{FRAP}(t,\mathbf{\theta})\) and calculated as \[\mathrm{FRAP}(t,\mathbf{\theta})=\int_{-l/2}^{l/2}\int_{-l/2}^{l/2}(f+c)(x,y,t,\bm {\theta})dxdy\,. \tag{15}\] Here, \(\mathbf{\theta}\) is the vector of parameters of interest and \(l\) is the side of the square bleach ROI. We let \(\mathbf{\theta}=(D,\beta_{1},\beta_{2},p,C_{0})\) and note that \(D,\beta_{1},\beta_{2}\) are kinetic parameters describing the dynamics of PTBP3 proteins (equations (1)), while \(p\) and \(C_{0}\) are parameters that describe the initial postbleach profile in each protein population (equations (13) and (14)). As in [11], we numerically integrate equations (1) using an efficient exponential time-differencing fourth-order Runge-Kutta scheme [18, 19] for time integration coupled with Fourier spectral methods for space discretization to solve for \(\mathrm{FRAP}(t,\mathbf{\theta})\). We then use the MATLAB optimization routine lsqnonlin to determine the parameter set that minimizes the \(L^{2}\)-norm difference between the true and model FRAP curves: \[\hat{\mathbf{\theta}}=\min_{\mathbf{\theta}}\|\mathrm{FRAP}_{\mathrm{true}}(t)-\mathrm{ FRAP}(t,\mathbf{\theta})\|^{2}\,. \tag{16}\] We previously found that the initial guesses for parameters describing FRAP dynamics can be key in ensuring convergence in deterministic parameter estimation for this type of data [11]. As in prior work, we thus carry out parameter sweeps that sample through values of \(D,\beta_{1},\beta_{2},p\), while \(C_{0}\) is kept fixed throughout the sweeps as it is informed from the postbleach intensity data. After evaluating the \(L^{2}\)-norm difference between the FRAP curves generated with each parameter combination and the true data, we choose the parameter sets that yield the smallest differences as initial guesses for the optimization routine to estimate \(\mathbf{\theta}=(D,\beta_{1},\beta_{2},p,C_{0})\). We apply this framework to FRAP recovery data from experiments in stage II oocytes that test the dynamics and interactions of PTBP3 with specific RNA Recognition Motifs (RRMs) in L-bodies [13]. In particular, we apply deterministic parameter estimation to experiments with wild-type PTBP3 (WT, Set 1), single RRM mutant PTBP3 (mut3 in [13], Set 2), and double RRM mutant PTBP3 (mut34 in [13], Set 3). 
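As a complement to the description above, the following sketch illustrates one way to implement the forward model and the least-squares fit of equation (16) in Python. Because model (1) is linear, each Fourier mode of the periodic spectral discretization satisfies a constant-coefficient \(2\times 2\) ODE that can be propagated exactly with a matrix exponential; this is a transparent (if slower) alternative to the ETDRK4 time stepping used in the paper. For brevity only the kinetic parameters \((D,\beta_{1},\beta_{2})\) are fitted here, with the initial condition (and hence \(p\) and \(C_{0}\)) supplied through `f0` and `c0`; the function names and grids are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import least_squares

def frap_curve(theta, tgrid, f0, c0, x, l=10.0):
    """FRAP(t) from equation (15) for the linear model (1) on a periodic grid.

    theta = (D, beta1, beta2). Each Fourier mode obeys a constant-coefficient
    2x2 ODE and is propagated exactly with a matrix exponential (a transparent
    but slow alternative to ETDRK4; vectorizing over modes is straightforward)."""
    D, b1, b2 = theta
    N, dx = len(x), x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    k2 = KX**2 + KY**2
    fh, ch = np.fft.fft2(f0), np.fft.fft2(c0)
    roi = np.logical_and.outer(np.abs(x) <= l / 2, np.abs(x) <= l / 2)

    frap = np.zeros(len(tgrid))
    for j, t in enumerate(tgrid):
        fh_t, ch_t = np.empty_like(fh), np.empty_like(ch)
        for idx in np.ndindex(N, N):
            A = np.array([[-D * k2[idx] - b2, b1], [b2, -b1]])
            fh_t[idx], ch_t[idx] = expm(A * t) @ np.array([fh[idx], ch[idx]])
        total = np.real(np.fft.ifft2(fh_t) + np.fft.ifft2(ch_t))
        frap[j] = np.sum(total[roi]) * dx * dx   # integral of f + c over the bleach ROI
    return frap

def fit_rates(frap_data, tgrid, f0, c0, x, theta_guess):
    """Least-squares fit in the spirit of equation (16), over log-parameters."""
    resid = lambda logth: frap_curve(np.exp(logth), tgrid, f0, c0, x) - frap_data
    sol = least_squares(resid, np.log(np.asarray(theta_guess, dtype=float)))
    return np.exp(sol.x)
```

In practice, `fit_rates` would be called from several initial guesses selected by the parameter sweeps described above, since convergence is sensitive to the starting point.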
The estimated kinetic parameters for the dynamics of PTBP3 are given in Table 1 for several FRAP datasets from [13]. It remains challenging, however, to determine the level of confidence we can place in the results of the deterministic parameter estimation. Given the space-averaged FRAP data, it is very likely that some parameters of the spatio-temporal PDE model are not identifiable. As we have previously observed in [11], while some parameters may show consistency within each wild-type or mutant setting, there is still wide variability, especially in the ranges of the reaction rates \(\beta_{1}\) and \(\beta_{2}\). In the following, we thus focus on synthetic data generated using PDE model (1) in order to investigate parameter identifiability of given FRAP data. In generating synthetic FRAP data, we consider three parameter regimes roughly inspired from the results of the parameter estimation procedure for the three wild-type and mutant settings (Sets 1-3). In addition, we consider an effective diffusion parameter regime (Set 4) as previously studied in [16] for equations (1). In this regime, the reaction dynamics are much faster than diffusion, leading to rapid local equilibrium of the reaction process. This leads to FRAP recovery curves which can be characterized by the single parameter combination \(D_{\mathrm{eff}}=\frac{D}{1+\beta_{2}/\beta_{1}}\) termed the _effective diffusion coefficient_ in [16]. We provide these parameter regimes in Table 2. These parameter values are used to generate synthetic FRAP data using model (1), which we further use to assess parameter identifiability using established techniques in Section 5 and for benchmarking our proposed method in Section 6. \begin{table} \begin{tabular}{l l l l l l l} \hline \hline Cell & PTBP3 Type & \(D\) (\(\mu\)m\({}^{2}\)/s) & \(\beta_{1}\) (1/s) & \(\beta_{2}\) (1/s) & \(p\) & \(C_{0}\) \\ \hline 1 & WT & \(0.26\) & \(7.6\times 10^{-5}\) & \(9.6\times 10^{-9}\) & 0.75 & 2.1 \\ 2 & WT & \(0.22\) & \(3.5\times 10^{-3}\) & \(2.2\times 10^{-2}\) & 0.66 & 1.15 \\ 3 & WT & \(0.84\) & \(1.0\times 10^{-3}\) & \(3.1\times 10^{-5}\) & 0.92 & 0.64 \\ 1 & mut3 & \(0.54\) & \(9.4\times 10^{-5}\) & \(1.1\times 10^{-9}\) & 0.83 & 2.7 \\ 2 & mut3 & \(0.56\) & \(9.9\times 10^{1}\) & \(7.2\times 10^{-1}\) & 0.23 & 0.5 \\ 3 & mut3 & \(1.6\) & \(5.5\times 10^{-3}\) & \(4.6\times 10^{-4}\) & 0.31 & 0.79 \\ 1 & mut34 & \(1.93\) & \(3.1\times 10^{-5}\) & \(5.8\times 10^{-10}\) & 0.73 & 2.5 \\ 2 & mut34 & \(1.41\) & \(2.5\times 10^{-5}\) & \(4.7\times 10^{-8}\) & 0.82 & 3.2 \\ 3 & mut34 & \(6.2\) & \(9.6\times 10^{-2}\) & \(1.0\times 10^{-1}\) & 0.24 & 0.85 \\ \hline \hline \end{tabular} \end{table} Table 1: Results of deterministic parameter estimation for several wild-type (WT) and mutant PTBP3 FRAP datasets from [13]. ## 4 Methods for practical and structural parameter identifiability Our goal is to investigate the identifiability of kinetic parameters of FRAP models. We begin by reviewing established methods for practical and structural parameter identifiability in differential equations models. Throughout this work, the parameter learning and identifiability methods considered will apply more generally to a model of the form \[\frac{\partial\mathbf{x}}{\partial t}=f(\mathbf{x},t;\mathbf{\theta})\,, \tag{17}\] where \(\mathbf{\theta}=(\theta_{1},\theta_{2},...,\theta_{n})\) is the vector of model parameters of interest and the model output is given by \[y=g(\mathbf{x},t;\mathbf{\theta})\,. 
\tag{18}\] For our application, model (17) will consist of the partial differential equations in (1), and the model output \(y\) will consist of the time-series FRAP measurements defined in (15). In this section, we provide a brief overview of established methods of parameter identifiability for models and output as in equations (17) and (18). As commonly done in studies of identifiability analysis, we distinguish between structural identifiability, which considers issues with identifying parameters based on the model structure alone, and practical identifiability, which considers issues that stem from identification based on real and potentially noisy data [7]. ### Structural identifiability An established technique for assessing the local structural identifiability of a model consists of constructing the Fisher Information Matrix (FIM). This matrix, denoted by \(F\), captures the amount of information contained in the model output \(y(\mathbf{t})\) about the set of parameters \(\mathbf{\theta}\)[7]. Here we assume that the data measurements are available at times \(\mathbf{t}=(t_{1},t_{2},...,t_{m})\). Based on the concept of sensitivity identifiability introduced in [20] and reviewed in [9], this technique requires calculating the sensitivity matrix: \[X=\left(\frac{\partial y}{\partial\theta_{1}}(\mathbf{t};\mathbf{\theta^{0}})\quad \frac{\partial y}{\partial\theta_{2}}(\mathbf{t};\mathbf{\theta^{0}})\quad\ldots\quad \frac{\partial y}{\partial\theta_{n}}(\mathbf{t};\mathbf{\theta^{0}})\right)\,,\] where \(\mathbf{\theta^{0}}\) is a set of baseline parameters around which the sensitivities are evaluated. The Fisher Information Matrix is then given by the symmetric \(n\times n\) matrix \(F=X^{T}X\). Studies [9, 20] show that identifiability of the parameter set \(\mathbf{\theta}\) requires nonsingularity of matrix \(F\). In practice, the parameter sensitivities \(\frac{\partial y}{\partial\theta_{i}}(\mathbf{t};\mathbf{\theta^{0}})\) are approximated numerically, and the parameter set \(\mathbf{\theta}\) is considered unidentifiable when \(\det(F)\) is small [7]. The rank of the matrix \(F\) gives the number of identifiable parameter combinations [7, 9, 21], but the method cannot identify the form of the combinations. More recent studies have combined the FIM method with techniques for practical identifiability or subset selection for ordinary differential equation models to determine subsets of parameters that can be estimated from given data [7, 21]. While FIM reflects local structural identifiability, one framework to assess generic structural identifiability is based on differential algebraic methods. This framework was initially developed for ordinary differential equations models but was recently extended in [8] to age-structured PDE models for disease spread. This approach requires converting the model system to input-output equations consisting of a set of monic polynomial equations expressed in terms of the known model output \(y\) and its derivatives, as well as in terms of rational coefficients depending on the model parameters \(\mathbf{\theta}\)[8]. This work builds on a substitution-based approach as in [22, 23] to eliminate unobserved variables and to obtain a system whose identifiability features are equivalent to those of the original system. Specifically, identifiability is evaluated based on the coefficients of the monomial terms in the reduced system [8]. 
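As an illustration of the FIM construction for a generic output \(y(\boldsymbol{t};\boldsymbol{\theta})\), the sketch below approximates the sensitivity columns by central finite differences (the paper instead derives sensitivity equations for the FRAP model in Section 5.1), normalizes each column by its \(L^{2}\) norm to account for parameter scales, and reports the determinant and rank of \(F=X^{T}X\). The toy output used to exercise the function is a placeholder; any forward model, such as a FRAP curve simulator, can be substituted.

```python
import numpy as np

def fisher_information(model_output, theta0, tgrid, rel_step=1e-4):
    """Sensitivity matrix X (normalized columns) and FIM F = X^T X at theta0."""
    theta0 = np.asarray(theta0, dtype=float)
    cols = []
    for i in range(len(theta0)):
        h = rel_step * max(abs(theta0[i]), 1e-12)
        up, dn = theta0.copy(), theta0.copy()
        up[i] += h
        dn[i] -= h
        s = (model_output(up, tgrid) - model_output(dn, tgrid)) / (2.0 * h)
        cols.append(s / np.linalg.norm(s))       # normalize for parameter scales
    X = np.column_stack(cols)
    F = X.T @ X
    return F, np.linalg.det(F), np.linalg.matrix_rank(F)

# Toy stand-in for the model output y(t; theta); a FRAP simulator can be
# substituted here without changing the function above.
toy = lambda th, t: 1.0 - th[0] * np.exp(-th[1] * t) - (1.0 - th[0]) * np.exp(-th[2] * t)
F, detF, rank = fisher_information(toy, theta0=[0.5, 0.1, 0.01], tgrid=np.linspace(0.0, 500.0, 101))
```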
### Practical identifiability using Bayesian inference A commonly used method for assessing practical identifiability is Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling [24, 25]. As described above, suppose we are interested in observed data \(y\) and model parameter \(\mathbf{\theta}\) \begin{table} \begin{tabular}{l l l l l l l} \hline \hline Parameters & \(D\) (\(\mu\)m\({}^{2}\)/s) & \(\beta_{1}\) (1/s) & \(\beta_{2}\) (1/s) & \(p\) & \(C_{0}\) & \(\det(F)\) \\ \hline Set 1 & \(0.1\) & \(10^{-3}\) & \(10^{-3}\) & 0.5 & 1.5 & \(1.3\times 10^{-4}\) \\ Set 2 & \(1\) & \(10^{-3}\) & \(10^{-4}\) & 0.25 & 0.75 & \(1.0\times 10^{-2}\) \\ Set 3 & \(2\) & \(10^{-5}\) & \(10^{-3}\) & 0.75 & 2 & \(3.8\times 10^{-2}\) \\ Set 4 & \(0.1\) & \(5\) & \(5\) & 0.5 & 1.5 & \(7.7\times 10^{-17}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Set of parameter regimes chosen to generate synthetic FRAP data using model 1. The last column displays the determinant of the Fisher information matrix for each parameter set (Section 4.1 and 5.1). Then, according to Bayes' theorem, \[p(\mathbf{\theta}|y)\propto p(y|\mathbf{\theta})p(\mathbf{\theta}), \tag{19}\] where \(p(\mathbf{\theta})\) denotes the prior distribution of \(\mathbf{\theta}\), \(p(y|\mathbf{\theta})\) denotes the likelihood function, and \(p(\mathbf{\theta}|y)\) denotes the posterior distribution of \(\mathbf{\theta}\). The likelihood function represents the extra information that \(y\) contributes to our understanding of \(\mathbf{\theta}\). In the Bayesian inference approach, we seek to estimate the posterior distribution, which specifies the distribution \(\mathbf{\theta}\) given our knowledge of \(y\) and \(p(\mathbf{\theta})\). Here, we can consider \(\mathbf{\theta}\) to be identifiable if we can estimate a relatively concentrated posterior. We estimate the posterior distributions of parameters in the model using MCMC simulation. Specifically, we use the Delayed Rejection and Adaptive Metropolis (DRAM) MCMC algorithm, a variation of the Metropolis-Hastings MCMC algorithm [26]. A standard Metropolis-Hastings algorithm starts with a Markov Chain at initial position \(\mathbf{\theta}_{i}\) and accepts candidate move \(\mathbf{\theta}_{i+1}\) with probability \(\alpha\), where \[\alpha=\min\biggl{[}1,\frac{p(\mathbf{\theta}_{i+1}|y)}{p(\mathbf{\theta}_{i}|y)} \biggr{]}. \tag{20}\] Proposals in Metropolis-Hastings are sampled from a multivariate normal distribution [25]. The DRAM algorithm has two advantages over the standard Metropolis-Hastings: DRAM incorporates (1) delayed rejection and (2) adaptive Metropolis samplers [26]. After the standard Metropolis-Hastings rejects a candidate move, delayed rejection proposes subsequent moves in lieu of remaining at the same position. With an adaptive Metropolis approach, the proposal distribution of Metropolis-Hastings is based on past samples in the Markov chain. Combined, adaptive Metropolis enhances DRAM's ability to explore the range of good proposal distributions, while delayed rejection improves DRAM's flexibility in its local exploration of the parameter space [26]. Practical identifiability in a Bayesian setting can be determined graphically or through diagnostic statistics. In general, characteristics like poorly converging Markov Chains, label-switching, and multimodal or overly broad distributions indicate poor identifiability [24, 25, 27]. 
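For concreteness, the sketch below implements a plain random-walk Metropolis-Hastings sampler with the acceptance rule (20), a Gaussian log-likelihood as in equation (22), and flat bounded priors on \(\log_{10}\)-parameters. It is a stripped-down stand-in for DRAM: the delayed-rejection and adaptive-proposal refinements are omitted, and the bounds, step size, and parameterization are illustrative choices rather than those of the toolbox used in the paper.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=10_000, step=0.05, seed=0):
    """Random-walk Metropolis-Hastings over log10-parameters (acceptance rule (20))."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, len(theta)))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(len(theta))
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with probability min(1, ratio)
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

def make_log_post(model_output, data, tgrid, sigma, lo, hi):
    """Gaussian log-likelihood as in equation (22), with flat bounded priors."""
    def log_post(log10_theta):
        if np.any(log10_theta < lo) or np.any(log10_theta > hi):
            return -np.inf
        y = model_output(10.0 ** log10_theta, tgrid)
        return -0.5 * np.sum((data - y) ** 2) / sigma**2
    return log_post
```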
### Practical identifiability using profile likelihood analysis Computing the full MCMC posterior distributions for parameters of interest as described in Section 4.2 is known to be computationally expensive, especially for partial differential equations models [25]. An alternative approach to assessing practical parameter identifiability is to carry out a profile likelihood analysis. This method requires setting up the normalized likelihood function \[\mathcal{L}(\mathbf{\theta};y)=\frac{p(\mathbf{\theta};y)}{\sup_{\mathbf{\theta}}p(\mathbf{ \theta};y)}\,,\] where \(p(\mathbf{\theta};y)\) is the likelihood function as in Section 4.2. This normalized likelihood assumes fixed data \(y\) and is a function of the parameters \(\mathbf{\theta}\). We then let \(\mathbf{\theta}=(\psi,\mathbf{\lambda})\), where \(\psi\) is a scalar parameter of interest whose identifiability we are interested in assessing, and \(\mathbf{\lambda}\) is a vector of nuisance parameters. The profile likelihood for the interest parameter \(\psi\) is then given by: \[\mathcal{L}_{p}(\psi;y)=\max_{\mathbf{\lambda}}\mathcal{L}(\psi,\mathbf{\lambda};y)\,. \tag{21}\] In practice, this means that for each value of parameter \(\psi\) chosen from a grid in an appropriate interval, parameters \(\mathbf{\lambda}\) are optimized out. This yields optimal nuisance parameter values \(\lambda^{*}(\psi)\) for each grid value of \(\psi\); see [25]. If the measurement noise is assumed to be normally distributed as \(\epsilon\sim N(0,\sigma^{2})\), then: \[p(y;\psi,\mathbf{\lambda})=\left(\frac{1}{2\pi\sigma^{2}}\right)^{n/2}\exp\left(- \frac{1}{2\sigma^{2}}\|y-y_{\text{sim}}(\psi,\mathbf{\lambda})\|^{2}\right), \tag{22}\] where \(y_{\text{sim}}\) consists of the model solutions at \(n\) time points [25, 28]. The profiling calculation in equation (21) is then equivalent to solving a nonlinear least-squares optimization problem for each grid value of the parameter of interest \(\psi\). The shape of the profile likelihoods can provide rich information about whether parameters can be inferred from measurement data [28], as can be seen in the cartoon in Figure 2. In particular, flat regions of the likelihood profile for a parameter of interest indicate that the parameter is practically or structurally unidentifiable [7, 28]. In this situation and for some applications [22], it is sometimes useful to examine the relationship between the interest parameter and each fitted nuisance parameter, in the flat regions of the profile likelihood. These are called subset profiles and can help uncover the form of potential identifiable combinations of parameters for the given model [22]. Applications of practical and structural parameter identifiability to PDE models for synthetic FRAP data We now turn to applying the parameter identifiability methods outlined in Section 4 for the PDE model (1) describing the dynamics of PTBP3 protein during fluorescence recovery after photobleaching. We consider synthetic FRAP datasets generated using the parameter regimes outlined in Table 2 and determined based on the procedure in Section 3.3. In the notation of Section 4, the relevant model output is \(y(t;\boldsymbol{\theta})=\mathrm{FRAP}(t;\boldsymbol{\theta})\) as defined in equation (15). ### Structural identifiability First, we aim to determine the local structural identifiability of kinetic model parameters \(D,\beta_{1},\beta_{2}\) given completely accurate FRAP data using the Fisher information matrix method described in Section 4.1. 
To construct this matrix, we first calculate the sensitivities of the output with respect to the model parameters. For example, we seek: \[\frac{\partial\mathrm{FRAP}}{\partial D}(t;\boldsymbol{\theta})=\int_{-l/2}^{ l/2}\int_{-l/2}^{l/2}\left(f_{D}+c_{D}\right)(x,y,t;\boldsymbol{\theta})dxdy\] where \(f_{D}=\frac{\partial f}{\partial D}\) and \(c_{D}=\frac{\partial c}{\partial D}\). The partial derivatives of the protein concentrations satisfy the system: \[\frac{\partial f_{D}}{\partial t} =D\Delta f_{D}-\beta_{2}f_{D}+\beta_{1}c_{D}+\Delta f\,,\] \[\frac{\partial c_{D}}{\partial t} =-\beta_{1}c_{D}+\beta_{2}f_{D}\,,\] \[\frac{\partial f_{\beta_{1}}}{\partial t} =D\Delta f_{\beta_{1}}-\beta_{2}f_{\beta_{1}}+\beta_{1}c_{\beta_{ 1}}+c\,,\] \[\frac{\partial c_{\beta_{1}}}{\partial t} =\beta_{2}f_{\beta_{1}}-\beta_{1}c_{\beta_{1}}-c\,,\] \[\frac{\partial f_{\beta_{2}}}{\partial t} =D\Delta f_{\beta_{2}}-\beta_{2}f_{\beta_{2}}+\beta_{1}c_{\beta_{ 2}}-f\,,\] \[\frac{\partial c_{\beta_{2}}}{\partial t} =\beta_{2}f_{\beta_{2}}-\beta_{1}c_{\beta_{2}}+f\,. \tag{23}\] We solve the above sensitivity equations simultaneously with integrating model (1) using the numerical methods outlined in Section 3.3. Then the sensitivity matrix is given by: \[X=\left(\frac{\frac{\partial\mathrm{FRAP}}{\partial t}(\boldsymbol{t}; \boldsymbol{\theta}^{\boldsymbol{0}})}{\|\frac{\partial\mathrm{FRAP}}{ \partial D}(\boldsymbol{t};\boldsymbol{\theta}^{\boldsymbol{0}})\|_{2}}\quad \frac{\frac{\partial\mathrm{FRAP}}{\partial t}(\boldsymbol{t};\boldsymbol{ \theta}^{\boldsymbol{0}})}{\|\frac{\partial\mathrm{FRAP}}{\partial D}( \boldsymbol{t};\boldsymbol{\theta}^{\boldsymbol{0}})\|_{2}}\quad\frac{\frac{ \partial\mathrm{FRAP}}{\partial D}(\boldsymbol{t};\boldsymbol{\theta}^{ \boldsymbol{0}})}{\|\frac{\partial\mathrm{FRAP}}{\partial D}(\boldsymbol{t}; \boldsymbol{\theta}^{\boldsymbol{0}})\|_{2}}\right)\,,\] where \(\boldsymbol{\theta}^{\boldsymbol{0}}\) corresponds to the baseline parameter regimes of interest in Table 2. Here we have normalized each column by the \(L^{2}\) norm of the corresponding sensitivity vector, to account for the different magnitudes of the parameters. The Fisher Information Matrix \(F=X^{T}X\) is then a \(3\times 3\) matrix whose determinant is displayed in the last column of Table 2 for the relevant parameter regimes considered. As expected, the matrix is singular for the effective diffusion parameter regime (Set 4), where we only expect to be able to identify one parameter combination (effective diffusion) in Figure 2: Interpretation of profile likelihoods in terms of structural and practical identifiability [28]. this instance. For the other parameter regimes corresponding to wild-type and mutant protein binding settings (Sets 1-3), the determinant of the matrix is small, however it is difficult to conclude whether all or only subsets of the parameters are locally structurally identifiable given FRAP recovery data. To assess general structural identifiability, we also apply the differential algebra approach recently outlined in [8] for age-dependent PDE models. We focus on the simplification of the reaction-diffusion PDE model (1) to one spatial dimension \(x\). 
Since FRAP recovery data requires averaging out the sum of the protein concentrations in each state over a given spatial domain corresponding to the bleaching region (\(x\in[-l/2,l/2]\)), we start by considering model output: \[z(x,t)=f(x,t)+c(x,t)\,.\] Re-writing system (1) in terms of the total protein concentration \(z(x,t)\) and the concentration of bound protein complexes \(c(x,t)\) yields: \[\frac{\partial z}{\partial t} =Dz_{xx}-Dc_{xx}\,,\] \[\frac{\partial c}{\partial t} =\beta_{2}z-(\beta_{1}+\beta_{2})c\,. \tag{24}\] The goal is to express this system in terms of model output \(z\) and its derivatives. By differentiating the first equation in (24) with respect to time and using substitution to eliminate variable \(c\), we obtain: \[0=-z_{tt}+Dz_{txx}+\beta_{1}Dz_{xx}-(\beta_{1}+\beta_{2})z_{t}\,. \tag{25}\] This input-output equation is written as a polynomial equation in terms of derivatives of \(z\). As in [8], we rank the terms within the polynomial by assuming that derivatives with respect to time are ranked higher than those with respect to space. To ensure a monic polynomial in \(z_{txx}\), we therefore divide equation (25) by \(D\) and obtain the following set of polynomial coefficients: \(\{1,-\frac{1}{D},\beta_{1},-\frac{\beta_{1}+\beta_{2}}{D}\}\). This provides a map from parameter space to the polynomial coefficients, which can be used to determine identifiability information for the model equations [8]. This map is clearly injective and therefore suggests that parameters \(\{D,\beta_{1},\beta_{2}\}\) are structurally identifiable, provided that the time and spatial derivatives of \(z\) in equation (25) are available. In practice, however, the total protein fluorescence concentration through time and space \(z(t,x)\) is often not available from fluorescence recovery experiments or is only accessible as very noisy and diffuse images. In the rare instances when this is available, derivatives of this concentration would need to be numerically approximated, incurring additional errors. Typically, the only available measurement data from FRAP experiments is the spatially-averaged quantity \(y(t)=\int_{-l/2}^{l/2}z(x,t)dx\) (or equation (15) for the 2-dimensional system), which provides substantially less information. For \(y(t)\), Equation (25) becomes: \[0=-y_{tt}+2Dz_{tx}(l/2,t)+2\beta_{1}Dz_{x}(l/2,t)-(\beta_{1}+\beta_{2})y_{t}\,. \tag{26}\] The derivatives of the total concentration of protein at the boundaries of the bleach point are however not available from the data. Therefore, the current framework for using the differential algebraic approach cannot provide insight into structural identifiability of the model parameters in this setting. ### Practical identifiability using Bayesian inference We then investigate the practical identifiability of parameters \(D,\beta_{1},\beta_{2}\) given FRAP data using the MCMC DRAM algorithm described in Section 4.2. In addition, we use this Bayesian inference approach to study the practical identifiability of parameters \(p\) (from equation (14)) and \(C_{0}\) (from equation (13)). We start with initial parameter guesses \(D^{*},\beta_{1}^{*},\beta_{2}^{*}\), \(p^{*}\), \(C_{0}^{*}\) determined as outlined in Section 3.3. Since MCMC DRAM requires its sampling intervals to be bounded [26], we set parameter bounds that are one order of magnitude lower and higher than the initial guesses. 
The exceptions are \(p\), where the maximum bound is \(1\) (since it denotes a fraction), and \(C_{0}\), where the maximum bound is set to \(C_{0}^{*}+1\). We carry out MCMC DRAM on synthetic FRAP data generated using the parameter sets in Table 2 for \(10,000\) sampling iterations. We determine convergence of the resulting Markov Chains using the Geweke diagnostic test [29]. A higher Geweke test score indicates a higher probability of convergence in the corresponding Markov Chain. Table 3 shows that, while the Geweke test suggests strong convergence in the Markov Chains at \(10,000\) iterations for \(D\), \(p\), \(C_{0}\) and moderate convergence for \(\beta_{1}\), there does not appear to be strong evidence of convergence for \(\beta_{2}\), despite the large number of iterations. To determine practical identifiability based on MCMC DRAM, we study the univariate and bivariate marginal parameter distributions estimated by the inference algorithm. Across the Table 2 parameter regimes, we find that some of the MCMC DRAM-estimated marginal distributions for \(D\), \(\beta_{1}\), \(\beta_{2}\), \(p\), and \(C_{0}\) appear broad or multimodal, suggesting a lack of practical identifiability. Figure 3 shows the estimated parameter distributions for Parameter Set 2, where the distribution of rate \(\beta_{2}\) is especially broad. In addition, assessing practical identifiability using this Metropolis-Hastings MCMC algorithm carries a high computational cost. Study [25] also observed this for applications to PDE models of cell scratch assays. We find that the method is even less computationally feasible for the FRAP model, where the concentrations of interest are tracked in two spatial dimensions. ### Practical identifiability using profile likelihood analysis We next compute profile likelihoods for the kinetic parameters of interest (\(D,\beta_{1},\beta_{2}\)) in the FRAP model. By visualizing the residuals from fitting the experimental FRAP data using model (1) as described in Section 3.3, we conclude that the observation noise can be assumed to be normally distributed for the purpose of our application. We therefore choose the fixed standard deviation of the measurement noise in equation (22) as \(\sigma=0.1*\mathrm{mean}(\mathrm{FRAP}_{\mathrm{gen}}(t))\) based on the true synthetically-generated FRAP curve corresponding to each wild-type or mutant parameter regime. The profile likelihood calculation then reduces to carrying out nonlinear least-squares optimization to optimize out the nuisance parameters, which we carry out using the lsqnonlin function in Matlab. For example, recall from Section 4.3 that, when interested in the identifiability of the diffusion coefficient \(D\), we fix values of \(D\) from an appropriate grid. We use a uniform grid for parameter \(D\) on an interval given by \([D^{\star}/10,10D^{\star}]\), where \(D^{\star}\) is the starting parameter guess determined through the initial deterministic procedure outlined in Section 3.3. For each value of \(D\) in this grid, we maximize the profile likelihood (equation (21)), which yields values \(\beta_{1}^{\star}(D)\) and \(\beta_{2}^{\star}(D)\) for the optimized nuisance parameters. The likelihood of each parameter of interest is visualized in Figure 4 for FRAP data generated using Parameter Set 2. While \(D\) and \(\beta_{2}\) appear to be identifiable, \(\beta_{1}\) is practically non-identifiable, even with perfect synthetic FRAP data. 
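A compact sketch of this profiling loop is given below, assuming a generic forward model: for each grid value of the interest parameter, the nuisance parameters are optimized out by nonlinear least squares (the analogue of the lsqnonlin calls described above), with a warm start from the previous grid point. The returned optimized nuisance values are exactly what is needed for the subset profiles discussed below; the helper `assemble` and the commented usage lines are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def profile_likelihood(model_output, data, tgrid, sigma, psi_grid, lam_guess, assemble):
    """Profiled negative log-likelihood over a grid of the interest parameter (eq. (21)).

    For each fixed psi, the nuisance parameters lam are optimized out by
    nonlinear least squares; `assemble(psi, lam)` builds the full parameter
    vector. Returns the profile values and the optimized nuisance parameters."""
    nll, lam_opt = [], []
    lam = np.asarray(lam_guess, dtype=float)
    for psi in psi_grid:
        resid = lambda lam_: (data - model_output(assemble(psi, lam_), tgrid)) / sigma
        sol = least_squares(resid, lam)        # warm-start from the previous grid point
        lam = sol.x
        nll.append(0.5 * np.sum(sol.fun**2))
        lam_opt.append(lam.copy())
    return np.array(nll), np.array(lam_opt)

# Illustrative usage: profile the diffusion coefficient D with (beta1, beta2)
# as nuisance parameters, on a geometric grid around an initial estimate D_star.
# assemble = lambda D, lam: np.array([D, lam[0], lam[1]])
# D_grid = np.geomspace(D_star / 10.0, 10.0 * D_star, 25)
```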
Similar results, where one of the rates \(\beta_{1}\) and \(\beta_{2}\) is practically unidentifiable, are observed in the other parameter regimes. Profile likelihood analysis also provides all the information needed to generate subset profiles, which in this case help visualize the relationship between each rate as an interest parameter and the other rate as the optimized nuisance parameter, following the approach in [22]. Figure 5 shows the inferred linear relationship between the rate parameters. Since we explore the application of the methods to synthetically-generated FRAP recovery curves, the true values of the parameters are indicated using a red circle in Figure 5. We find that the true parameters indeed lie on the curves outlining the relationship between the reaction rates. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Parameters & \(D\) & \(\beta_{1}\) & \(\beta_{2}\) & \(p\) & \(C_{0}\) \\ \hline Set 1 & \(0.878\) & \(0.639\) & \(0.492\) & \(0.838\) & \(0.997\) \\ Set 2 & \(0.996\) & \(0.414\) & \(0.129\) & \(0.741\) & \(0.931\) \\ Set 3 & \(0.975\) & \(0.602\) & \(0.573\) & \(0.961\) & \(0.936\) \\ \hline \hline \end{tabular} \end{table} Table 3: Geweke Diagnostic Scores for parameter convergence using MCMC DRAM carried out for the parameter sets in Table 2. Figure 3: MCMC DRAM-estimated A) univariate and B) bivariate marginal distributions for noiseless FRAP data generated using Parameter Set 2 in Table 2. ## 6 Investigating parameter relationships in FRAP models Since we find that one of the reaction rates in our model is consistently unidentifiable based on the Bayesian inference and profile likelihood methods described above, we further investigate the relationship between the reaction rates \(\beta_{1}\) and \(\beta_{2}\). For a set value of the diffusion coefficient \(D\), we vary rates \(\beta_{1}\) and \(\beta_{2}\) on a grid and compute the least-squares error between the FRAP curve generated using these parameters and the true synthetic data. Figure 6 shows the contour plot of this error between generated and true FRAP recovery data for fixed \(D=1\mu m^{2}/s\) (motivated by Parameter Set 2) and \(\beta_{1}\) and \(\beta_{2}\) chosen from a square grid with \(50\) equally spaced values between \(10^{-4}/s\) and \(2\times 10^{-3}/s\) and between \(10^{-5}/s\) and \(2\times 10^{-4}/s\), respectively. The values of \(\beta_{1}\) and \(\beta_{2}\) that minimize the least-squares error between true synthetic data and the generated FRAP curves lie in the dark blue region of the contour plot in Figure 6. This figure shows that the likelihood along the line highlighted in red is the same, so that we cannot distinguish the points along this curve. This curve also coincides with the relationship between the optimized and interest parameter rates in the profile likelihood analysis in Figure 5. While contour plots as in Figure 6 are different for each diffusion coefficient \(D\), we observe similar behavior for the diffusion coefficients characterizing the other parameter regimes in Table 2. In addition, these contour plots are not computationally expensive to generate, since they only use forward simulations of the PDE model for fixed parameter values (and, in particular, do not involve any optimization). Parameter estimation cannot distinguish between points \((\beta_{1},\beta_{2})\) that lie on the same contour level of the error function (the least-squares error between numerical and true synthetic data). 
We therefore choose a curve \(\Gamma\) in the \((\beta_{1},\beta_{2})\)-plane that intersects each level curve transversely at a unique point (an example of such a curve \(\Gamma\) is shown as the yellow dashed curve in Figure 6) and parametrize \(\Gamma\) using a new variable \(s\). The goal then becomes to identify the value of \(s\) that corresponds to the point on \(\Gamma\) at which the error function restricted to the curve \(\Gamma\) attains its minimum: the contour Figure 4: Profile likelihoods for each interest parameter on the \(x\) axis given noiseless FRAP data synthetically generated using model (1) and Parameter Set 2 in Table 2. Figure 5: Subset profiles for each interest rate parameter on the \(x\) axis and the corresponding optimized nuisance rate parameter on the \(y\) axis given noiseless FRAP data synthetically generated using model (1) and Parameter Set 2 in Table 2. The true reaction rate parameters are indicated with red circles. curve identified by the value of \(s\) then provides a relationship between the rates \(\beta_{1}\) and \(\beta_{2}\). Thus, while this approach is not able to identify the parameters \((\beta_{1},\beta_{2})\) uniquely, it provides an implicit relation that these parameters must obey. We now discuss in more detail how we implement the proposed parameter-estimation algorithm. Since we find that contour plots as in Figure 6 do not change considerably across different diffusion coefficients, we first fix one value for \(D\) and select a grid in the \((\log_{10}\beta_{1},\log_{10}\beta_{2})\)-plane in order to inform our choice of the curve \(\Gamma\). For each point on the grid and the fixed value of \(D\), we generate synthetic FRAP recovery datasets and carry out profile likelihood analysis for the rate parameters \((\beta_{1},\beta_{2})\), which results in the values of the error function for each point on the grid. Using these values, we use linear interpolation to compute the tangent vectors to the contour curves of the error function at the grid points. An example of the resulting vector field is shown in Figure 7 (blue arrows), where we fixed the diffusion coefficient \(D=1\mu m^{2}/s\) and selected \(7\) values equally spaced on a log scale from \(10^{-5}\) to \(10^{-2}\) for the reaction rates \(\beta_{1}\) and \(\beta_{2}\). We can now choose a curve \(\Gamma\) that crosses the contour curves of the error function transversely: we can either choose the transverse curve \(\Gamma\) in explicit analytical form or else again use linear interpolation and a forward Euler scheme applied to the gradients of the vector field to compute such a transverse curve \(\Gamma\) numerically. For this vector field, we can re-parameterize the relationship between \(\beta_{1}\) and \(\beta_{2}\) using: \[\log_{10}\beta_{1} =s+\sqrt{s^{2}+1}-5 \tag{27}\] \[\log_{10}\beta_{2} =-s+\sqrt{s^{2}+1}-5\,,\] which yields the yellow curve in Figure 7. The assumption we make is that the chosen curve \(\Gamma\) intersects each level curve transversely in a unique point for all values of \(D\). Figure 7 also shows that the vector field generated with \(D=0.8\mu m^{2}/s\) (green dashed arrows) is very similar to the one generated with \(D=1\mu m^{2}/s\) (blue arrows), and that the curve \(\Gamma\) chosen above (yellow) is still appropriate for this different diffusion coefficient. 
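The re-parameterization (27) is a one-line computation; the sketch below maps a value of \(s\) to the corresponding rate pair on \(\Gamma\), and the commented lines indicate how the map could be composed with a forward FRAP simulator so that the profile likelihood machinery is applied directly to the pair \((D,s)\). The composition shown is illustrative and assumes a `frap_curve` function as in the earlier sketch.

```python
import numpy as np

def gamma_curve(s):
    """Transverse curve Gamma from equation (27): s -> (beta1, beta2)."""
    log10_b1 = s + np.sqrt(s**2 + 1.0) - 5.0
    log10_b2 = -s + np.sqrt(s**2 + 1.0) - 5.0
    return 10.0**log10_b1, 10.0**log10_b2

# Illustrative composition with a forward FRAP simulator, so that profile
# likelihoods can be computed directly for the identifiable pair (D, s):
# model_Ds = lambda th, t: frap_curve((th[0], *gamma_curve(th[1])), t, f0, c0, x)
```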
To demonstrate this proposed framework, we consider a FRAP dataset generated using \(D^{0}=0.8\mu m^{2}/s\) and the reaction rates in Parameter Set 2: \(\beta_{1}^{0}=10^{-3}\)/s, \(\beta_{2}^{0}=10^{-4}\)/s, yielding the ground truth point \(P\) in Figure 8B. We also assume we have the parameterization of the curve \(\Gamma\) using the variable \(s\) as described in equation (27). We carry Figure 6: Contour plots of the least-squares error between FRAP data generated using \(D=1\mu m^{2}/s\) and rates \(\beta_{1}\) and \(\beta_{2}\) from the grid shown and true synthetic data generated using Parameter Set 2 in Table 2. Figure 7: Grid of inferred slopes based on profile likelihood analysis for the relationship between parameters \(\beta_{1}\) and \(\beta_{2}\) for FRAP datasets generated using \(D=1\mu m^{2}/s\) (blue) or \(D=0.8\mu m^{2}/s\) (green dashed) and the indicated reaction rates. The curve \(\Gamma\) that crosses the contour curves of the error function for \(D=1\mu m^{2}/s\) transversely is shown in yellow. out profile likelihood analysis for parameters \(D\) and \(s\) for this dataset as described in Section 5.3. Figure 8A shows clear peaks in the profiles for these parameters, demonstrating that the diffusion coefficient and the r-parameterized \(s\) parameter are practically identifiable. The peak in the diffusion coefficient profile is achieved at \(D^{*}=0.785\mu m^{2}/s\), close to its true value of \(D=0.8\mu m^{2}/s\). The peak in the \(s\) profile is achieved at value \(s^{*}=0.704\). This value of \(s^{*}\) identifies the intersection point \(Q^{*}=(\log_{10}\beta_{1},\log_{10}\beta_{2})\) of the contour curve (green star in Figure 8B) along which the error function is minimized with the transverse curve \(\Gamma\). Focusing on the identified value of the diffusion coefficient \(D^{*}\) (roughly \(0.8\mu m^{2}/s\)), we can then generate contour plots as in Figure 6 and slope grids as in Figure 7. Using linear interpolation and a forward Euler scheme for the tangent vector, we numerically compute the contour curve of the error function that passes through \(Q^{*}\), which then provides the curve on which the ground-truth parameters \((\log_{10}\beta_{1},\log_{10}\beta_{2})\) must lie (green curve in Figure 8B). Notably, this curve is very close to the ground truth point \(P\) (red star in Figure 8B). ## 7 Application to an experimental FRAP dataset We also illustrate the application of the framework proposed in Section 6 to a FRAP experimental dataset corresponding to the dynamics of PTBP3 protein with a single RRM mutant (mut3 in [13]). This mutant has only one RNA-binding domain that can bind to the non-dynamic L-body RNA. We first carry out deterministic parameter estimation (as described in Section 3.3) for this fluorescence recovery dataset and obtain an estimate of the value of the diffusion coefficient \(D_{0}\approx 0.535\mu m^{2}/s\). We then fix this value for \(D\) and vary the rate parameters on a grid in the \((\log_{10}\beta_{1},\log_{10}\beta_{2})\)-plane to generate synthetic datasets and inform the choice of the transverse curve \(\Gamma\). Figure 9B shows that the same choice of curve \(\Gamma\) from equations (27) is appropriate here as well. As in the synthetic data setting investigated in Section 6 and Figure 8, we carry out profile likelihood analysis for parameters \(D\) and \(s\) for this experimental dataset. Figure 9A shows that the profiles for these parameters have clear peaks, indicating that they are practically identifiable. 
The peak in the diffusion coefficient profile is achieved at \(D^{*}=0.545\mu m^{2}/s\), close to the value we originally estimated. The peak in the \(s\) profile is achieved at value \(s^{*}=-0.031\). This value of \(s^{*}\) identifies the intersection point \(Q^{*}=(\log_{10}\beta_{1},\log_{10}\beta_{2})\) of the contour curve (green star in Figure 9B) along which the error function is minimized with the transverse curve \(\Gamma\). We then use linear interpolation to numerically compute the curve on which we predict that the true parameters \((\log_{10}\beta_{1},\log_{10}\beta_{2})\) must lie (green curve in Figure 9B). Panel C of Figure 9 shows the original FRAP fluorescence recovery data (in blue) as well as the fit using two parameter sets chosen along the green curve in Figure 9B: \(Q^{*}\) yields the green solid line curve fit in Figure 9C and \(\hat{Q}\) yields the black dashed line curve fit in Figure 9C. As expected, both parameter sets chosen along the curve that outlines the predicted relationship between \(\beta_{1}\) and \(\beta_{2}\) yield very close fits to the data.

Figure 8: (A) Profile likelihoods for each interest parameter on the \(x\) axis (diffusion coefficient \(D\) and parameter \(s\) on curve \(\Gamma\)) given noiseless FRAP data generated using \(D=0.8\mu m^{2}/s\) and rates as given in Parameter Set 2 in Table 2. The red star in the left panel corresponds to the ground truth value \(D_{0}=0.8\mu m^{2}/s\), while the maximum of the profile likelihood is achieved at \(D^{*}=0.785\mu m^{2}/s\). (B) Grid of inferred slopes as in Figure 7 (blue), overlaid with transverse curve \(\Gamma\) (yellow), original ground truth parameter set \(P\) (red), and trace of error-minimizing contour curve as well as its intersection point \(Q^{*}\) with the transverse curve (green).

## 8 Discussion

In the present work, we propose methods for assessing parameter identifiability and for learning identifiable parameter combinations based on a partial differential equations model of a biological system. Here, we are specifically motivated by the recent discovery that RNA localizes together with RNA binding proteins in L-body RNP granules during the development of frog oocytes [12]. PTBP3 is a specific multivalent RNA binding protein, for which protein dynamics are regulated by RNA-binding in L-bodies [13]. Experimental measurements of PTBP3 dynamics are quantified using FRAP. We model the recovery of protein fluorescence in these experiments using reaction-diffusion partial differential equations, characterized by the diffusion coefficient and the binding and unbinding rate parameters. The FRAP model we investigate here is a linear two-state PDE system, with a postbleach initial condition that we derive based on the square bleach spot used in the experiments in [13]. We first sought out insights from application of established methods of parameter identifiability to our PDE model of protein dynamics during FRAP. In particular, we evaluated structural parameter identifiability, which is based on model structure alone, using the Fisher Information Matrix [9, 20] and differential algebra approaches [8].

Figure 9: (A) Profile likelihoods for each interest parameter on the \(x\) axis (diffusion coefficient \(D\) and parameter \(s\)) for a single RRM mutant experimental FRAP dataset. The red star in the left panel corresponds to the estimated value \(D_{0}=0.535\mu m^{2}/s\), while the maximum of the profile likelihood is achieved at \(D^{*}=0.545\mu m^{2}/s\).
(B) Grid of inferred slopes for fixed diffusion coefficient \(D_{0}=0.535\mu m^{2}/s\), overlaid with transverse curve \(\Gamma\) (yellow), and trace of error-minimizing contour curve. The intersection point \(Q^{*}\) of the error-minimizing curve with \(\Gamma\) is denoted by a green star, while another point \(\hat{Q}\) on the error-minimizing curve is shown as a black circle. (C) Fit of the experimental FRAP curve (blue) with simulated FRAP data generated using rate parameter sets given by \(Q^{*}\) (green solid line) and by \(\hat{Q}\) (black dashed line) indicated in panel (B).

Despite the simple linear reaction-diffusion structure of the model, we find that structural identifiability is either difficult or impossible to establish for the PDE model using these methods. Practical parameter identifiability considers issues in parameter inference due to the noisy features of real data. We therefore use experimental datasets for wild-type and mutant PTBP3 protein dynamics from [13] and our previously-developed deterministic parameter estimation pipeline in [11] to roughly inform parameter regimes of interest. Using synthetic FRAP data generated using these parameter regimes, we investigate methods of practical identifiability based on Bayesian inference and profile likelihoods for the FRAP model. We find that practical identifiability using Bayesian inference has a high computational cost, due to the MCMC sampling of the parameter space that is required. The results based on these methods suggest that certain parameters are practically unidentifiable, but it remains challenging to determine the parameter relationships that could be inferred based on the available FRAP data. Recent work on subdiffusive protein motion in FRAP has also shown that only some of the model parameters could be identified from FRAP data in certain regimes studied [30]. Since the existing methods point to identifiability issues for the reaction rates in the FRAP PDE model, we further investigate the relationships between the rate parameters using synthetically-generated FRAP datasets and contour curves of the error function between data and simulated recovery curves for a range of binding and unbinding rate parameter choices. The framework we propose for identifying parameter combinations involves constructing a transverse curve to the contour curves of the error function. We thus re-parameterize the PDE model of FRAP using the protein diffusion coefficient and a parameter that characterizes this transverse curve. Carrying out profile likelihoods for these parameters identifies the level curve on which the true parameters must lie. We demonstrate that this approach recovers the original parameter values for synthetic datasets and predicts the relationship between reaction rates for experimental FRAP data. The pipeline we propose has the potential to extend to identifying parameter relationships in other PDE models of biological systems. However, the approach becomes more challenging for larger numbers of parameters that need to be identified. For the application motivating this work, we have used the simplifying assumption that the PTBP3 reaction uses a single binding site; this is appropriate for the mutant studied in Figure 9, which has a single RNA binding domain capable of binding to L-body RNA [13]. For systems where multiple independent binding sites are appropriate, parameter identifiability and inference are likely more difficult to investigate due to the increased dimension of the parameter space.
More generally, the specific insights we provide on parameter combinations that are identifiable in FRAP are dependent on the assumption that the reaction-diffusion model we use is appropriate. We have previously studied settings where active transport of proteins needs to be included and impacts parameter estimation [11]. Recent work has also shown that experimental FRAP data cannot distinguish between normal diffusive and subdiffusive motion in large regions of parameter space [30]. Future work could aim to develop broadly-applicable methods of structural and practical parameter identifiability for PDE models of fluorescence microscopy data. ## Acknowledgments Ding, Mastromatteo, and Reichheld were partially supported by the NSF under grant DMS-174429. Sandstede was partially supported by the NSF under grants DMS-2038039 and DMS-2106566. The experimental work was funded by R01GM071049 from the NIH to Mowry. ## Data Availability The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
2302.01024
SSO-Monitor: Fully-Automatic Large-Scale Landscape, Security, and Privacy Analyses of Single Sign-On in the Wild
Single Sign-On (SSO) shifts the crucial authentication process on a website to the underlying SSO protocols and their correct implementation. To strengthen SSO security, organizations, such as IETF and W3C, maintain advisories to address known threats. One could assume that these security best practices are widely deployed on websites. We show that this assumption is a fallacy. We present SSO-Monitor, an open-source fully-automatic large-scale SSO landscape, security, and privacy analysis tool. In contrast to all previous work, SSO-Monitor uses a highly extensible, fully automated workflow with novel visual-based SSO detection techniques, enhanced security and privacy analyses, and continuously updated monitoring results. It receives a list of domains as input to discover the login pages, recognize the supported Identity Providers (IdPs), and execute the SSO. It further reveals the current security level of SSO in the wild compared to the security best practices on paper. With SSO-Monitor, we automatically identified 1,632 websites with 3,020 Apple, Facebook, or Google logins within the Tranco 10k. Our continuous monitoring also revealed how quickly these numbers change over time. SSO-Monitor can automatically log in to each SSO website. It records the logins by tracing HTTP and in-browser communication to detect widespread security and privacy issues automatically. We introduce a novel deep-level inspection of HTTP parameters that we call SmartParams. Using SmartParams for security analyses, we uncovered 5 Client Application (Client) secret leakages in URL parameters and 337 cases with weak CSRF protection. We additionally identified 447 cases with no CSRF protection, 342 insecure SSO flows and 9 cases with nested URL parameters, leading to an open redirect in one case. SSO-Monitor reveals privacy leakages that deanonymize users in 200 cases.
Maximilian Westers, Tobias Wich, Louis Jannett, Vladislav Mladenov, Christian Mainka, Andreas Mayer
2023-02-02T11:28:59Z
http://arxiv.org/abs/2302.01024v1
SSO-Monitor: Fully-Automatic Large-Scale Landscape, Security, and Privacy Analyses of Single Sign-On in the Wild ###### Abstract Single Sign-On (SSO) shifts the crucial authentication process on a website to to the underlying SSO protocols and their correct implementation. To strengthen SSO security, organizations, such as IETF and W3C, maintain advisories to address known threats. One could assume that these security best practices are widely deployed on websites. We show that this assumption is a fallacy. We present SSO-Monitor, an open-source fully-automatic large-scale SSO landscape, security, and privacy analysis tool. In contrast to all previous work, SSO-Monitor uses a highly extensible, fully automated workflow with novel visual-based SSO detection techniques, enhanced security and privacy analyses, and continuously updated monitoring results. It receives a list of domains as input to discover the login pages, recognize the supported Identity Providers (IdPs), and execute the SSO. It further reveals the current security level of SSO in the wild compared to the security best practices on paper. With SSO-Monitor, we automatically identified 1,632 websites with 3,020 Apple, Facebook, or Google logins within the Tranco 10k. Our continuous monitoring also revealed how quickly these numbers change over time. SSO-Monitor can automatically login to each SSO website. It records the logins by tracing HTTP and in-browser communication to detect widespread security and privacy issues automatically. We introduce a novel deep-level inspection of HTTP parameters that we call SmartParams. Using SmartParams for security analyses, we uncovered URL parameters in 5 Client Application (Client) secret leakages and 337 cases with weak CSRF protection. We additionally identified 447 cases with no CSRF protection, 342 insecure SSO flows and 9 cases with nested URL parameters, leading to an open redirect in one case. On top, SSO-Monitor reveals privacy leakages that deanonymize users and allow user tracking without user awareness in 200 cases. ## 1 Introduction Single Sign-On (SSO) allows websites to quickly register and login users to their accounts by using popular Identity Providers (IdPs) like Apple, Facebook, and Google. The user authentication is provided by implementing two de-facto standards for SSO: OAuth Authorization Framework 2.0 (OAuth) and OpenID Connect 1.0 (OIDC). Both protocols provide a flexible and user-transparent way to share resources, such as profile information, between the website, which acts as a Client Application (Client), and an IdP. For users, this offers the opportunity to handle only one central account at the IdP, but still be able to use multiple Clients. Authentication flaws are among the OWASP Top Ten [34] vulnerabilities and, as such, of prime importance. Previously, passwords played a major role in authentication, but since they are known to be problematic [14, 16, 48], SSO seems to be the promising solution. However, SSO is evolving quickly. For example, among the four proposed OAuth protocol variants (grant types) from 2012 [13], only one (the code grant) is still considered secure if combined with various extensions [28]. Developers can hardly follow and implement the recommendations. To address these issues, researchers investigated SSO repetitively. Prior Work The majority of related work [50, 3, 45, 53, 30, 54, 35, 25, 51, 36, 17] implemented SSO security tools that require manual detection and execution of SSO logins. 
Due to the lack of a fully automated evaluation, researchers often limit the evaluation to a small subset of the most frequently used websites, such as the Alexa or Tranco 1k, and to one IdP. This subset makes it hard to estimate how frequently common vulnerabilities appear and how widely security best practices are adopted on the Internet.

Figure 1: SSO-Monitor's Fully-Automatic Workflow. It starts with the input of a Tranco list and employs search engines to discover a website's login page. Then, it uses both a visual-based and pattern-based approach to detect the SSO login buttons. It executes SSO and conducts security and privacy analyses on the login traces. SSO-Monitor outputs landscape, security, and privacy results.

Other studies automated the SSO detection and execution but did not investigate SSO security and privacy [57, 18, 5] or only parts of the login flow [32]. Still, some researchers [56, 41, 11, 10] performed large-scale SSO security and privacy analyses. However, many results are not reproducible since the used tools are not publicly available. Challenges in Monitoring SSO. Evaluating the security of large-scale SSO deployments automatically is a challenging task. First, it is hard to quickly and reliably determine which websites support SSO when only their domain is known. In contrast to previous work, we implemented a novel visual-based SSO detection for increased accuracy. Second, large-scale security and privacy evaluations are restricted to passive traffic analyses due to ethical considerations. To gain more information and thus conduct more comprehensive evaluations, we are the first to introduce a new deep parameter inspection technique (SmartParams). This approach allows for uncovering low-entropy security parameters (CSRF protection), nested parameters inside HTTP traffic (secret leakages), and URL parameters (open redirect), which prior work missed. Our work answers the following three research questions. RQ1: How can we continuously monitor the SSO landscape at scale? We noticed that the SSO support on websites varies over time, so that prior surveys cannot represent the current SSO landscape. Websites are frequently adding new IdPs and removing others. The wide deployment of Apple's IdP proves that introducing new IdPs can entirely revamp the SSO landscape in only three years. Therefore, we see the demand for an automated approach as mandatory before any empirical evaluations of real-world SSO implementations can be conducted. Although automatically monitoring the SSO landscape might seem to be an engineering problem, numerous novel challenges must be solved and in-depth research must be conducted. We designed a modular architecture to solve these challenges: (1) We establish a methodology to automatically find the login page for a given domain with search engines. (2) We detect the IdPs that a website supports by recognizing their logos and searching for patterns and keywords. (3) We automatically execute SSO logins, including interactions with IdPs such as consenting. To prove the correctness of our approach, we conducted a _manual_ ground truth analysis on the Tranco 1k and compared the results with our automated approach. We then extended the automated analyses to the Tranco 10k and provide novel insights on the SSO landscape. We identified 3,020 SSO logins in the Tranco 10k and provide details on their supported IdPs and protocol details (see SS6). RQ2: How secure are current SSO logins?
As a result of several discovered vulnerabilities in the recent years, IETF created a constantly updated draft addressing all known security issues [28, 27]. The IETF also created multiple additional documents, such as JWT best practices [40], PKCE [39], and mTLS [4], to strengthen SSO. The question arises if all these security considerations and improvements are implemented to protect the users relying on SSO. We systematize the current state of the applied security mechanisms on the Internet. In 342 cases Clients transfer sensitive data via the user's browser, which is deprecated and considered dangerous in SSO. With respect to CSRF, we identified 447 logins with an entirely missing protection. With SmartParams, we extended the list of vulnerabilities with 337 additional logins due to a recognized weak CSRF protection, for instance, less then 20 bytes entropy. Furthermore, we identified nested URL parameters in 9 cases that could lead to open redirects and 5 Client secret leakages with SmartParams's deep inspection. RQ3: How private are current SSO logins? To authenticate users, Clients and IdPs exchange sensitive user-related information. This exchange must only happen transparently and after the user's consent. We are the first to identify that this is not always the case. We estimated 200 cases in which the Clients and IdPs exchange private user information secretly without user awareness. SSO-Monitor SSO-Monitor is our answer to RQ1, RQ2, and RQ3. It can conduct an automated evaluation of large-scale SSO deployments using the Tranco 10k. We depict its basic idea in Figure 1. Once it detects SSO support on a website, SSO-Monitor sequentially signs in using Apple, Facebook, and Google. Next, SSO-Monitor repeats the automated login a second time so that it can distinguish random from static parameters. SSO-Monitor then automatically identifies security and privacy issues. SSO-Monitor combines novel insights on how SSO schemes are implemented with state-of-the-art engineering techniques. Contributions We make the following key contributions: * We systematize known SSO analysis techniques (SS3). We compare prior tools in Table 1, and we show how SSO-Monitor differs to them with novel SSO detection and analysis techniques. * We present SSO-Monitor, our systematic and modular approach for large-scale SSO analyses (SS4). SSO-Monitor is open-source1 and only requires a list of domains as input. It identifies the login page for each domain, detects which IdPs are supported, and starts the authentication process. SSO-Monitor records the traffic that it later analyzes on security and privacy issues. Footnote 1: For the submission, we provide the source code and screenshots of SSO-Monitor on [https://tinyurl.com/sso-monitor](https://tinyurl.com/sso-monitor). For anonymity reasons, we will publish the artifacts after the review phase. * We publish an in-depth overview of the usage of SSO in the Tranco 1k (manual ground truth analysis \(\rightarrow\) SS5) and Tranco 10k (automated analysis with SSO-Monitor \(\rightarrow\) SS6) to answer RQ1. * We use SSO-Monitor to analyze the security of 3,020 SSO logins across 1,632 websites to answer RQ2 (SS7). Besides 337 weak and 447 missing CSRF protections, it uncovers 342 obsolete protocol flows, and 215 protocol mix-ups. Since SSO-Monitor inspected nested and encoded data structures, it identified 5 Client secret leakages, and an open redirect attack. * We reveal 200 privacy breaches among 1,632 websites that support SSO to answer RQ3 (SS8). 
We are the first to identify that websites secretly sign in their users using SSO without their knowledge and awareness. Responsible Disclosure We notified the affected sites as part of our ongoing responsible disclosure process to achieve a more secure and private SSO landscape. We relied on well-established security reporting mechanisms from prior work [5, 43, 20, 44] to collect the contact emails: (1) the security.txt file (2) the WHOIS record (3) off-the-shelf search engine [31] and website [2] email crawlers, and (4) the standard aliases security@, abuse@, webmaster@, and info@. We sent the email from our institutional email address to verify our identity and maximize credibility. While we participate in the active discussions with the vendors, some of them have resolved the issues. We appreciate for being acknowledged and rewarded with bug bounties. ## 2 Background: Single Sign-On Schemes Figure 2 depicts a basic SSO scheme. It consists of an user who wants to log in on the Client's website using an IdP. SSO protocols can be divided into front-channel communication, which is exposed to the user's browser (steps 1-4 and 7), and back-channel communication (steps 5-6), which is invisible to the user. SSO messages in the front-channel can be sent via HTTP or In-Browser Communication (InBC) techniques, such as the _postMessage_ API [52, SS9.3] and _Channel Messaging_[52, SS9.4] API. HTTP communication is standardized in SSO, but recent research [17] showed a strong shift towards the use of InBC techniques in SSO, which rely on JavaScript. Login Request and Response The SSO login flow starts with the user requesting access to a restricted resource, for example, to _profile.html_. To authenticate the user, the Client sends the login request to the IdP via the user's browser. This message contains parameters specific to the particular SSO protocol in use. In OIDC, the login request contains the identity of the Client (client_id), the target to which the IdP must send the login response (redirect_uri), and optional security parameters (i.e., state, nonce, code_challenge,...). The login response contains the tokens (code,access_token,id_token) that the Client uses to authenticate the user. User Authentication & Consent on the IdP Before the login response can be sent back to the Client, the user must authenticate to the IdP. Protocols based on OAuth, such as OIDC or Facebook Connect, also ask the user to provide consent on resources to be accessed by the Client. Token Request and Response SSO protocols can authenticate the user either by using only the information in the login response, or by using the back-channel. OIDC offers both variants, which can be configured in the login request using dedicated parameters (response_type). If the back-channel authentication is used, the Client sends a token request to the IdP. This HTTP message contains authentication information of the Client (e.g., client_id and client_secret) as well as information from the login response. In OIDC, the login response contains a code, a one-time-use token that is bound to the Client. Once the Client redeems the code in the token request on the IdP, it retrieves the token response that holds a JSON Web Token (JWT) with the user's identity. ## 3 Systematization of Known SSO Tools We investigated related work on tools performing SSO security and privacy analyses or using SSO for automated account sign-ins and registrations. 
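As a concrete illustration of the login and token requests described in Section 2, the sketch below assembles an OIDC code-flow login request and the corresponding back-channel token request. All endpoints, identifiers, and secrets are placeholders, not values taken from the paper.

```python
# Schematic construction of an OIDC code-flow login request (front-channel) and
# token request (back-channel). Endpoint URLs and credentials are placeholders.
import secrets
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://idp.example.com/authorize"    # IdP authorization endpoint (assumed)
TOKEN_ENDPOINT = "https://idp.example.com/token"       # IdP token endpoint (assumed)

state = secrets.token_urlsafe(32)    # CSRF protection, bound to the user session
nonce = secrets.token_urlsafe(32)    # binds the id_token to this login attempt

login_request = AUTH_ENDPOINT + "?" + urlencode({
    "response_type": "code",                           # code flow (recommended)
    "client_id": "example-client-id",
    "redirect_uri": "https://client.example.com/callback",
    "scope": "openid email profile",
    "state": state,
    "nonce": nonce,
})
print("Front-channel login request:", login_request)

# Back-channel token request the Client would send after receiving ?code=...&state=...
token_request_body = {
    "grant_type": "authorization_code",
    "code": "<code from the login response>",
    "redirect_uri": "https://client.example.com/callback",
    "client_id": "example-client-id",
    "client_secret": "<client secret, back-channel only>",
}
print("Back-channel token request body:", token_request_body)
```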
Table 1 summarizes our comparison grouped into categories that we found useful for answering RQ1-3. Availability The minority of SSO analysis tools (6/19) are publicly available. We strongly believe that this attitude prevents the community from proceeding with research and yields the reimplementation of already solved tasks. For instance, all related works implement individual SSO automation pipelines instead of reusing existing ones. Thereby, we provide SSO-Monitor as an open-source foundation for future large-scale analyses of the Internet's SSO ecosystem. SSO Scope To execute SSO, the authentication and consent steps must be automated for _each_ IdP. For this reason, prior tools often support a _single_ IdP (5/19). With SSO-Monitor, we support the three most popular IdPs [32]: Apple, Facebook, and Google. Interestingly, prior tools work best for English websites due to their pattern-based SSO detection. This is mirrored in the selection of top sites lists, for example, Zhou and Evans [56] use the region-specific Alexa list. SSO-Monitor's visual-based SSO detection works for websites of all languages and regions. We also consider InBCs on a large scale, which recently got attention in SSO [17]. Figure 2: SSO Scheme. The Client website delegates the user’s authentication to the IdP. The user’s browser transfers messages in the front-channel using HTTP or InBCs with JavaScript (1-4, 7). Back-channel messages (5-6) are based on HTTP and invisible in the user’s browser. Login Page Detection Prior work used different approaches to find the login page For example, they tested for common paths (i.e., /login) [18, 10], only crawling links including login-related keywords (\(\rightarrow\) selective), or running a Depth-First Search (DFS) crawl [41, 57, 11, 18, 5, 32]. Others [56] assumed the homepage as login page [56, 11, 18, 5]. In practice, websites can include the SSO button on a deeply nested login page. Thus, starting with the input of a domain, tools first have to aggregate a candidate pool of login pages. Therefore, we assess two techniques: (1) Breath-First Search (BFS) crawling with a depth of 2 visiting links including login-related keywords first (\(\rightarrow\) prioritized), and (2) querying search engines. SSO Detection The login page candidates are scanned for SSO buttons. All prior work followed a programmatic, _pattern-based_ detection approach, which uses string-matching algorithms to identify keywords. For instance, if a <button> contains the keyword _login_, it is tested for SSO. This approach suffers from False Negatives (FNs), as SSO buttons can take any shape and include arbitrary keywords in any language. For example, they can be <button> tags with onclick listeners or nested like <a><img />/</a>. SSO-Monitor introduces a novel, _visual-based_ detection approach, which identifies the IdPs' logos contained in SSO buttons. We randomly sampled a subset of 50 websites with SSO and found that 49 of them include logos in all of their SSO buttons. SSO-Monitor combines the keyword-based approach with the novel visual-based approach to maximize the detection rate on any website. SSO Execution We found that 5 of 19 tools could execute SSO to login on the Client. However, SSOScan [56] was last updated in 2015 and not adapted to today's SSO logins. Authorscope [57] is for mobile apps. Shepherd [18] and Cookie Hunter [5] both use SSO logins as fallback for generic post-authentication studies. SSO-Monitor is the first to automate the Apple login, including its 2FA. 
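To make the later detection-verification step concrete (SS6.6): once a candidate button position is known, SSO support can be confirmed by clicking that position and checking whether the browser navigates to a known IdP. The sketch below assumes an already-initialized Selenium `driver` with the login page loaded; the host list contains typical authorization hosts and is not exhaustive.

```python
# Sketch of verifying a detected SSO-button candidate: click its stored viewport
# coordinates and check whether the browser ends up at a known IdP host.
# (A full implementation would reload the login page between attempts.)
import time
from urllib.parse import urlparse

IDP_HOSTS = {
    "accounts.google.com": "Google",
    "www.facebook.com": "Facebook",
    "appleid.apple.com": "Apple",
}

def verify_candidate(driver, x, y, wait_seconds=5.0):
    # Click whatever element sits at the stored coordinates.
    driver.execute_script(
        "const el = document.elementFromPoint(arguments[0], arguments[1]);"
        "if (el) { el.click(); }", x, y)
    time.sleep(wait_seconds)                     # allow a redirect to the IdP
    host = urlparse(driver.current_url).netloc
    return IDP_HOSTS.get(host)                   # IdP name, or None for a false positive

def confirm_sso(driver, candidates):
    """Check candidate coordinates in order and return the first confirmed IdP."""
    for x, y in candidates:
        idp = verify_candidate(driver, x, y)
        if idp:
            return idp, (x, y)
    return None
```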
Account Registration and Verification Studies focused on post-authentication mechanisms also require post-SSO account registration and verification. For instance, the user is asked to submit additional data that is not provided by the IdP and to confirm the email address. Therefore, email verification [5, 10], SMS verification [10], and CAPTCHAs [10] have been automated. For SSO-Monitor, we consider them as out of scope as we examine the protocol messages that are _always_ exchanged during _any_ login. Also, SSO-Monitor is open source, and releasing automated account registration tools to the public may raise ethical concerns. Continuous Monitoring To the best of our knowledge, SSO-Monitor is the first to provide a constantly updated top sites list of websites with SSO, similar to Tranco [19]. Only SAAT [10] recently compared the SSO landscape over a period of 50 days. They noticed a dynamic landscape, which SSO-Monitor is the first to monitor continuously. Ground Truth Estimation We compare the automated SSO detection engine of SSO-Monitor against the Tranco 1k to estimate its accuracy. Prior work [56, 18, 5] randomly sampled only small subsets of websites for such estimations (i.e., 20 [5], 50 [18], and 169 [56]). In sum, SSO-Monitor detects 97% of all SSO login buttons, and it executes a total of 2,811 SSO logins. SSO Analyses SSO-Monitor runs a comprehensive and systematic study on the real-world adoption of the OAuth security best practices [29]. Prior studies [56, 41, 53, 54, 25, 51] already investigated selected parameters (i.e., state) but did not conduct in-depth parameter inspections. With SmartParams, we fill this gap. Regarding privacy, SSO-Monitor is the first to reveal that websites secretly log in their users without their awareness.

Table 1: Comparison of prior SSO analysis tools with SSO-Monitor along the categories discussed in this section (availability, SSO scope, login page detection, SSO detection and execution, continuous monitoring, ground truth estimation, and SSO analyses).
Figure 3 depicts its general idea that is split into four modules. The _ground truth_ is our initial manual investigation of the SSO landscape, see SS5. SSO-Monitor guides the analyst via an interactive interface to configure the SSO support for a specific website. By contrast, the _landscape detection_ works fully automatically, see SS6. To estimate the automatic detection's success rate, we compare our results with the ground truth. The remaining two parts, _SSO security_ (SS7) and _SSO privacy_ (SS8) conduct multiple automatic sign-ins based on the automatic landscape detection. Both record the HTTP traffic and all InBCs during these sign-ins for their actual analysis. Architecture The application architecture of SSO-Monitor consists of a master node and a variable number of worker nodes. The master distributes all analysis tasks to the workers on a per domain basis. Therefore, SSO-Monitor scales efficiently by adding new workers. All artifacts and reports are centrally collected and stored on the master node. Additionally, the master provides a web-based management interface. Hence, all administrative tasks and analysis reports can easily be executed and viewed. ## 5 Ground Truth: Tranco 1k SSO Landscape We need a ground truth to implement and evaluate our automated discovery and analysis of SSO. We manually analyzed the websites out of the ground truth, and we use our analysis results as reference for SSO-Monitor's automatic evaluations. In our case, the ground truth allows us to choose proper strategies for the automation of SSO. This process includes choosing appropriate parameters and fine-tuning our automated SSO detection engine for high-level accuracy. Methodology We use the Tranco list2[19] generated on 15 November 2021 as the foundation for our manual investigations. For each domain in the list, we manually visited the appropriate website. We looked for login possibilities on the site, and if available, documented the _supported SSO providers_ that can be used to sign in. We further noted the _domain_, _login page_, _timestamp_, and _automation hurdles_. Footnote 2: Available at [https://transo-list.eu/list/6Z2X](https://transo-list.eu/list/6Z2X). Results In Table 2, we depict the results of our ground truth analysis. Out of the Tranco 1k, 760 websites (76%) implement a login page and thus support user authentication. The remaining sites do not implement user authentication (106, 11%) or are not reachable (134, 13%). We further found that 278 sites (28%) support SSO with at least one IdP. The most used IdP is Google on 244 websites (24%), strictly followed by Facebook on 214 sites (21%). Apple started its SSO support back in 2019. Since then, its importance has significantly grown, as it is already supported on 122 websites (12%). We determined that the automated detection and execution of SSO is not possible on 55 sites due to technical constraints, such as required user interaction. Thus, we evaluate the success rate of our automated approach against 223 websites and 463 SSO logins, respectively. ## 6 Automatic Evaluation: SSO Landscape In this section, we present our SSO landscape evaluation, which contains the methodology (SS6.1), login page discovery (SS6.2, SS6.3), SSO detection (SS6.4, SS6.5), detection rate and performance (SS6.6), continuous monitoring (SS6.7), and contemporary Tranco 10k SSO landscape (SS6.8). ### Methodology In this paper, we concentrate on the continuous monitoring of SSO in the wild. 
The monitoring requires computational resources over a long period. Thus, we define the following requirements for the detection: (1) as few resources as possible, as many as needed, (2) a high detection rate, and (3) robustness.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline & **Websites** & **Google** & **Facebook** & **Apple** & \(\sum\)**Logins** \\ \hline Authentication supported & 760 (76\%) & – & – & – & – \\ SSO Login supported & 278 (28\%) & 244 & 214 & 122 & 580 \\ Automated SSO possible & 223 (22\%) & 197 & 167 & 99 & 463 \\ \hline \hline \end{tabular} \end{table} Table 2: Ground Truth. SSO is supported on 28% of the Tranco 1k. On 22%, we can execute SSO fully automated.

Figure 3: Design and Architecture of SSO-Monitor. It fully automates the SSO analysis in four steps: (1) ground truth estimation, (2) landscape evaluation, (3) security analysis, and (4) privacy analysis. Only the first step requires user interaction and is executed _once_ to estimate its detection accuracy.

Available Resources Our goal is to continuously monitor a large number of websites once a year. Thus, the required resources to discover the login page and analyze the SSO support should not exceed 52 minutes per website in the worst case: \[max_{time}=\frac{12\,(\text{months})\cdot 30\,(\text{d})\cdot 24\,(\text{h})\cdot 60\,(\text{min})}{10{,}000\,(\text{websites})\cdot 1\,(\text{worker})}\approx 52\ \text{min/site} \tag{1}\] We also require high detection rates with low False Positives (FPs) and FNs. Our goal is to achieve an at least 90% correct detection of SSO. Many of the websites, however, are not available in English. We should be able to analyze and discover SSO on such websites as well. Preparation Steps (1) We use Selenium to automate the navigation on websites. However, many websites detect when the browser is automatically navigated and activate CAPTCHAs. To circumvent this limitation, we use the selenium-stealth3 plugin. (2) We disable the cookie banners by installing the browser extension I don't care about cookies4 and thus reduce the risks of manual interactions to a minimum. (3) To execute Selenium on headless servers, we use a virtual monitor5 and the Google Chrome browser with a pre-configured window size. Footnote 3: [https://pypi.org/project/selenium-stealth/](https://pypi.org/project/selenium-stealth/) Footnote 4: [https://addons.mozilla.org/en-US/firefox/addon/](https://addons.mozilla.org/en-US/firefox/addon/) Footnote 5: [https://github.com/google/virtualenv/](https://github.com/google/virtualenv/)

### Login Page Discovery: Crawling

Crawling is used by the majority of related work [57, 11, 41, 18, 5] to discover login pages. However, we suggest that this approach suffers from low reliability and a poor cost-benefit ratio. Methodology We configured Scrapy6 to BFS-crawl the Tranco 1k from our ground truth with a depth of 2. We took advantage of Scrapy's built-in LinkExtractor module, which automatically detects and extracts all clickable links on a page. To eliminate FPs, we filtered out third-party links. We followed crawling best practices [6, SS5] like the robots.txt denylist and adaptive request throttling. Footnote 6: [https://scrapy.org/](https://scrapy.org/) Results Our crawling dataset contains 515,855 links from 1k websites, averaging 515 crawled links per site. However, the entire crawling set contains only 146 login pages out of the ground truth (760). Crawling statistics show that 104 (71%) of the login pages are linked on the homepage, while 42 (29%) of them are linked on a subpage.
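For orientation, a minimal Scrapy spider along the lines of the crawl described above might look as follows. The spider name, domain, keyword list, and settings are illustrative placeholders and not the authors' exact configuration.

```python
# Minimal Scrapy sketch of the crawl described above: depth limited to 2,
# robots.txt respected, adaptive throttling enabled, and third-party links
# filtered out via allowed_domains.
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

LOGIN_KEYWORDS = ("login", "log-in", "signin", "sign-in", "account")

class LoginPageSpider(CrawlSpider):
    name = "login_pages"
    allowed_domains = ["example.com"]
    start_urls = ["https://example.com/"]
    custom_settings = {
        "DEPTH_LIMIT": 2,                # limit the crawl depth to 2
        "ROBOTSTXT_OBEY": True,          # crawling best practices
        "AUTOTHROTTLE_ENABLED": True,    # adaptive request throttling
    }
    rules = (
        Rule(LinkExtractor(allow_domains=allowed_domains),
             callback="parse_page", follow=True),
    )

    def parse_page(self, response):
        url = response.url.lower()
        if any(k in url for k in LOGIN_KEYWORDS):
            yield {"login_page_candidate": response.url}
```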
The search engine approach, see SS6.3, detects almost three times more login pages. On top, continuous monitoring requires the crawling to be run on a regular basis. This puts excessive load onto the web servers. Due to the ambitious resource load, the approach does not scale and does not satisfy our research goal.

### Login Page Discovery: Search Engines

Search engines provide benefits out-of-the-box: (1) they already crawl the web with an indefinite depth, (2) they instantly provide up-to-date results, (3) they make the data accessible and searchable via keywords, and (4) they use internal rankings to provide optimized results. Prior SSO tools [11, 18, 10] used search engines but did not systematically evaluate their effectiveness. We answer the following questions: (1) Which search engines return the most login pages? (2) Which search query returns the most login pages? (3) How many ranked search results are required to detect the most login pages? Methodology According to [42], Google (92.01%), Bing (2.96%), Yahoo (1.51%), Baidu (1.17%), YANDEX (1.06%), and DuckDuckGo (0.68%) are the most popular search engines in 2022. Yahoo uses Bing's search index. Baidu and YANDEX primarily target the Chinese or Russian market. Thus, we included Google, Bing, and DuckDuckGo in our scope. We further replaced Google with Startpage, which proxies the Google Search and does not interfere with CAPTCHAs. To eliminate FPs, we require the website and its login page to be on the same site using the site: operator. For each resolved domain, we submitted five different search queries to each engine and stored the top 10 returned search results. We selected appropriate search queries to the best of our knowledge, ranging from simple to more specific ones: (1) reddit.com login site:reddit.com (2) reddit login site:reddit.com (3) login reddit site:+.reddit.com (4) reddit login signin signup register account site:reddit.com (5) site:reddit.com (intitle:"login" OR intitle:"login" OR intitle:"signin" OR intitle:"signin") Results Surprisingly, the most straightforward search query (\(SQ1\)) combined with all engines found the most login pages out of the ground truth (434/760), see Table 3. Although we can compare different engines and queries _with each other_, this number only indicates a _lower boundary_. The ground truth only holds a single login page for each website, while in practice, a website can have multiple login pages. Our goal is to provide continuous monitoring of the SSO landscape once a year on a single-threaded machine (cf. SS6.7). Thus, performance plays an important role. Our analysis takes 289 seconds for each page (cf. Table 4). The analysis of the top 3 search results from all three engines (\(\rightarrow\) 9 in total) would require \(\frac{289\cdot 9\cdot 10{,}000}{60\cdot 60\cdot 24}\approx 301\) days. By using at least 4 parallel workers, we can scale up the landscape evaluation to a quarterly executed scan with a duration of 75 days.

### SSO Detection: Patterns and Keywords

Inspired by previous work [56], we decided to implement a keyword-based analysis. We base our analysis on the extraction of _clickable_ elements on a website like links and buttons, but also elements with JavaScript events. To reduce the set of candidates starting SSO, we search for specific keywords inside the elements' texts and attributes (e.g., _sign in with google_, _login with facebook_). If this does not return valid results, we search for elements including specific IdP names (e.g., _google_, _facebook_, _apple_).
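The keyword-based detection just described can be sketched as follows: clickable elements are collected with Selenium, and their text and attributes are matched against SSO patterns and IdP names, recording coordinates for the later verification click. An already-loaded `driver` is assumed, and the keyword lists are abbreviated examples rather than the tool's full lists.

```python
# Sketch of the keyword-based SSO-button detection on a loaded login page.
from selenium.webdriver.common.by import By

SSO_PATTERNS = ("sign in with google", "login with facebook", "sign in with apple")
IDP_NAMES = ("google", "facebook", "apple")

def sso_button_candidates(driver):
    candidates = []
    clickable = driver.find_elements(
        By.CSS_SELECTOR, "a, button, [onclick], [role='button']")
    for el in clickable:
        text = " ".join(filter(None, [
            el.text or "",
            el.get_attribute("title") or "",
            el.get_attribute("aria-label") or "",
            el.get_attribute("class") or "",
            el.get_attribute("id") or "",
        ])).lower()
        score = 2 if any(p in text for p in SSO_PATTERNS) else \
                1 if any(n in text for n in IDP_NAMES) else 0
        if score:
            rect = el.rect                                   # x, y, width, height
            center = (rect["x"] + rect["width"] / 2,
                      rect["y"] + rect["height"] / 2)
            candidates.append((score, center, text[:80]))
    # strong pattern matches first, IdP-name-only matches second
    return sorted(candidates, reverse=True)
```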
We store all candidates in a list to evaluate them later. The limitations of the keyword-based analysis are: (1) The SSO detection is limited to the defined keywords. Different languages or source code without any of the keywords lead to FNs. (2) SSO buttons containing only the IdP logo without any text lead to FNs. ### _SSO Detection: Images and Logos_ To solve the limitations of the keyword-based approach, we developed an image-based analysis. During our investigations, we discovered that SSO buttons also contain the logo of the corresponding IdP. Thus, we implemented an algorithm for opening a login page candidate, taking a screenshot, highlighting recognized IdP logos, and extracting their coordinates. For the logo recognition, we used a state-of-the-art algorithm that supports pattern matching, is available in Python, and is open source. Thus, we chose the OpenCV algorithm7. In the implementation, we solved the following challenges: logo collection, logo scalability, robustness, and high FP rate. Footnote 7: [https://docs.opencv.org/3.4/d4/dcf6/tutorial_py_template_matching.html](https://docs.opencv.org/3.4/d4/dcf6/tutorial_py_template_matching.html) Logo Collection To establish a set of IdP logos, we carefully analyzed 100 websites and extracted the commonly used logos. In sum, we stored two or three logos for each IdP. This list can be easily extended by storing new logos. Logo Scalability One of the main challenges is the variable size of the logos on the websites since the pattern matching algorithm works only if the sizes are similar. A suitable solution is to scale the website's screenshot with different factors and analyze it repeatedly for each scale factor. We discovered that it is far more efficient to scale the logo instead of the screenshot during our implementation. This approach requires fewer resources, for instance, time, memory, and CPU. Robustness The main challenge is to balance a high recognition rate with performance. This requires us to adjust multiple parameters: the number of pre-configured logos, different logo scales, and detection scores. Based on experiments and optimizations, we reduced the logo set to only used ones. We also reduced the number of scale iterations to a minimum without sacrificing pattern matching results. Since the OpenCV algorithm outputs a value with a matching score, we implemented upper and lower bounds to determine whether SSO is detected. The algorithm stops if a match is above the upper bound due to the high matching confidence. High False Positive Rate On websites without SSO, the algorithm matches areas looking similar to the logos, i.e., \(G\) for Google, \(O\) for Apple, and interestingly \(t\) for Facebook. We eliminate all FPs with a generic approach, see SS6.6. ### _Detection Rates and Performance_ The keyword-based and image-based analyses detect suitable SSO candidates. Still, their FP rates are high. To solve this problem, we designed a reliable and robust verification. Detection Verification For each candidate, we store the coordinates on the login page. During their verification, we automatically navigate the browser to these coordinates and click on the area. If the browser sends the login request to an IdP, we know that SSO with the corresponding IdP is supported. This verification eliminates the FP rate. Storing the coordinates leads to an unexpected advantage regarding the automated execution of SSO. Websites are changing their source code, including the HTML elements, quite often. 
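Returning to the image-based detection described above, a minimal multi-scale template-matching sketch with OpenCV could look like the following: the logo template is rescaled (cheaper than rescaling the screenshot) and matched with normalized cross-correlation, using upper and lower score bounds. File names, scale steps, and thresholds are illustrative placeholders.

```python
# Minimal multi-scale logo matching in the spirit of the image-based detection.
import cv2

UPPER, LOWER = 0.90, 0.75            # matching-score bounds (illustrative)

def find_logo(screenshot_path, logo_path, scales=(0.5, 0.75, 1.0, 1.25, 1.5)):
    page = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    logo = cv2.imread(logo_path, cv2.IMREAD_GRAYSCALE)
    if page is None or logo is None:
        raise FileNotFoundError("screenshot or logo image not found")
    best_score, best_loc, best_scale = 0.0, None, None
    for s in scales:
        templ = cv2.resize(logo, None, fx=s, fy=s, interpolation=cv2.INTER_AREA)
        if templ.shape[0] > page.shape[0] or templ.shape[1] > page.shape[1]:
            continue
        result = cv2.matchTemplate(page, templ, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best_score:
            best_score, best_loc, best_scale = max_val, max_loc, s
        if max_val >= UPPER:          # early exit on a high-confidence match
            break
    if best_score >= LOWER:
        return best_loc, best_scale   # top-left corner of the match and logo scale
    return None

if __name__ == "__main__":
    print(find_logo("login_page.png", "google_logo.png"))
```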
Such frequent changes can make the automated execution via Selenium impossible, since it navigates via element IDs or HTML trees. However, the position of buttons is constant. With the stored coordinates, we can reliably and repeatedly execute SSO. Hybrid Approach We decided to chain both the keyword-based and the image-based SSO detection into a _hybrid_ approach. First, we execute the less resource-intensive keyword-based analysis. If it is not successful, the more resource-intensive image-based approach is triggered. Results In Table IV, we analyze the Tranco 1k sites that have a login page and do not require additional steps to start the SSO. From our ground truth, we expect to recognize 463 SSO logins. The keyword-based approach recognizes 95% of the SSO logins. The image-based approach recognizes 89% of the SSO logins. Areas on the website looking similar to the logos are the main reason for the lower recognition rate. The problem can be solved by marking multiple candidates where the logo _could be_.

\begin{table} \begin{tabular}{l|c|c c c c|c} \hline \hline **Analysis** & **SSO** & \multicolumn{4}{c|}{**Detection Rates**} & **Duration** \\ **Approach** & **Logins** & FPs & FNs & Recognized & Rate & \(t_{avg}\) \\ \hline Keyword & 463 & 0 & 20 & 443 & 95\% & 1:26m \\ Image & 463 & 0 & 52 & 411 & 89\% & 5:16m \\ Hybrid & 463 & 0 & 12 & 451 & 97\% & 4:49m \\ \hline \hline \end{tabular} \end{table} TABLE IV: By combining the keyword-based and the image-based SSO recognition, we achieve a 97% detection rate.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Login Pages found with & \(SQ1\) & \(SQ2\) & \(SQ3\) & \(SQ4\) & \(SQ5\) \\ \hline Startpage & 307 & 329 & 250 & 216 & 150 \\ Bing & 331 & 316 & 258 & 272 & 120 \\ DuckDuckGo & 320 & 315 & 264 & 268 & 132 \\ Startpage \& Bing & 423 & 414 & 320 & 343 & 172 \\ Startpage \& DuckDuckGo & 420 & 416 & 363 & 342 & 179 \\ Bing \& DuckDuckGo & 354 & 340 & 310 & 306 & 144 \\ All Combined (Top 10) & 434 & 431 & 382 & 363 & 182 \\ \hline Top 5 Search Results & 385 (-49) & 386 (-45) & 357 (-25) & 333 (-30) & 163 (-39) \\ Top 3 Search Results & 357 (-7) & 337 (-49) & 335 (-47) & 292 (-71) & 153 (-39) \\ \hline \hline \end{tabular} \end{table} TABLE III: Search Query Evaluation. To find a suitable search query and engine, we compare the detection rate of five queries on three engines against our ground truth.

Considering the resource and time constraints, we decided to address this improvement in future work. By combining both approaches, we achieve a recognition rate of 97%. Interestingly, 6 of 12 FNs are caused by CAPTCHAs on _ebay.com_ and _ebay.de_.

### _Continuous Monitoring_

SSO-Monitor runs multiple scans in sequence to monitor the SSO landscape continuously. It then compares the results and presents their differences. There are two ways to continuously monitor the SSO landscape. Both of them require an initial landscape analysis run, which includes the aggregation of login page candidates. Punctual Monitoring To determine the SSO landscape at a current timestamp, the analyst creates a continuous monitoring job and selects the first scan as a basis. The next scan uses the login page candidates from the first scan. If the supported IdPs change, e.g., an IdP was added or removed, the continuous monitoring job will rerun a full detection of login pages and IdPs. Such scans can be repeated arbitrarily and represent the SSO landscape at a specific time. Constant Monitoring To monitor the SSO landscape constantly, the analyst creates a constant monitoring task.
SSO-Monitor automatically uses free workers to update the SSO landscape continuously. This way, a large corpus of sites can be monitored fully automatically over time. This approach provides an always up-to-date landscape.

### _The Tranco Top 10k SSO Landscape_

Landscape Overview With SSO-Monitor, we ran the SSO detection process on the Tranco 10k list from July to August 2022. We summarize our results in Table V. Out of 10k websites, 1,389 sites (13.9%) were not reachable during our analysis -- most likely due to domain name issues, server errors, or downtime. We excluded these websites from our analysis, leaving a final dataset containing 8,611 sites. In total, we found 1,632 websites (16.3%) offering SSO support with at least one of the three IdPs. Within this group, the most prevalent IdP is Google, which is supported on 1,399 websites (86%), followed by Facebook with 1,150 sites (70%). Apple started its SSO service in 2019 and is now supported by 471 sites (29%). Interestingly, 1,040 out of 1,632 sites (64%) offer support for multiple IdPs (Google+Facebook+Apple: 348; pairs of two IdPs: 576, 86, and 30). Overall, we identified 3,020 SSO logins, out of which 209 could not be further analyzed. Therefore, we excluded them, resulting in a total set of 2,811 SSO logins. Automated SSO Discovery The SSO discovery reveals that OAuth, which is designed for authorization, is still the preferred protocol with 1,944 SSO logins (69%). In comparison, OIDC, which allows authentication, is only used in 864 SSO logins (31%). In total, 590 SSO logins still use the deprecated implicit flow for authentication. Interestingly, Apple does not use implicit flows. They may not support legacy flows because their SSO service was released quite recently. Keyword vs. Image Out of the 3,020 discovered SSO logins, 2,825 were discovered by the keyword-based approach (SS6.4) and 195 were additionally found by the subsequent image analysis (SS6.5). Note that the image analysis is only applied if the keyword-based analysis was not successful. Identification of Unrelated Logins The automated discovery of login pages with search engines produced promising results. However, we discovered a low rate of FPs in the landscape analysis. We used the operator site:domain.com in the search query to ensure that the discovered login pages are on the same (sub)domain of the Tranco entry. Nevertheless, pages can redirect to other domains. For example, the login page photoshop.com/login redirects to auth.services.adobe.com. This redirect can lead to FPs if a site redirects to another site that supports SSO. To better understand the problem, we looked into the first 500 analysis results. We identified that 3% of the sites with SSO belong to a domain different from their original Tranco entry. Most of these cases are redirects to Twitter profiles. Since Twitter supports SSO, it is identified as a login page for that domain. Continuous Monitoring We compared our latest SSO landscape results to a different run performed earlier in April 2022. As shown in Table VI, the comparison shows a tremendous variation in SSO support over four months.
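The comparison behind Table VI essentially amounts to a per-website set difference between two scans. A toy sketch of that bookkeeping, using made-up data:

```python
# Sketch of the continuous-monitoring comparison: given two landscape scans
# (domain -> set of supported IdPs), report IdP logins added or removed between
# the runs. The example data is invented.
def diff_scans(old, new):
    added, removed = {}, {}
    for domain in old.keys() | new.keys():
        before = old.get(domain, set())
        after = new.get(domain, set())
        if after - before:
            added[domain] = after - before
        if before - after:
            removed[domain] = before - after
    return added, removed

april = {"example.com": {"Google", "Facebook"}, "shop.example": {"Google"}}
july  = {"example.com": {"Google", "Apple"},    "news.example": {"Facebook"}}

added, removed = diff_scans(april, july)
print("added:", added)
print("removed:", removed)
```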
\begin{table} \begin{tabular}{r|c c c|c} \hline \hline _April vs. July 2022_ & \multicolumn{3}{c|}{SSO Logins} & Websites \\ & Google & Facebook & Apple & \\ \hline Added & 152 & 144 & 59 & 186 \\ Removed & 158 & 125 & 57 & 166 \\ \hline \hline \end{tabular} \end{table} TABLE VI: SSO-Support Variance in 4 Months. In total, 186 websites added support for at least one provider. E.g., 152 websites added Google login support, while 158 removed it.

\begin{table} \begin{tabular}{l|c c|c c|c c c c} \hline \hline **IdP** & **SSO Logins** & **Broken** & **OAuth** & **OIDC** & **Code** & **Hybrid** & **Implicit** & **N/A** \\ \hline Google & 1,399 & 98 & 667 & 634 & 946 & 43 & 276 & 36 \\ Facebook & 1,150 & 73 & 1,068 & 9 & 586 & 0 & 314 & 177 \\ Apple & 471 & 38 & 212 & 221 & 236 & 197 & 0 & 0 \\ \hline \(\sum\) & 3,020 & 209 & 1,947 & 864 & 1,768 & 240 & 590 & 213 \\ \hline \hline \end{tabular} \end{table} TABLE V: SSO-Monitor automatically found 3,020 SSO logins on the Tranco 10k websites but only 2,811 could be further analyzed due to technical constraints. OAuth (69%) and the code flow (63%) are predominantly used in the wild.

## 7 Automatic Evaluation: SSO Security

### _Methodology_

The SSO security evaluation consists of three steps, see Figure 3, and takes the landscape analysis results as input. The browser preparation script creates a fresh browser profile with an active IdP session and 4 pre-installed browser extensions. We (1) use the _I don't care about cookies_ extension8 to automatically remove cookie banners, (2) develop _postMessage_ and _Fragment_ extensions to make InBCs visible to SSO-Monitor, and (3) develop the _Auto Consent Extension (ACE)_ to automatically grant user consent and automate the IdP login. Footnote 8: [https://www.i-dont-care-about-cookies.eu/](https://www.i-dont-care-about-cookies.eu/) SSO Login Execution For each IdP, we visit the login pages in a Selenium-driven Chrome with the appropriate profile. Next, we use the coordinates from our landscape analysis to click on the SSO button. If clicking the original coordinates does not lead to the expected IdP, we restart the SSO detection. We capture the HTTP traffic and InBC in HAR files to finally analyze them. Due to our browser extensions and the authenticated IdP sessions, this step is fully automated.

### Security Analysis

Test Selection We selected our security tests according to the following criteria to match our methodology:

1. _Passive_ tests that do not involve any active parameter manipulation.
2. Tests detecting faulty behaviour in _Clients_. We do not investigate IdPs.
3. Tests visible in the _front-channel_. Since the back-channel is inaccessible with our approach, we excluded it.

Given the above criteria, we chose the security tests from consolidated security recommendation documents. Lodderstedt et al. [29] from the IETF OAuth working group collaborate with researchers for consolidating current SSO issues and their best practices in a single document. Since the document is mostly specific to OAuth, we extended our test set with the security recommendations section in the OIDC core specification [47]. We implemented tests that detect violations of the requirements addressing the following attacks:
* Obsolete Flows, Access Token Disclosure, Implicit Flow Threats [29, 47]. Test: access_token in front-channel.
* Open Redirect on the Client [29]. Test: Find nested URLs in the redirect_uri using SmartParams.
* CSRF Vulnerability [29, 47] with state, PKCE, nonce. Test: Missing parameters or insufficient entropy identified with SmartParams.
* Secret Leakages [29].
Test: Check HTTP Referer Headers, Tokens in Browser History, or client_secret visible in the front-channel, identified with SmartParams.
* HTTPS only requests [29, 47]. Test: Checking all requests for TLS.
* Authorization Code Injection, Token Substitution, Access Token Injection [29, 47]. Test: Missing parameters (PKCE, nonce, at_hash).
* Token Manufacture/Modification [47]. Test: id_token in front-channel signed with symmetric key.

We added tests to identify the following protocol violations:
* Protocol Mix-Up: The Client starts OAuth but the IdP switches to OIDC.
* Flow Mix-Up: The Client started a flow that does not match the IdP's returned flow.

Although our test set is distinctive, more tests matching our criteria can simply be added because of our methodology. Analysis Methodology We implemented an _HAR-Analyzer_ module to evaluate the recorded HARs. First, it loads the HAR data into an in-memory graph data model by using a standard HAR parser library9. Next, it extracts the semantic information related to OAuth and OIDC, and enriches the model with this information. For locating the relevant login requests and login responses, it scans for data that was sent from a Client to the IdP and contains dedicated SSO parameters, for example, client_id and redirect_uri. Then, HAR-Analyzer investigates all further parameters that the Client website and the IdP exchange for detecting security issues. For this purpose, we used our SmartParams approach that we describe below. Finally, HAR-Analyzer produces a report containing the parameters of the SSO messages, the inferred results, and an aggregation of the relevant data. Footnote 9: [https://github.com/sdstoehr/har-reader](https://github.com/sdstoehr/har-reader) SmartParams We need to inspect structured data for the fully automated security analyses without knowing the exact format. For example, OAuth typically uses a random state parameter to protect against CSRF attacks. However, its randomness can be deeply nested in a structured parameter. The in-depth analysis of this parameter is the core idea of SmartParams and depicted in Figure 4. A SmartParam starts with an initial value, usually a string, and provides a set of decoded or parsed values like URL, WWW Form Encoded, JSON, JWT, and more.

Figure 4: The SmartParams approach parses all HTTP parameters until no further data structure can be extracted. Therefore, we can identify deeply nested values in HTTP parameters, so that further security analyses are possible.

Another SmartParam object is created for each decoded value, forming a searchable tree. The SmartParams processing step is applied to all gathered requests and responses. Furthermore, the collected SSO parameters are the foundation for further security analyses.

### _The Tranco Top 10K Security Results_

In total, we analyzed 131 GB of HAR files. It shows that 282 (10%) of the SSO executions failed. We manually investigated these cases and figured out that 219 (8%) are caused by faulty Client configurations, e.g., unregistered redirect_uris. The remaining 63 (2%) Clients failed to execute SSO for other reasons; due to the long-running scanning process, these sites may not have been reachable at the time of execution. We define six security threats systematically in two categories: potential security issues and vulnerabilities. Potential security issues comprise four misconfigurations violating either the specification or the security best practices. These misconfigurations do not necessarily mean that the implementation is vulnerable. They are a clear indicator that security analysts should conduct further investigations. The second category defines two vulnerabilities, which are considered critical: CSRF and secret leakage. Even though the specification and previous research address CSRF vulnerabilities clearly, the number of found CSRF vulnerabilities is surprisingly high.
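A much-simplified re-implementation of the SmartParams idea is sketched below: a value is repeatedly decoded (JSON, JWT-like base64 segments, form and percent encoding) into a tree, and a crude length/variety check on the leaves stands in for the entropy estimate used to judge CSRF protection. SSO-Monitor's actual decoder supports more formats; the example value and thresholds are made up.

```python
# Simplified SmartParams-style decoder: recursively parse a parameter value,
# collect all nested values, then run checks on the leaves.
import base64
import json
from urllib.parse import parse_qsl, unquote

def _b64url(s):
    s = s.replace("-", "+").replace("_", "/")
    return base64.b64decode(s + "=" * (-len(s) % 4)).decode("utf-8")

def decode_once(value):
    """Return (kind, inner_value) pairs that can be extracted from `value`."""
    found = []
    try:
        obj = json.loads(value)                       # JSON object
        if isinstance(obj, dict):
            found += [("json", v) for v in obj.values() if isinstance(v, str)]
    except ValueError:
        pass
    if value.count(".") == 2:                         # JWT-like: header.payload.signature
        for part in value.split(".")[:2]:
            try:
                found.append(("jwt", _b64url(part)))
            except Exception:
                pass
    if "=" in value and "{" not in value:             # form / query encoding
        found += [("form", v) for _, v in parse_qsl(value) if v]
    if "%" in value and unquote(value) != value:      # percent encoding
        found.append(("urlenc", unquote(value)))
    return found

def smart_param(value, depth=0, max_depth=5):
    node = {"value": value, "children": []}
    if depth < max_depth:
        for kind, inner in decode_once(value):
            child = smart_param(inner, depth + 1, max_depth)
            child["kind"] = kind
            node["children"].append(child)
    return node

def leaves(node):
    if not node["children"]:
        yield node["value"]
    for child in node["children"]:
        yield from leaves(child)

def looks_random(value, min_len=20):
    """Very rough stand-in for an entropy check on a potential CSRF token."""
    return len(value) >= min_len and len(set(value)) >= 10

state = "eyJjc3JmIjoiYWJjMTIzIn0.eyJyZXR1cm4iOiIvaG9tZSJ9.sig"   # made-up nested value
tree = smart_param(state)
print("nested leaves:", list(leaves(tree)))
print("CSRF protection looks sufficient:", any(looks_random(v) for v in leaves(tree)))
```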
They are, however, a clear indicator that security analysts should investigate further. The second category defines two vulnerabilities, which are considered critical: CSRF and secret leakage. Even though the specification and previous research address CSRF vulnerabilities clearly, the number of CSRF vulnerabilities we found is surprisingly high. If a Client leaks its client_secret, _Client Impersonation_ attacks can be executed [27, Section 5.2.3], [56]. In addition, the id_token should not be signed with symmetric cryptography when the implicit flow is used [47, Section 10.1], because the Client cannot verify the token without leaking the key. In Table VII, one can see that only 5 Clients leak secrets. We did not discover any id_tokens using symmetric cryptography. This is explained by the fact that the IdPs always sign the id_token with RSA or elliptic-curve keys. Advanced Security Countermeasures The IETF aims to reduce the risks against attacks to a minimum. In addition to the standardization of security considerations [27] and security best practices [28], the IETF developed protection mechanisms applied as extensions on top of OAuth and OIDC. Such protection mechanisms are PKCE [39], Demonstrating Proof-of-Possession (DPoP) [9], and mTLS [4]. While PKCE can be detected by analyzing the network traffic, DPoP and mTLS are executed only in the back-channel communication between the Client and the IdP. Although it was originally designed for native applications, PKCE can be used to prevent authorization code injection attacks [29]. Our analysis discovered 23 Clients implementing PKCE (14, 3, and 6, split across the three IdPs). Additionally, OIDC defines the nonce parameter and therefore provides another way to prevent authorization code injection attacks. However, the usage of the nonce parameter in general is quite low. In total, 493 Clients requested an id_token explicitly in the response_type but only 42 of them created a related nonce parameter. Also, none of the Clients requesting an id_token use PKCE. Similar to the code, the access_token can also be a target of an injection attack. Unfortunately, there are no ways to detect such an attack on the OAuth protocol level. However, OIDC benefits from the id_token, which contains the at_hash. Hence, the access_token can be validated at usage time. Of the 39 Clients which requested both an id_token and an access_token, all id_tokens included an at_hash. HTTPS Only Requests As stated in the OAuth security best practices [28], authorization responses must not be transmitted over unencrypted network connections. However, we found 14 Service Providers (SPs) which try to set an unencrypted plain http redirect_uri (5 and 9, split across two IdPs). Additionally, 4 of them received a valid authorization response. Interestingly, Facebook blocks plain http redirect_uris, while all authorization responses were sent by Google. The only exception for using unencrypted connections is native clients that use loopback interface redirection. None of the aforementioned cases apply to this exception. When looking at the whole http traffic, 136 SPs try to send unencrypted http requests, of which 32 also get a valid response. ## 8 Automatic Evaluation: SSO Privacy ### _Methodology_ The SSO privacy evaluation consists of three steps as depicted in Figure 3: (1) For each automated privacy analysis conducted by SSO-Monitor, we use a script to create a fresh browser profile with an active IdP session. We created profiles for Google, Facebook, and Apple.
(2) We use the browser profile from step (1) to load each website from the Tranco 10k list supporting the top three IdPs in a Chrome browser. Again, we use Selenium for automation and to capture traffic in HAR files. (3) Our HAR Analyzer module automatically analyzed the HAR files created in step (2) for possible privacy leaks. Scripted Browser Preparation The scripted browser preparation consists of two parts. First, we reused the three profiles we used during the security analyses. This reusing is necessary since we can be confident that each IdP-account was used previously to sign in to a certain Client. Without this reusing, we would not know whether the account provided consent for this particular Client to the IdP. We call them the _consent-given profiles_. Second, we create one new browser profile per IdP using a fresh IdP-account. With this, we can be confident that these accounts have _never_ logged into the Client before. Thereby, no consent is given and we call them _no-consent profiles_. Note that the profiles only contain IdP-related cookies. Therefore, we ensure that a user is logged out on all Clients. In summary, our privacy analyses use, thereby, six different browser profiles. Client Website Visit For each website that our landscape analyses provides, we start visiting them. For each supported IdP on the website, we open the start page11 twice with the corresponding browser profiles. The first time with the _consent-given profile_, the second time with the _no-consent profile_. In both cases, we interact with the website by clicking on random links which do not claim to require or start any authentication and pressing some keys (e.g., PageUp). In summary, if a website supports all three IdPs (Apple, Google, Facebook), we visit the start page six times, once with each browser profile, and interact with that page. SSO-Monitor records the traffic of each visit. Footnote 11: In contrast to the security analyses, during which we visited the login page. ### _Privacy Analysis_ Test Selection For the selection of privacy tests in SSO, there exists no document summarizing such issues in contrast to security tests. Thereby, our privacy analyses detect whether a website visitor is authenticated in the background, for instance, without explicitly clicking a sign-in button. We additionally search for any identity-related information leaks to the Client. With our _consent-given profile_, the Client can - in theory - log in the user in the background. Our analyses reveal that this is abused in 199 cases. In contrast, the _no-consent profile_ should protect users from this behavior since they never agreed to share their identity with the Client. In both cases, an automatically created login attempt, created by the Client, may allow the IdP to track users secretly. Considering users' privacy, we raise the following research questions: (1) Do websites exchange SSO messages revealing privacy information without users' consent? (2) Do the IdPs provide a sufficient level of protection against _honest but curious Clients_? In our threat model, an honest but curious Client acts according to the protocol and establishes a trust relationship with the IdP. Clients can easily gain this relationship because IdPs support the registration of arbitrary Clients. Leakage Channels A login attempt signals the beginning of the SSO authentication. The Client initiates the protocol and sends a login request to the IdP along with the IdP's session cookie. 
Each message can disclose different private information. Our evaluation considers only messages as a leak if the user has not explicitly started any login. We classify _leakage channels_ in two different categories: login attempt and token exchange. Login Attempt Leak (LAL): Privacy Leak to IdPs The first leakage occurs if the login request is sent without the user actively navigating to the login page at the Client. If the user is authenticated to the IdP, the IdP learns which website the user is currently navigating. Token Exchange Leak (TEL): Privacy Leakage to Clients The second leakage targets the login response. It contains the user's identity, for example, the email. If the IdP issues the login response without any consent, the Client learns the user's identity. Therefore, the user's identity is entirely revealed to both SSO parties. Cookie-based vs. SSO-based Privacy Leaks In contrast to cookie-based privacy leaks, our findings are more invasive. Usually, the user visits a Client and actively decides to click on the sign-in button. After successful authentication, the browser stores the session cookies. If the user does not clear the cookie store, the website can re-identify the user based on the cookies. In SSO privacy evaluation, the user visits the Client and the IdP automatically returns user-identifiable tokens. Even if no cookies for the Client are stored, the website can still identify the user at any time. Consider a user starting the browser for the first time. The user authenticates to an IdP, for example, to synchronize the browser settings. If the user afterward visits the Client, the identity automatically leaks to the Client. We claim our privacy leaks as novel. To our best knowledge, no previous work has investigated such token leaks in an automated manner. ### _The Tranco Top 10K Privacy Results_ We evaluated all websites of the Tranco top 10k list supporting logins with the top three IdPs (Facebook, Google, and Apple) and discovered multiple privacy leakages, which we discuss in this section. In total, we analyzed 135 GB of HAR-Files. We summarize the results in Table VIII. Note that among 3,020 detected SSO logins, 24 privacy analyses are missing as the related websites failed to load during the run. We determined two categories with our pre-generated profiles: no consent given and consent given. Case 1) No-Consent Profiles We identified 200 login requests being sent transparently to the Google and Facebook IdPs. Interestingly, we did not observe automatic login attempts to Apple, possibly because Apple enforces a user to consent on every login attempt. Also, a positive result is that none of the IdPs generate authentication tokens automatically in this category. This is the correct and expected behavior, since the user should consent at least the first time when an authorization by the Client takes place. Case 2) Consent-Given Profiles We observed almost the same amount of login attempts for Google as in the previous category, which is not surprising since the Client does not know a-priory whether a user is authenticated to any IdP. The one missing LAL can be attributed to the client not being available at the time of the scan. Overall, we discovered that in 42 cases, the IdPs automatically generates authentication tokens and send them to the Client in the login response. Thus, the Client observes the user's identity even if the user never consciously started any authentication on the Client. 
Regarding Google, all 22 found TELs are due to Google's _autologin_ feature, where the user automatically gets logged in to the Client. Although this behavior is not desirable for the users' privacy, these cases may be visible to the user. In contrast, Facebook does not transparently show the login process. Our analysis reveals that 20 websites stealthily learn the user's identity by TELs. Hence, it is up to the Client whether it reveals this to the user or not (e.g., via the UI). ## 9 Related Work Apart from the systematization of known SSO tools in Section 3, there is more related work on SSO. We divided prior work into three categories to match SSO-Monitor's architecture. Single Sign-On Landscape In 2020, Alaca and van Oorschot [1] developed a framework to compare protocol designs and evaluate 14 different web SSO systems, but they did not compare implementations. In 2021, Morkonda et al. analyzed the Alexa top 500 per country for five countries [33]. They categorized user data provided by Google, Facebook, Apple, and LinkedIn, and identified that Clients request different data for different IdPs. They analyzed the login request, while we investigated the whole login flow. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & & \multicolumn{2}{c}{**No-Consent Profiles**} & \multicolumn{2}{c}{**Consent-Given Profiles**} \\ IdP & SSO Logins & LAL & TEL & LAL & TEL \\ \hline Google & 1,387 & 161 & 0 & 160 & 22 \\ Facebook & 1,143 & 39 & 0 & 39 & 20 \\ Apple & 466 & 0 & 0 & 0 & 0 \\ \hline \(\Sigma\) & 2,996 & 200 & 0 & 199 & 42 \\ \hline \hline \end{tabular} \end{table} TABLE VIII: We discovered that in 200 of 2,996 (7%) SSO logins, the login is initiated automatically by visiting the Clients’ starting page. On 42 (1%) of them, the IdP automatically sends authentication tokens and reveals the user’s identity. Our leakage findings must be seen as lower bounds. A considerable amount of literature has been published on the security of OAuth and OIDC web implementations in the wild. A significant number of researchers concentrate on classical web vulnerabilities in the SSO context, such as CSRF and XSS [46, 21, 41, 24]. Li and Mitchell [22] investigated specific issues in Google's OIDC implementation. Wang et al. [49] analyzed OAuth protocol implementations on various platforms. Mainka et al. [30] identified issues in official OIDC libraries. In 2020, Sadji et al. [37] provided a survey of OAuth-relevant threats for web clients. In 2021, Saito et al. [38] assessed the implementation of social logins on 500 American websites and compared their results to Japanese websites. They found that 76 websites are susceptible to attacks, mainly caused by faulty implementations or insecure design decisions. In 2021, Liu et al. introduced a new threat for SSO authentication [26]. In comparison to our work, the authors did not concentrate on protocol flaws and implementation mistakes, but on a design issue of the SSO ecosystem: the reuse of abandoned email addresses. Privacy Besides a large corpus of literature concerning SSO security, a considerable amount of research has focused on privacy issues in SSO. Various research groups built new or extended existing SSO schemes to tackle their privacy concerns [8, 15, 12, 55]. These issues range from leaking user-specific parameters up to complete identity revelation. Farooqi et al. [7] investigated how leaked Facebook tokens are abused.
Li and Mitchell [23] systematically analyzed how IdPs can track users' interactions with Clients. Despite previous efforts in improving the privacy of OAuth and OIDC, we still do not see these enhancements being implemented. ## 10 Conclusion & Discussion We conclude with lessons learned and present new directions for future SSO research. Further IdPs Parts of SSO-Monitor, for instance, the landscape analyses, already support LinkedIn, Microsoft, Twitter, and Baidu. However, the integration for security tests requires more complex adaptions. SSO-Monitor needs support for the IdP specific consent pages and the automatic login. Also, the browser profile generation is challenging. Additionally, when running the analyses, each analyzed IdP extends the processing time. Therefore, we decided first to analyze the top three IdPs and provide landscape, security, and privacy insights. Data for more IdP will be provided in the future. Developers vs. Specifications Any deviation from the specification makes an automated analysis searching for SSO patterns hard. As a result, we developed multiple strategies to recognize SSO flows even if they are not compliant to the specification. We also observed a disregard for the security best practices. Security problems that are well-studied and documented still exist. Real-World Security Best Practice Compliance Our research reveals for one more time that existing security best practices are often ignored and not implemented in the Tranco 10k. The question arises of how this situation could be improved. For example, the deployment of TLS on websites has tremendously improved since browsers penalize websites not supporting TLS. Similarly, this would be possible for IdPs. For example, they could drop login requests not following best practices. Future research should concentrate on how secure by default configurations can be deployed more efficiently.
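As a sketch of what such enforcement could look like on the IdP side, the following illustrative Python snippet rejects authorization requests that miss basic best practices (HTTPS redirect_uri, no implicit flow, CSRF protection via state, nonce, or PKCE). It is not part of SSO-Monitor; the endpoint, the rough entropy estimate, and the thresholds are assumptions for illustration only.

```python
import math
from urllib.parse import urlparse, parse_qs

def entropy_bits(value: str) -> float:
    """Rough entropy estimate of a parameter value in bits (illustrative only)."""
    if not value:
        return 0.0
    freq = {c: value.count(c) / len(value) for c in set(value)}
    return -sum(p * math.log2(p) for p in freq.values()) * len(value)

def best_practice_violations(login_request_url: str) -> list[str]:
    """Return reasons why an IdP could drop this authorization request."""
    params = {k: v[0] for k, v in parse_qs(urlparse(login_request_url).query).items()}
    findings = []

    redirect = urlparse(params.get("redirect_uri", ""))
    if redirect.scheme != "https" and redirect.hostname not in ("127.0.0.1", "localhost", "::1"):
        findings.append("redirect_uri is not HTTPS and not a loopback address")

    if "token" in params.get("response_type", ""):
        findings.append("implicit flow requested (access_token in the front-channel)")

    has_pkce = params.get("code_challenge_method") == "S256" and "code_challenge" in params
    state_ok = entropy_bits(params.get("state", "")) >= 64   # threshold is a placeholder
    nonce_ok = entropy_bits(params.get("nonce", "")) >= 64
    if not (has_pkce or state_ok or nonce_ok):
        findings.append("no CSRF protection: missing PKCE and low-entropy/missing state and nonce")

    return findings

# Hypothetical request without state, nonce, or PKCE and with a plain-http redirect_uri.
url = ("https://idp.example/authorize?response_type=code&client_id=abc"
       "&redirect_uri=http://client.example/cb")
print(best_practice_violations(url))
```

A production IdP would of course need stricter parsing, registered-client checks, and a rollout plan similar to the browsers' gradual penalization of non-TLS sites.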
2310.01235
COIN-LIO: Complementary Intensity-Augmented LiDAR Inertial Odometry
We present COIN-LIO, a LiDAR Inertial Odometry pipeline that tightly couples information from LiDAR intensity with geometry-based point cloud registration. The focus of our work is to improve the robustness of LiDAR-inertial odometry in geometrically degenerate scenarios, like tunnels or flat fields. We project LiDAR intensity returns into an intensity image, and propose an image processing pipeline that produces filtered images with improved brightness consistency within the image as well as across different scenes. To effectively leverage intensity as an additional modality, we present a novel feature selection scheme that detects uninformative directions in the point cloud registration and explicitly selects patches with complementary image information. Photometric error minimization in the image patches is then fused with inertial measurements and point-to-plane registration in an iterated Extended Kalman Filter. The proposed approach improves accuracy and robustness on a public dataset. We additionally publish a new dataset, that captures five real-world environments in challenging, geometrically degenerate scenes. By using the additional photometric information, our approach shows drastically improved robustness against geometric degeneracy in environments where all compared baseline approaches fail.
Patrick Pfreundschuh, Helen Oleynikova, Cesar Cadena, Roland Siegwart, Olov Andersson
2023-10-02T14:24:38Z
http://arxiv.org/abs/2310.01235v4
# COIN-LIO: Complementary Intensity-Augmented LiDAR Inertial Odometry ###### Abstract We present COIN-LIO, a LiDAR Inertial Odometry pipeline that tightly couples information from LiDAR intensity with geometry-based point cloud registration. The focus of our work is to improve the robustness of LiDAR-inertial odometry in geometrically degenerate scenarios, like tunnels or flat fields. We project LiDAR intensity returns into an intensity image, and propose an image processing pipeline that produces filtered images with improved brightness consistency within the image as well as across different scenes. To effectively leverage intensity as an additional modality, we present a novel feature selection scheme that detects uninformative directions in the point cloud registration and explicitly selects patches with complementary image information. Photometric error minimization in the image patches is then fused with inertial measurements and point-to-plane registration in an iterated Extended Kalman Filter. The proposed approach improves accuracy and robustness on a public dataset. We additionally publish a new dataset, that captures five real-world environments in challenging, geometrically degenerate scenes. By using the additional photometric information, our approach shows drastically improved robustness against geometric degeneracy in environments where all compared baseline approaches fail. ## I Introduction Recent advances in 3D Light Detection and Ranging (LiDAR) have decreased both the size and price of these sensors, enabling them to be used by a wider range of robots. At the same time, new LiDAR-based state estimation approaches such as FAST-LIO2 [1] have increased the accuracy and robustness while decreasing the computational cost, making 3D LiDAR one of the most popular choices for mobile robot sensors, especially in GNSS-denied environments. However, even these LiDAR-Inertial Odometry (LIO) approaches struggle in geometrically degenerate environments, such as tunnels, flat fields, and planar environments. In most geometrically uninformative scenes in the real world, the texture of the environment still offers some visual information. While other work [2, 3, 4, 5] has focused on fusing camera information with LiDAR to take advantage of this complementary data, this requires additional sensors, accurate extrinsic calibration, and time synchronization. Cameras are also passive sensors that will not work in the absence of ambient light, which limits their applicability. However, in addition to range measurements, modern 3D LiDARs also provide the measured return strength of each reflected point (intensity). For rotating multi-layer LiDARs, this signal can be projected into a dense image, which allows the LiDAR to operate as an active camera without external illumination. Images and point clouds are time-synchronized and the extrinsics are known, which further simplifies their use. These intensity images contain texture information about the environment, which can be used to inform pose estimation in geometrically degenerate environments. Compared to camera images, LiDAR intensity images suffer from poor Signal-to-Noise Ratio, lower resolution, strong rolling shutter effects (as a rotating LiDAR spins much slower than a camera sensor exposes), and a different projection model from traditional pinhole cameras, making it difficult to directly apply visual odometry methods. 
We present COIN-LIO, a robust, real-time intensity-augmented LIO framework that couples geometric registration with photometric error minimization for improved robustness. We improve upon related work by introducing a filtering method to improve brightness consistency in the intensity image and an intensity feature selection scheme that complements geometrically degenerate directions. This feature selection is important as geometrically-degenerate parts of the scene (like tunnel edges) are often also visually degenerate. This allows us to vastly increase robustness of the combined method in geometrically degenerate scenarios, while keeping or improving performance in easier cases. We found a lack of 3D LiDAR datasets focusing on scenarios with degenerate geometry. To this end, we created the ENWIDE dataset, which captures five real-world ENvironments WIth large sections of DEgenerate geometry and recorded ground truth positions from a high-accuracy laser scanner. We hope that by providing this data to the community along with our open-sourced code implementation1, we can fuel further advances in robust LiDAR-inertial odometry. Footnote 1: Code and dataset will be released after review. The main contributions of our work are: _(1)_ we show that our approach, which effectively leverages LiDAR intensity, improves robustness and performance of LIO in geometrically degenerate scenes, _(2)_ we propose a LiDAR-intensity image processing pipeline as well as a geometrically complementary feature selection scheme that allows us to detect and track salient features with complementary information to the geometry-based measurements, _(3)_ we provide a real-world dataset, ENWIDE, that contains ten sequences in five scenes of diverse geometrically degenerate environments, with accurate position ground truth. We present our contributions in a combined system with geometry-based LIO, based on FAST-LIO2 [1], and show superior performance on a standard dataset and ENWIDE, over geometry-only and geometry-and-intensity-based methods. Fig. 1: _Top:_ Accumulated point cloud colorized by intensity and trajectory (orange) resulting from COIN-LIO. Our approach achieves accurate odometry despite geometric degeneracy along the tunnel, resulting in clearly visible correct ground and wall markings. _Mid:_ Filtered intensity with tracked features (orange). _Bottom:_ Top view of the resulting point cloud (gray) and trajectory (orange) from the tunnel. ## II Related Work ### _LiDAR (Inertial) Odometry_ Common LiDAR-based odometry approaches are based on registration of a measured point cloud against a (sub)map that is built during operation. It is not computationally feasible to register and map all points that modern 3D LiDARs produce online. For many years, the standard approach for LiDAR Odometry (LO) was LOAM [6], which extracts points on edges and planes for registration. This works well in structure-rich environments, but such edge and plane points are often not expressive enough to perform robustly in geometrically challenging scenarios. KISS-ICP [7] avoids feature selection and directly registers a voxel-downsampled point cloud with point-to-point ICP, which showed improved performance in unstructured environments. X-ICP [8] explicitly detects degenerate directions in the registration, but relies on an auxiliary state estimate.
The additional use of inertial measurements in LIO approaches has shown a large increase in robustness, as it allows accurately undistorting the point cloud from ego-motion and provides an initial guess for the registration. LIO-SAM [9] fuses Inertial Measurement Unit (IMU) measurements in a factor-graph [10, 11] with edge and plane feature matching against submaps. FAST-LIO [12] presents an efficient formulation of the Kalman Filter update that allows for alignment of every scan against the continuously built map in real-time. The authors switch from feature matching to raw points with point-to-plane ICP in its successor [1] that achieves state-of-the-art performance, which we base our approach on. ### _Intensity Assisted Odometry_ Several approaches use intensity as a similarity metric and integrate it into a weighted ICP [13, 14] or use high-intensity points as an additional feature class [15, 16, 17]. These approaches cannot capture fine-grained details due to their lower map resolution. The approaches presented in [18, 19] detect and match image features in the intensity image and only use the corresponding points for registration. However, in geometrically degenerate cases such features are often sparse and therefore most of the geometric information is neglected which can result in inferior performance. In the approaches above, the intensity only influences the point correspondences, but does not directly provide a gradient in the optimization. Differently, in MD-SLAM [20], the photometric error of the intensity image is optimized together with a range and normal image; however they do not use the IMU or perform motion undistortion and evalute the entire dense image instead of sparse informative patches as in our work. The approach closest to our work is RI-LIO [21]. Similar to us, they integrate photometric error minimization into the iterated Extended Kalman Filter (iEKF) [22] of [1], but use reflectivity instead of intensity. They randomly downsample the point cloud and project single points stored in a downsampled map into the reflectivity image for the photometric components. However, relevant information is typically not distributed homogeneously in images but concentrated in specific salient regions. Instead of single random pixels at a low resolution, we specifically select geometrically complementary, salient high-resolution patches from a filtered image and continuously asses feature validity. This leads to superior performance in difficult geometrically-deficient scenarios compared to existing approaches. ## III Method COIN-LIO adopts the tightly-coupled iterated Extended Kalman Filter presented in FAST-LIO2 for the point-to-plane registration and extends it using photometric error minimization. However, our approach could be applied to other LIO frameworks as well. Due to space limitations we do not review FAST-LIO2, but refer the reader to their works [1, 12] and focus on the photometric component instead. We process intensity images from point clouds using a novel filter that improves brightness consistency and reduced sensor artifacts. We specifically select image features that provide information in uninformative directions of the point cloud geometry. The feature management module examines the validity of tracked features and detects occlusions. Finally, we integrate the photometric residual into the Kalman Filter. ### _Definitions_ We define a fixed global frame (\(G\)) at the initial pose of the IMU (\(I\)). 
The transformation from LiDAR frame (\(L\)) to IMU frame is assumed to be known as \(\mathbf{T}_{IL}=(\mathbf{R}_{IL},\mathbf{p}_{IL})\in SE(3)\). We define the robot's state as \(\mathbf{x}=[\mathbf{R}_{GI},\,\mathbf{p}_{GI},\,\mathbf{v},\,\mathbf{b}^{a},\,\mathbf{b}^{g},\,{}_{G}\mathbf{g}]\), where \(\mathbf{R}\in SO(3)\) denotes orientation, \(\mathbf{p}\in\mathbb{R}^{3}\) is the position, \(\mathbf{v}\in\mathbb{R}^{3}\) describes linear velocity, \(\mathbf{b}^{a},\mathbf{b}^{g}\in\mathbb{R}^{3}\) indicate accelerometer and gyro biases, and \({}_{G}\mathbf{g}\in\mathbb{R}^{3}\) is the gravity vector expressed in \(G\). Each LiDAR scan consists of points recorded during one full revolution \(\mathcal{P}=\{{}_{L_{j}}\mathbf{p}_{j},\,j=1,...,k\}\), with \(t_{j}\leq t_{k}\). ### _IMU Prediction and point cloud undistortion_ We adopt the Kalman Filter prediction step according to FAST-LIO2 [1] by propagating the state using IMU measurement integration from \(t_{j}\) to \(t_{k}\). Similarly, we calculate the ego-motion compensated, undistorted points at the latest timestamp \(t_{k}\) as \({}_{L_{k}}\mathbf{p}_{j}=\mathbf{T}_{L_{k}I_{k}}\mathbf{T}_{I_{k}I_{j}}\mathbf{T}_{I_{j}L_{j}}\,{}_{L_{j}}\mathbf{p}_{j}\). Fig. 2: Projection model. The offset between LiDAR origin and laser emitter is denoted as \(r\). A measured point is depicted on the top right (\(p\)). ### _Image Projection Model_ We project \({}_{L_{j}}\mathbf{p}_{j}=[x_{j},y_{j},z_{j}]\) to image coordinates using: \[{}_{C}\mathbf{p}_{j}=\Pi({}_{L_{j}}\mathbf{p}_{j})=\begin{bmatrix}f_{x}\phi+c_{x}\\ f_{y}\theta+c_{y}\end{bmatrix}=\begin{bmatrix}f_{x}\arctan(\frac{y_{j}}{x_{j}})+c_{x}\\ f_{y}\arcsin(\frac{z_{j}}{R_{j}})+c_{y}\end{bmatrix}=\begin{bmatrix}u_{j}\\ v_{j}\end{bmatrix} \tag{1}\] where \(L_{j}=\sqrt{x_{j}^{2}+y_{j}^{2}}-r\), \(R_{j}=\sqrt{L_{j}^{2}+z_{j}^{2}}\), \(f_{x}=\frac{-w}{2\pi}\), \(f_{y}=\frac{-h}{\Theta_{fov}}\) with \(\Theta_{fov}\) the vertical field of view, as illustrated in Figure 2, and \(w\) and \(h\) denote horizontal and vertical resolution of the LiDAR. Due to irregular vertical beam spacing in the LiDAR, this results in empty pixels as outlined in [21]. Thus, we directly create the image from laser beam and encoder value and compensate the horizontal offset similar to [21], but use a constant value for all beams. We keep a list of all beam-elevation angles \(\Theta_{L}=\{\theta_{1},...,\theta_{h}\}\) from the calibration of the LiDAR. When we project a feature point into the image, we calculate \(\theta_{f}\) and find the beams above and below in \(\Theta_{L}\) to interpolate the subpixel coordinate. ### _Image Processing_ The irregular vertical beam spacing causes horizontal line artifacts in the intensity image. They are less apparent in structure-rich scenes, but dominate the image in environments with little structure. As they occur at a regular row frequency, we design a finite impulse response filter to remove them. First, we use a highpass filter vertically with cutoff just below the line frequency. Apart from the lines, the output also contains relevant image content at this frequency. We therefore apply a lowpass filter horizontally, which isolates the lines, as relevant image signals appear at a higher horizontal frequency. Finally, we subtract the isolated signal from the intensity image. Intensity values depend on the reflectivity of the surface as well as the distance and incidence angle. The intensity is thus lower in areas that are farther away from the sensor.
LiDARs such as the Ouster also report compensated reflectivity signals, which is used in [21], but the influence of the incidence angle remains. We therefore propose a different approach to achieve consistent brightness throughout the image. The brightness level varies smoothly throughout the image, as average distance and incidence angle are typically driven by the global structure of the scene instead of small geometric details. We thus build a brightness map \(I_{\!b}(u,v)\) by averaging the intensity values in a large window. To achieve consistent exposure throughout the image, we calculate the filtered pixel values using the brightness map values: \[I_{F}(u,v)=200\cdot\frac{I(u,v)}{I_{\!b}(u,v)+1} \tag{2}\] Finally, we smooth the image using a 3x3 Gaussian kernel to reduce noise. We provide explanatory images in Figure 4. ### _Geometrically Complementary Patch Selection_ We select and track \(5\times 5\) pixel patches which has shown better convergence compared to single pixels [23]. In contrast to prior works that select features randomly [21] or based on visual feature detectors [18, 19], we follow an approach inspired by [24]. We select candidate pixels with an image gradient magnitude above a threshold and perform a radius based non-maximum suppression. This approach does not rely on corner features which is favourable for low-texture images. Candidate pixels are mostly detected on shape discontinuities in the 3D scene such as edges and corners, or on changes in surface reflectivity, e.g. from ground markings or vegetation. The information from Jacobians from pixels on shape discontinuities often overlaps with the information that is already captured in the point-to-normal registration. We thus aim to select candidates that give additional information to efficiently leverage the multi-modality. Fig. 4: (1): The intensity image is over- (center) and under-exposed (sides). (2): Our filtered image has consistent brightness across the image. (3) & (4): Detail views from a grass field (3) and tunnel (4). The reflectivity image is under-exposed and does not show the ground markings (4). The intensity suffers from strong line artifacts that dominate the texture (3). Our filter removes the line artifacts (Intensity w/o). Our brightness compensation produces consistent exposure and shows details at larger range (ground markings in (4), grass texture in (3)). Fig. 3: System Overview: The input point cloud is used geometrically (green) for map registration and as a projected image (blue) for photometric error minimization. Both residuals are combined in an iterated update (orange). We use the registration Jacobian to find uninformative directions in the geometry and select features with complementary image information (right bottom). Lines indicate information flow _before_ (- - -) and _after_ (—) the update step. To detect uninformative directions in the point cloud registration, we follow the information analysis presented in X-ICP. We calculate the principal components of the Hessian \(\mathbf{H}^{geoT}\mathbf{H}^{geo}\) of the point-to-plane terms. A direction is then detected as uninformative if the accumulated filtered contribution is below a threshold. We refer the reader to [8] for more details. We analyze the translational components and denote the set of uninformative directions \(V_{t}\). If all directions are informative, we insert vectors along the coordinate axes to promote equally distributed gradients. 
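As a concrete illustration of this localizability analysis, the numpy sketch below approximates the translational part: for each principal direction of \(\mathbf{H}^{geo^{T}}\mathbf{H}^{geo}\) it accumulates the contributions of well-aligned plane normals and flags directions whose accumulated contribution falls below a threshold. The thresholds and the filtering rule are simplified placeholders, not the exact criteria of X-ICP [8] or of this pipeline.

```python
import numpy as np

def uninformative_directions(normals: np.ndarray,
                             contrib_thresh: float = 100.0,
                             align_thresh: float = 0.9) -> list[np.ndarray]:
    """normals: (N, 3) unit plane normals of the matched points, i.e. the
    translational rows of the point-to-plane Jacobian. Returns unit vectors
    along which the registration is poorly constrained."""
    H = normals.T @ normals                      # 3x3 translational Hessian
    _, eigvecs = np.linalg.eigh(H)               # principal constraint directions
    weak = []
    for i in range(3):
        v = eigvecs[:, i]
        align = np.abs(normals @ v)              # how well each normal constrains v
        contribution = np.sum(align[align > align_thresh])
        if contribution < contrib_thresh:        # accumulated filtered contribution
            weak.append(v)
    return weak

# Toy example: almost all normals point along z, as on a flat field,
# so the two horizontal directions are reported as uninformative.
rng = np.random.default_rng(0)
n = np.tile(np.array([0.0, 0.0, 1.0]), (500, 1)) + 0.01 * rng.standard_normal((500, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
print(uninformative_directions(n))
```

The returned directions play the role of \(V_{t}\) in the patch-selection step described next.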
We calculate the second image moment \(M\) [25] and use its strongest eigenvector \(\mathbf{v}_{patch}\) to approximate the patch gradient, which is more stable than pixel gradients. We then calculate how the projected image coordinate changes if the point is perturbed along a direction, using the projection Jacobian that also appears in (7): \[\mathbf{d}_{p_{i}}=\frac{\partial\Pi({}_{L_{j}}\mathbf{p}_{j})}{\partial\,{}_{L_{j}}\mathbf{p}_{j}}\cdot\mathbf{v}_{t,i}\in\mathbb{R}^{2},\,\forall\,\mathbf{v}_{t,i}\in V_{t} \tag{3}\] We select features where shifting the point along an uninformative 3D direction results in a 2D coordinate shift in an informative image direction. We therefore project the projection gradient \(\mathbf{d}_{p_{i}}\) onto the informative direction \(\mathbf{v}_{patch}\) of the patch to calculate its directional contribution \(c_{i}\). As the magnitude of the projection gradient increases with decreasing range, which would favour the selection of points close to the sensor, we use the normalized gradient instead: \[c_{i}=\frac{\mathbf{d}_{p_{i}}\cdot\mathbf{v}_{patch}}{||\mathbf{d}_{p_{i}}||} \tag{4}\] For each direction in \(V_{t}\), we select the patches with the strongest contribution. We visualize the results in Figure 5. ### _Feature Management_ We initialize each point in a patch separately at its global position using the current pose estimate and store the corresponding filtered intensity value. Differently from visual odometry approaches [23], where one position is assigned to the whole patch, this allows us to project each point in the patch separately. Using high-resolution patches, we can capture fine-grained details, in contrast to prior works [14, 21] which only store a single value per voxel-grid cell. To keep the computational load bounded, we limit the number of tracked patches. After each update step, we assess the feature patch validity. To detect occlusions, we compare the predicted and measured range for each point in the patch and discard all points in it if the difference is above a threshold. We also remove patches if the measured range is below a minimum or above a maximum range. Additionally, we calculate the normalized cross correlation (NCC) between a tracked and measured patch and remove it if the NCC is below a threshold. We only track features over a maximum number of frames to reduce error accumulation and to encourage initialization of new features. We avoid overlapping features and improve feature distribution by enforcing a minimum distance between new and tracked features. ### _Photometric Residual & Kalman Update_ We minimize photometric errors between tracked and currently observed points. The error is computed by projecting tracked points into the current image and comparing current intensity values to the patch: \[z^{pho}=I_{c}(\Pi({}_{L_{j}}\mathbf{p}_{f}))-i_{f} \tag{5}\] As rotating LiDARs record individual points sequentially, the pixels inside the intensity image are measured at different times and different poses. We project the position of the tracked point in the LiDAR frame \(L_{j}\) at the time of the corresponding pixel: \[{}_{L_{j}}\mathbf{p}_{f}=\mathbf{T}_{L_{j}I_{j}}\mathbf{T}_{I_{j}I_{k}}\mathbf{T}_{I_{k}G}\,{}_{G}\mathbf{p}_{f} \tag{6}\] However, this is dependent on \(\mathbf{T}_{I_{j}I_{k}}\), which in turn depends on the unknown time \(t_{j}\) itself. RI-LIO solves this by using a kNN-search in a kD-tree. However, this is only computationally feasible at a low resolution. We thus propose a projection-based solution.
Given the undistorted point cloud, we can approximate which areas in the environment were captured at which timestamp. Therefore, we build an undistortion image by projecting the undistorted point cloud into an image and assigning each pixel the index of the corresponding point: \(\mathcal{U}(\Pi({}_{L_{k}}\mathbf{p}_{j}))=j\). To find the corresponding index for the feature point, we then project it to the undistortion map, which is drastically cheaper than kD-tree search and thus applicable to the full-resolution point cloud. Given the index, we find the respective timestamp and \(\mathbf{T}_{I_{j}I_{k}}\) to calculate (6) and finally (5). The resulting Jacobian \(\mathbf{H}^{pho}\) is calculated as: \[\mathbf{H}_{j}^{pho}=\frac{\partial\mathcal{I}[{}_{C}\mathbf{p}_{j}]}{\partial\,{}_{C}\mathbf{p}_{j}}\cdot\frac{\partial\Pi({}_{L_{j}}\mathbf{p}_{j})}{\partial\,{}_{L_{j}}\mathbf{p}_{j}}\cdot\frac{\partial\,{}_{L_{j}}\mathbf{p}_{j}}{\partial\tilde{\mathbf{x}}} \tag{7}\] \[\frac{\partial\Pi({}_{L_{j}}\mathbf{p}_{j})}{\partial\,{}_{L_{j}}\mathbf{p}_{j}}=\begin{bmatrix}\frac{-f_{x}y}{x^{2}+y^{2}}&\frac{f_{x}x}{x^{2}+y^{2}}&0\\ \frac{-f_{y}xz}{LR^{2}}&\frac{-f_{y}yz}{LR^{2}}&\frac{f_{y}L}{R^{2}}\end{bmatrix} \tag{8}\] ## IV Experimental Results We quantitatively compare our proposed pipeline with several state-of-the-art approaches as baselines: KISS-ICP [7], LIO-SAM [9] and FAST-LIO2 [1] represent widely used LO and LIO algorithms. Similar to our approach, MD-SLAM [20], Du [19] and RI-LIO [21] also use intensity or reflectivity information. We use the Newer College Dataset [26] as a public baseline. To evaluate robustness in low-structured environments, we additionally provide and evaluate on a new dataset of geometrically degenerate scenes that is presented in Section IV-A. We calculated the absolute translational error (ATE) and the relative translational error (RTE) over segments of 10m using the evo library [27]. We declare approaches with an RTE that is larger than \(20\%\) as failed (indicated by \(\times\)) and do not report their ATE, as the required alignment between estimate and ground truth trajectories is not meaningful if the estimated trajectory differs too much from the ground truth. Apart from sensor extrinsics, calibrations and minimum range (to adapt for narrow scenes), we used the default parameters that were provided by the baseline approaches. We slightly increased the reflectivity covariance parameter in RI-LIO, as the default value caused divergence in all tested sequences. ### _ENWIDE Dataset_ As geometrically degenerate environments are barely represented in existing open-sourced datasets, we created a new dataset with long segments of real-world geometric degeneracy (Fig. 6). Using a hand-held Ouster OS0 128-beam LiDAR with integrated IMU, we recorded five distinct environments: Tunnel (urban, indoor), Intersection (urban, outdoor), Runway (outdoor, urban), Field (outdoor, nature), Katzensee (outdoor, nature). All sequences contain long sections of geometric degeneracy, but start and end in well-constrained areas. Tunnel/Intersection/Runway sequences contain strong intensity features, Katzensee/Field contain few salient features. For each environment we provide one smooth (walking, slow motions, e.g. Field**S**) and one dynamic (running, aggressive motions, e.g. Field**D**) sequence. Ground truth positions were recorded from a Leica MS60 station at 20 Hz (interpolated to 200 Hz) with approximately 3 cm accuracy.
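Before turning to the results, the core of the complementary feature selection can be summarized in a short numerical sketch tying together Eqs. (1), (3), (4) and (8). This is a simplified illustration in numpy, not the released implementation: it neglects the emitter offset \(r\) and the subpixel beam interpolation, the image resolution and vertical field of view are placeholder values, and the (u, v) convention for the patch direction is an assumption.

```python
import numpy as np

# Placeholder intrinsics for a 1024 x 128 projection with ~90 deg vertical FoV.
W, H_IMG, FOV_V = 1024, 128, np.deg2rad(90.0)
FX, FY = -W / (2.0 * np.pi), -H_IMG / FOV_V

def projection_jacobian(p: np.ndarray) -> np.ndarray:
    """d(u, v)/d(x, y, z) of the spherical projection, cf. Eq. (8).
    The emitter offset r is neglected here for simplicity."""
    x, y, z = p
    L2 = x * x + y * y
    L = np.sqrt(L2)
    R2 = L2 + z * z
    return np.array([
        [-FX * y / L2,            FX * x / L2,            0.0],
        [-FY * x * z / (L * R2), -FY * y * z / (L * R2),  FY * L / R2],
    ])

def directional_contribution(p: np.ndarray, v_t: np.ndarray, v_patch: np.ndarray) -> float:
    """Eqs. (3)-(4): how strongly a patch at 3D point p constrains the
    uninformative direction v_t, given its dominant image gradient v_patch."""
    d_p = projection_jacobian(p) @ v_t                      # Eq. (3)
    norm = np.linalg.norm(d_p)
    return 0.0 if norm < 1e-9 else float(d_p @ v_patch / norm)  # Eq. (4)

# A point on a tunnel wall, 2 m to the side and 10 m ahead (x is the tunnel axis):
# a purely vertical image gradient contributes nothing along x, a horizontal one fully.
p = np.array([10.0, 2.0, 0.0])
v_t = np.array([1.0, 0.0, 0.0])
print(directional_contribution(p, v_t, np.array([0.0, 1.0])),   # -> 0.0
      directional_contribution(p, v_t, np.array([1.0, 0.0])))   # -> 1.0
```

Patches whose dominant image gradient maps onto the weak geometric direction score close to one and are therefore preferred, which is exactly the behavior visualized in Figure 5.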
### _Newer College Results_ The Newer College Dataset [26] uses a hand-held 128-beam Ouster OS0. We present the results in Table I. In the _Cloister_ sequence, which contains large structures and slow motions, all tested approaches achieve a low ATE. In _Quad-Hard_, aggressive rotations occur. Due to the absence of an ego-motion compensation, the LO approaches perform worst. Our approach achieves the lowest ATE, which confirms that our computationally-cheap image motion-compensation method is effective. The _Stairs_ sequence causes several approaches to diverge, as they use heavy spatial-downsampling of the point cloud to achieve real-time performance, which in the case of this narrow stairway removes too much information. While our approach uses the same downsampling for the geometric part, it achieves robust and accurate performance thanks to the photometric component. This unveils an inherent benefit of image-based intensity augmentation: fixed-size patches in the image implicitly capture different amounts of volume depending on the point distance. Thereby, projected images have automatic adaptive resolution at a constant cost, contrary to the increased cost resulting from a higher voxel-resolution necessary to capture the same information. While RI-LIO uses information from reflectivity images, its random feature selection fails to extract salient information and therefore diverges. In contrast, the dense approach in MD-SLAM does not fail, but is outperformed by our approach. We perform slightly better than FAST-LIO2 on the significantly longer but geometry-rich _Park_ dataset, showing that the intensity features can also improve performance in non-degenerate scenarios. We also evaluate our runtime on the _Park_ sequence which is the longest in our experiments. On average, our approach consumes 29.7ms per frame (33 Hz) on an Intel i7-11800H mobile CPU, of which only 6.2ms are spent on the photometric components, which shows that the main computational cost results from the conventional geometric approach. ### _ENWIDE Results_ While our approach showed improved accuracy in Table I, the main motivation behind this work is to leverage intensity to improve robustness of LIO in challenging scenarios. We therefore evaluate on the challenging ENWIDE Dataset, presented in Table II. It is plausible, that KISS-ICP fails in all sequences as it only operates on the (degenerate) point cloud geometry. However, we observe that MD-SLAM and Du, which also leverage the intensity channel, diverge in all sequences too. Both do not use the IMU, unlike LIO approaches, which impacts their ability to handle even short segments of geometric degeneracy or fast rotations. Additionally, Du only uses the images for geometric feature selection, and cannot benefit from additional texture information in the optimization. Despite using reflectivity and IMU, RI-LIO diverged in most sequences, which we discuss below. Due to noise in the IMU measurements and drifting biases, LIO approaches can still fail in longer segments of geometric degeneracy. This is evident in LIO-SAM, which uses curvature-based point cloud features [6]. FAST-LIO2, which operates on points directly, is able to avoid a failure in (_Intersection_, _Field_, _Katzensee_) where the present vegetation still offers some weak information, but exhibits large drift. However, we observe a failure in man-made environments (_Tunnel_, _Rumway_), where the geometry is effectively perfectly degenerate. 
Despite this, our approach achieves robust performance in all tested sequences, by leveraging the com plementary information provided by the photometric error minimization. While we consider the increased robustness in scenarios where prior approaches fail as the main strength of our approach, we also note our higher accuracy than FAST-LIO2 on most successful sequences. The main limitation of our approach is the dependence on specific high-resolution LiDARs that create dense intensity images. ### _Ablation study_ We show the effects of our image processing and feature selection decisions in Table III. We compare the proposed _Filtered_ image with _Intensity_ and _Reflectivity_ images. We also evaluate different feature selection policies by comparing _Random_ (similar to RI-LIO [21]) as well as _Strongest_ image gradient selection with the proposed geometrically _Complementary_ selection. We note lower error from (Intensity, Strongest) than (Reflectivity, Strongest). This seems surprising at first, as the reflectivity value compensates the range dependency of the signal. However, we observed that the reflectivity image contains stronger noise and artifacts and has less consistent brightness across the image. Our proposed image processing (Filtered, Strongest) improves performance in low textured environments (TunnelD, KatzenseeD). There, the line artifacts are more dominant than the actual features from the environment. Additionally, the brightness decreases drastically with increasing range. In contrast, the brightness compensation and line removal of our filtered image allows to use more fine-grained details, e.g. from vegetation or gravel, at larger range. We note that (Intensity, Strongest) marginally outperforms (Filtered, Strongest) on IntersectionS. In this scene, strong image features from road cracks are consistently found at short range. We believe that the slightly lower ATE results the filtered image having lower contrast than the intensity image at short range in this scene, which results in weaker gradients. Selecting features based on strong image gradients (Filtered, Strong) results in better performance compared to random patches (Filtered, Random), as they provide richer information for the filter updates. Our proposed feature selection scheme (Filtered, Complementary) achieves the highest performance, as it reduces redundant information along uninformative geometric directions and specifically selects informative image patches. It can be noted that its impact is strongest in TunnelD, where most strong gradients are in the geometrically degenerate direction along the tunnel (see Figure 5, while they are more randomly oriented in the other scenes. Overall, the ablation experiments confirm that COIN-LIO is able to effectively leverage the additional information provided by the multi-modality of the approach. ## V Conclusion This work proposed COIN-LIO, a framework that tightly fuses photometric error minimization for improved robustness in geometrically degenerate environments. We presented an image pipeline to produce brightness compensated intensity images that provide more fine-grained details at larger range and consistent illumination across different environments. Our novel feature selection scheme effectively leverages the multi-modality by providing additional instead of redundant information. 
Our approach slightly outperforms baseline approaches on the geometry-rich Newer College Dataset, and shows drastically increased robustness in our geometrically-degenerate ENWIDE dataset, which enables benchmarking LIO in previously underrepresented scenarios. We believe that this dataset as well as our work serve as a motivation for a new line of research that shifts from chasing even higher accuracies in geometrically simple cases to improving robustness in challenging environments. We also hope it motivates the industry to further improve the imaging capabilities of LiDAR. Fig. 6: Resulting maps from COIN-LIO on the ENWIDE dataset. Top to bottom: FieldS, IntersectionS, KatzenseeS, RunwayS. Despite long degenerate sections, COIN-LIO produces consistent, sharp maps.
2303.16117
Feature Engineering Methods on Multivariate Time-Series Data for Financial Data Science Competitions
This paper is a work in progress. We are looking for collaborators to provide us financial datasets in Equity/Futures market to conduct more bench-marking studies. The authors have papers employing similar methods applied on the Numerai dataset, which is freely available but obfuscated. We apply different feature engineering methods for time-series to US market price data. The predictive power of models are tested against Numerai-Signals targets.
Thomas Wong, Mauricio Barahona
2023-03-26T00:57:35Z
http://arxiv.org/abs/2303.16117v2
Feature Engineering Methods on Multivariate Time-Series Data for Financial Data Science Competitions + ###### Abstract We apply different feature engineering methods for time-series to US market price data. The predictive power of models are tested against Numerai-Signals targets. Machine Learning, Time-Series Prediction, ## 1 Introduction Financial data are often available in the form of time series. These time series are often highly dimensional with complex relationships between them. The complexity of financial data can be demonstrated in different aspects. Firstly, training data are often limited and the number of features that researchers can create is often much greater than the number of observations. In some research, such as [1], the ratio of the number of features over the number of observations, defined as model complexity can increase up to hundreds for financial instruments with a limited amount of history. Traditional setups in machine learning are not well-equipped for these data-scarce environments. Secondly, multicollinearity is very common in financial data and choosing suitable regularisation methods is a key part of model training. Thirdly, distribution shifts in data also hinder learning robust model parameters over time. It is well-known in finance that regime shifts can invalidate trading strategies. Therefore any robust machine learning models for financial forecasts require ways to deal with the non-stationarity of data. Moreover, correlation structure between features is often hard to estimate. For example, estimating the correlation structure of a basket of assets is non-trivial as the number of assets can easily exceed the length of price history. Dimensionality reduction methods are often required to simplify the problem. Researchers do not have a consensus on the best approach to handle the complexity of financial time series. The classical view on the bias-variance trade-off suggests using simple models to avoid over-fitting, especially for environments with a low signal-to-noise ratio. However, recent research [1] suggests using complex models through extensive feature engineering and model ensembling to take advantage of the "double descent" phenomenon of the curve of test loss with respect to the number of model parameters (as a measure of model complexity) in deep learning [2]. Financial time series can be treated directly using classic methods such as ARIMA models [3] and more recently through deep learning methods such as Temporal Fusion Transformers [4]. However, such deep learning methods are easily over-fitted and lead to expensive retraining for financial data, which are inherently affected by regime changes and high stochasticity. Alternatively, one can use various feature engineering methods to transform these time series into \(tabular\)\(form\) through a process sometimes called 'de-trending' in the financial industry, where the characteristics of a financial asset at a particular time point, including features from its history, are represented by a single dimensional data row (i.e., a vector). In this representation, the time dimension is not considered explicitly, as the state of the system is captured through transformed features at each time point and the continuity of the temporal dimension is not used. For example, we can summarise the time series of the return of a stock with the mean and standard deviation over different look-back periods. 
Grouping these data rows for different financial assets into a table at a given time point we obtain a _tabular dataset_. If the features are informative, this representation can be used for prediction tasks at each time point, and allow us to employ robust and widely tested ML algorithms that are applicable to tabular data. This paper is work done in parallel to another work performed by the same authors on the Numerai Classic tournament []. There are two major differences between the two papers, with different datasets and methodologies presented. Firstly, the two papers used different temporal tabular datasets. In this paper, we created our own dataset, with open-source documentation detailing how the tabular features are created from raw financial datasets. In the other paper, we used the dataset provided by Numerai for model training, the feature creation process is proprietary and it is impossible for outside researchers to recreate the dataset. Secondly, the two papers focused on different tasks in a machine-learning pipeline. In this paper, our focus is on dataset creation and feature engineering. In the other paper, the focus is on training robust machine learning models given standardised data and ways to post-process and combine predictions from well-known algorithms, namely feature neutralisation and model selection. The paper is organised as follows. Section 2 introduces the Numerai-Signals tournament. Section 3 describes and discusses different feature engineering methods that can be applied on multivariate time-series. Section 4 describes the pipeline used to create features from various raw financial databases. Section 5 describes the machine learning model training pipeline and performances of models trained from features created from various feature engineering methods. ## 2 Numerical-Signals tournament Numerai [5] is a hedge fund that uses crowd-sourced models to trade a market-neutral global equities portfolio. Numerai-Signals [6] is a competition organised by the Numerai which requires data scientists need to bring their own data to create trading signals for stocks traded in the global market. Numerai-Signals simplifies the complicated trading process into a stock ranking problem. Each week, the user predictions are evaluated with Spearman's correlation with the actual stock ranking. Unlike Kaggle data science competitions which are evaluated at a fixed period, submissions are evaluated with live data in an ongoing process. Prediction task:The tournament task is to predict the _stock rankings_ each week, ordered from lowest to highest expected return. The scoring is based on Spearman's rank correlation of the predicted rankings with the main target label ('target-20d'). Hence there is a single overall score each week regardless of the number of stocks to predict each week. Participants are not scored on the accuracy of the ranking of each stock individually. Numerai uses the predicted rankings to construct a market-neutral portfolio which is traded every week (As of Dec 2022), i.e., the hedge fund buys and short-sells the same dollar amount of stocks. Therefore the relative return of stocks is more relevant than the absolute return, hence the prediction task is a ranking problem instead of a forecast problem. Stock Return Targets:Numerai provides five normalised targets (As of Dec 2022), which represent forward stock returns normalised against different factors at different time horizons. 
The tournament is scored against 'target-20d', which represents the 20-trading-day return normalised against around 100 proprietary factors that include market, sector and Fama-French factors. Trading Universe: An important feature of the stock universe provided by Numerai is that it is constructed in a point-in-time fashion, without the look-ahead bias which is common in academic financial research. The stock universe is constructed free of survivorship bias since 2003 and is updated on Friday every week. It takes into account real trading constraints such as liquidity and borrowing costs. The trading universe and targets provided by Numerai are very robust as they have incorporated risk management and operational considerations which are usually not addressed in academic research. ## 3 Extracting features from multi-variate time series In this section, we introduce different mathematical transformations that are used to extract features from multi-variate time series such as price data. Multi-variate time-series: A multi-variate time-series \(X\) of \(T\) steps and \(N\) channels can be represented as \(X=(\mathbf{x}_{1},\mathbf{x}_{2},\dots,\mathbf{x}_{i},\dots,\mathbf{x}_{T})\), with \(1\leq i\leq T\) and each vector \(\mathbf{x}_{i}\in\mathbb{R}^{N}\) represents the values of the \(N\) channels at time \(i\). The number of channels of the time series is assumed to be fixed throughout time with regular and synchronous sampling, i.e. the values in each vector from multiple channels arrive at the same time at a fixed frequency. Feature extraction: Feature extraction methods are defined as functions that map the two-dimensional time-series \(X\in\mathbb{R}^{T\times N}\) to a one-dimensional feature space \(f(X)\in\mathbb{R}^{K}\) where \(K\) is the number of features. Feature extraction methods reduce the dimension and noise in time-series data. With feature extraction methods, traditional machine learning models such as gradient boosting decision trees can be used without relying on advanced neural network architectures such as Recurrent Neural Networks (RNN) or Long-Short-Term-Memory (LSTM) Networks. Computational resources can be reduced as simpler machine-learning models with fewer parameters can be used. Look-back windows: For financial time series which can potentially grow with infinite size, a look-back window is used to restrict the data size when calculating features from time series. To avoid look-ahead bias, features that represent the state of the time-series at time \(i\) can only be calculated using values obtained up to time \(i\), which is \((\mathbf{x}_{1},\mathbf{x}_{2},\dots,\mathbf{x}_{i})\). In financial analysis, data collected more recently often have more importance than data collected from a more distant past. Therefore, analysis is often restricted to using the most recent \(k\) data points only, which are \((\mathbf{x}_{i-k},\mathbf{x}_{i+1-k},\dots,\mathbf{x}_{i})\). This represents the state of the financial time series at time \(i\) with a look-back window of size \(k\). Feature extraction methods are applied on data within the look-back window only. In practice, multiple look-back windows are used to extract features corresponding to short-term and long-term price trends. At each time \(i\), features extracted with different look-back windows are concatenated to represent the state of the time series. ### Basic Statistics Different statistical moments can be used to summarise data.
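As an illustration of the look-back-window mechanism, the following minimal Python sketch (ours, not from the original pipeline; the window sizes follow the text, while the mean/standard-deviation feature map and all names are placeholders) computes per-channel features at a single time step using trailing data only:

```python
import numpy as np

def extract_features(x, i, windows=(21, 63, 252), feature_fn=None):
    """Features for the state of a T x N series at time i: apply feature_fn to
    the most recent k rows only (no look-ahead), once per look-back window,
    and concatenate the results."""
    if feature_fn is None:
        # Placeholder feature map: per-channel mean and standard deviation.
        feature_fn = lambda w: np.concatenate([w.mean(axis=0), w.std(axis=0)])
    feats = []
    for k in windows:
        window = x[max(0, i - k + 1): i + 1]  # values up to and including time i
        feats.append(feature_fn(window))
    return np.concatenate(feats)

x = np.random.default_rng(0).normal(size=(500, 3))  # T=500 steps, N=3 channels
state_features = extract_features(x, i=400)
print(state_features.shape)  # 3 windows x 2 statistics x 3 channels = (18,)
```

The same pattern applies to the statistical, Catch22 and signature features described next, with only the feature map changing.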
Mean, variance, skewness and kurtosis, which correspond to the first four statistical moments, are widely used in machine learning applications. Moments higher than the fourth order are rarely used due to the lack of interpretability. At each time \(i\) and a look-back window of size \(k\), the statistical moments can be calculated on each channel of data within the look-back window \((\mathbf{x}_{i-k},\mathbf{x}_{i+1-k},\dots,\mathbf{x}_{i})\). In this paper, moments up to the fourth order are calculated. For the multivariate time series with \(N\) channels, statistical moments are calculated independently for each channel and a total of \(4\times N\) features are obtained at each time \(i\). ### Catch22 Catch22 [7] is a general-purpose time-series feature extraction method. It is an optimised set of 22 features based on the 4791 features proposed in highly comparative time-series analysis (hctsa) [8]. Catch22 creates a set of diverse and interpretable features of time series in which all computations are deterministic. The computed features are scale-invariant, namely, they do not capture the location (mean) and spread (variance) properties of time series. Catch22 uses interdisciplinary methods to derive features of different themes, including data distribution, temporal statistics, linear and non-linear auto-correlations and entropy. Data distribution refers to statistical properties derived from the histogram of numerical values in the time series. Temporal statistics refers to basic statistical properties of temporal trends in the time series such as the longest period of consecutive values above the mean. Linear and non-linear auto-correlations refer to different ways to measure auto-correlations in the time series with ideas from the Fourier spectrum and Auto Mutual Information. Entropy refers to Shannon entropy and other complexity measures. The description of how each of the features in Catch22 is calculated can be found in Table 1 in [7]. For a multivariate time series with \(N\) channels, Catch22 is applied to each channel individually and generates a total of \(22\times N\) features at each time \(i\). ### Signature Transforms Signature transforms [9; 10; 11], based on rough path theory, can be used to extract features from multi-variate time series. Signature transforms are applied on continuous paths. A path \(X\) is defined as a continuous function from a finite interval \([a,b]\) to \(\mathbb{R}^{d}\) with \(d\) the dimension of the path. \(X\) can be parameterised in coordinate form as \(X_{t}=(X_{t}^{1},X_{t}^{2},\dots,X_{t}^{d})\) with each \(X_{t}^{i}\) being a one-dimensional path. For each index \(1\leq i\leq d\), the increment of the \(i\)-th coordinate of the path at time \(t\in[a,b]\), \(S(X)_{a,t}^{i}\), is defined as \[S(X)_{a,t}^{i}=\int_{a<s<t}\mathrm{d}X_{s}^{i}=X_{t}^{i}-X_{a}^{i}\] As \(S(X)_{a,\cdot}^{i}\) is also a real-valued path, the integrals can be calculated iteratively.
A \(k\)-fold iterated integral of \(X\) along the indices \(i_{1},\dots,i_{k}\) is defined as \[S(X)_{a,t}^{i_{1},\dots,i_{k}}=\int_{a<t_{k}<t}\cdots\int_{a<t_{1}<t_{2}} \mathrm{d}X_{t_{1}}^{i_{1}}\dots\mathrm{d}X_{t_{k}}^{i_{k}}\] The Signature of a path \(X:[a,b]\to\mathbb{R}^{d}\), denoted by \(S(X)_{a,b}\), is defined as the infinite series of all iterated integrals of \(X\), which can be represented as follows: \[S(X)_{a,b}=(1,S(X)^{1}_{a,b},\ldots,S(X)^{d}_{a,b},S(X)^{1,1}_{a,b},\ldots)=\bigoplus_{n=1}^{\infty}S(X)^{n}_{a,b}\] An alternative definition of the signature as the response of an exponential nonlinear system is given in [11]. The Log Signature can be computed by taking the logarithm of the formal power series of the Signature. No information is lost as it is possible to recover the (original) Signature from the Log Signature by taking the exponential [10; 11]. The Log Signature provides a more compact representation of the time series than the Signature. \[\log S(X)_{a,b}=\bigoplus_{n=1}^{\infty}\frac{(-1)^{(n-1)}}{n}S(X)^{\bigotimes n}_{a,b}\] Signatures can be computed efficiently using the Python package signatory [12]. The signature is a multiplicative functional for which Chen's identity holds. This allows quick computation of signatures on overlapping slices in a path. Signatures provide a unique representation of a path which is invariant under reparameterisation [10; 11]. Rough Path Theory suggests the signature of a path is a good candidate set of linear functionals which captures the aspects of the data necessary for forecasting. ## 4 Data Creation Pipeline Data Sources: Data from traditional sources, such as price and financials, are used in addition to alternative datasets such as sentiment to create the feature set. Price data from CRSP is used to create different features using the above methods (stats, Catch22, signature). Financial data are sourced from Open Source Asset Pricing [13]. Sentiment data from Ravenpack are used. Financial data are collected between 2003-01-31 and 2021-12-31. Universe Creation: Numerai-Signals [5] provides targets identified by Bloomberg tickers. Between 2003-01-31 and 2022-03-11, there are a total of 5842 US stock entries in the Numerai-Signals universe. A metadata table for our dataset is created by mapping the Bloomberg tickers with the key fields in various data sources. We map Bloomberg tickers to ticker names in CRSP, taking into account ticker name changes. We then use CUSIP and ISIN, which are two commonly used unique identifiers for US-traded stocks, to map the stock entries to Compustat and Ravenpack databases. We obtain 5518 stock entries that can be validly mapped to any of the databases. Targets: We use 'target-20d' provided by Numerai, which represents the 20-trading-day forward return normalised against around 100 proprietary factors that include market, sector and Fama-French factors [6]. The target is scaled between 0 and 1, where 0 represents the quantile of the lowest return and 1 represents the quantile of the highest return. Price Features: Daily price data are used to calculate different features. Three different feature extraction approaches are used, including statistical, signature transforms [12] and Catch22 [7]. Pre-processing is applied to daily price data. The **average price** of a stock on each day is computed as the simple average of the dividend- and split-adjusted open, high, low and close prices.
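As a concrete illustration of the signature and log-signature transforms of Section 3.3, a small sketch using the signatory package [12] follows; the interface (torch tensors of shape (batch, length, channels) and the signature/logsignature functions) is assumed here and the path is synthetic:

```python
import torch
import signatory  # package referenced in [12]; its interface is assumed here

# Illustrative 3-channel path (e.g. log average price plus two moving averages)
# over a 63-day look-back window: shape (batch, stream length, channels).
path = torch.randn(1, 63, 3).cumsum(dim=1)

depth = 4  # truncate the (log) signature at the fourth level
sig = signatory.signature(path, depth)        # iterated integrals up to level 4
logsig = signatory.logsignature(path, depth)  # compact representation

print(sig.shape)     # (1, 3 + 3**2 + 3**3 + 3**4) = (1, 120)
print(logsig.shape)  # (1, 32) terms for a 3-channel, depth-4 path
```

The 32 log-signature terms for a 3-channel path truncated at depth 4 match the count of log signatures per look-back window quoted below.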
For each feature extraction approach, look-back windows with different lengths need to be defined to capture price patterns at different time resolutions. Look-back windows of sizes 21, 63 and 252 are chosen to capture short-term, mid-term and long-term price patterns in each feature extraction approach. To calculate statistical features, the log-returns of the **average price** time series are first calculated, defined as \[\text{log return}(x_{t})=\log\frac{\text{average price}(x_{t})}{\text{average price}(x_{t-1})}\] On each trading day, the mean, standard deviation, skewness and kurtosis of log returns in different look-back windows are calculated. In total there are 12 statistical features. The log of the **average price** is used to calculate the Catch22 features. In total there are 66 Catch22 features. Before applying signature transforms, the log of the **average price** is calculated to adjust for the effect of compounding growth in asset prices. Moving averages of the log **average price** with 5- and 21-day windows are added to the **average price** time series to create a lagged price time series, which is a multi-variate time-series with 3 channels. For each look-back window, the signature transform is computed up to the fourth level on the lagged price time series, which gives 32 log signatures. In total there are 96 log-signature features. Sentiment Features: Ravenpack collects and processes stock news in an easy-to-use format. For each piece of news, an 'event relevance score' is given to indicate how relevant the news article is to the stock mentioned. On a scale between 0 and 100, we filter to include news that is the most relevant (an 'event relevance score' of 100). We also further filter news to ensure uniqueness by including only news with an 'event similar days' greater than 1, which means there is no similar news that occurred within a day. Each piece of news is given a sentiment score between -1 and 1, with negative scores indicating a negative outlook on the stock price and positive scores indicating a positive outlook on the stock price. For each stock, the **average sentiment** is calculated by the simple average of filtered news on each trading day. Moving averages of the **average sentiment** are also calculated with look-back windows of 21, 63 and 252 days. This gives 4 features based on the overall sentiment. Each news article is also classified into different categories which correspond to different corporate events. Based on the data between 2003-01-03 and 2013-23-27 (the training and validation period for the machine learning pipeline described below), the top 200 categories of news that are most commonly found in stocks during the above period are selected for further study. Grouping the news by the top 200 categories, we then calculate the **average sentiment** using news in each selected category only on each trading day. On trading days that have no events of that category, the sentiment score is set to zero on that day. The 252-day moving average of the sentiment scores by category is used instead of the actual time series due to the sparsity of events in each category. In total there are 204 sentiment features. Financial Features: The monthly financial features obtained from Open Source Asset Pricing [13] are re-sampled into weekly data. For example, financial features obtained at the end of Apr 2010 will be used in the 4 following weeks in May 2010.
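A minimal sketch of the daily price pre-processing described above (synthetic data; only the log-return definition and the 5- and 21-day moving-average channels are taken from the text, everything else is illustrative):

```python
import numpy as np
import pandas as pd

# Illustrative daily average-price series for one stock.
rng = np.random.default_rng(1)
avg_price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

# Log returns feed the statistical features.
log_price = np.log(avg_price)
log_return = log_price.diff()

# The lagged price series used for the signature transform: the log average
# price together with its 5- and 21-day moving averages, i.e. 3 channels.
lagged = pd.DataFrame({
    "log_price": log_price,
    "ma_5": log_price.rolling(5).mean(),
    "ma_21": log_price.rolling(21).mean(),
})
print(lagged.dropna().shape)  # (480, 3) after dropping the warm-up rows
```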
There are 204 financial features in their dataset, computed from different data sources such as company annual reports, analyst estimates and regulatory filings. Detailed implementations of the financial features can be obtained from their paper and website [13]. Normalisation: Using the features calculated above, the features of each stock are normalised within each week. Rank transform is used to bin the features into 5 equal-sized quantiles. The transformed features are in the format of integers between -2 and 2, with -2 representing the 20% of data with the lowest values in that feature and 2 representing the 20% of data with the highest values in that feature. ## 5 Machine Learning Models Data Split: We consider a walk-forward cross-validation approach to train different ML models with the latest data available. The training period uses an expanding window. The details are described in Table 1. We apply a 1-year gap between the training and validation period to reduce the effect of recency bias so that the performance of the validation period will better reflect future performance. The gap between the validation period and the test period is set to 26 weeks to allow for sufficient time to deploy trained machine learning models. CV 0 is used for hyper-parameter optimisation and the optimised hyper-parameters are then used for the rest of the cross-validations (CV 1 to CV 4). The aim of this approach is to demonstrate the robustness of hyper-parameters and reduce the computational costs of regularly updating hyper-parameters. Hyper-parameters are robust if they can be transferred from previous cross-validations to later cross-validations without a significant drop in model performances. Model Training: Using the different feature sets described above, we perform hyper-parameter optimisation for each machine learning pipeline on the training and validation dataset. We use optuna [14] to perform 100 iterations of the hyper-parameter search for the machine learning models based on the cross-validation data split of CV 0. The hyper-parameter space of the machine learning models is listed in Table 1. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & Train Start & Train End & Validation Start & Validation End & Test Start & Test End \\ \hline CV 0 & 2003-01-31 & 2010-12-31 & 2012-01-06 & 2015-12-25 & 2016-07-01 & 2021-12-31 \\ \hline CV 1 & 2003-01-31 & 2012-01-06 & 2013-01-11 & 2016-12-30 & 2017-06-30 & 2021-12-31 \\ \hline CV 2 & 2003-01-31 & 2013-01-04 & 2014-01-10 & 2017-12-29 & 2018-06-29 & 2021-12-31 \\ \hline CV 3 & 2003-01-31 & 2014-01-03 & 2015-01-09 & 2018-12-28 & 2019-06-28 & 2021-12-31 \\ \hline CV 4 & 2003-01-31 & 2015-01-02 & 2016-01-08 & 2019-12-27 & 2020-06-26 & 2021-12-31 \\ \hline \end{tabular} \end{table} Table 1: Cross validation schemes to retrain machine learning models on different parts of the data based on expanding windows. The predicted stock rankings in each week are scored against the rankings by the 20-trading-day return 'target-20d' using Spearman's correlation, which is defined as **Corr** in the tournament. A positive Spearman correlation represents a better alignment between predicted and actual rankings. For each machine learning pipeline, we select the parameters that give the highest Sharpe ratio of the correlation with 'target-20d' in the validation period. The Sharpe ratio is computed as the ratio of the average of **Corr** over the standard deviation of **Corr**.
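The weekly rank normalisation described earlier in this section can be sketched as follows (an illustrative example with synthetic data; the five bins coded from -2 to 2 follow the text, all other choices are placeholders):

```python
import numpy as np
import pandas as pd

# Illustrative cross-section: one row per stock for a given week.
rng = np.random.default_rng(2)
cross_section = pd.DataFrame({"stock": [f"s{i}" for i in range(500)],
                              "feat_raw": rng.normal(size=500)})

# Rank-transform the feature into 5 equal-sized bins, coded -2 ... 2.
cross_section["feat_binned"] = (
    pd.qcut(cross_section["feat_raw"].rank(method="first"), q=5, labels=False) - 2
)
print(cross_section["feat_binned"].value_counts().sort_index())  # 100 stocks per bin
```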
Using the best parameters obtained from CV 0, we train models using 20 different random seeds and report performances in the test period based on the average prediction for the other cross-validations. Results for CV 1 are listed in Table 2. Performances in test periods from the other cross-validations (CV 2 to CV 4) are listed in Tables 3, 4 and 5 in the supplementary information. The Sharpe ratio and Calmar ratio are computed. The Sharpe ratio is computed as the ratio of Mean **Corr** over Volatility **Corr**. The Max Drawdown is defined as the maximum loss from the local peaks of the cumulative sum of **Corr**. The Calmar ratio is computed as the ratio of Mean **Corr** over Max Drawdown. ### Performances of models in different cross-validations In feature sets based on price data only (signature, Catch22, statistical), models trained with the 'signature' feature set have the highest Sharpe ratio in all cross-validations (CV 1 to CV 4). Models trained with the 'Catch22' feature set perform the worst in all cross-validations (CV 1 to CV 4) by having a lower Sharpe ratio and higher Max Drawdown. For price data that are highly non-stationary, Catch22 failed to capture the dynamic nature of financial time series and was over-fitted to temporal patterns. The validation Sharpe ratio of Catch22 is 0.8026, which is the highest among all feature sets based on price data; the validation Sharpe ratios are 0.7215 and 0.4929 for 'signature' and 'statistical' respectively. This suggests Catch22 over-fits to specific patterns in the price data. Models trained with all three feature sets based on price data (signature+catch22+statistical) perform better than models trained with individual feature sets. Sharpe and Calmar ratios are higher in all cross-validations. Different feature extraction methods are complementary in nature, which makes them good candidates for feature ensembling. The result is also consistent with the findings in [1], which suggest that increasing model complexity improves the predictive power of models. Models trained with the 'financials' feature set performed slightly better than models trained with price data only (signature, Catch22, statistical). As 'target-20d' represents stock returns normalised against Fama-French factors [], such as Momentum and Value, commonly used price and financial features are already taken into account and therefore provide little value for predicting 'target-20d'. Contrary to the common belief that more data/features improve model performances, models trained with only 'sentiment' features perform significantly better than models trained with all 5 feature sets. Models trained with all feature sets can consider feature interactions, but these are not useful in improving prediction for 'target-20d'. This suggests that, to learn unique (orthogonal) trading signals, it is not necessary to include the known trading signals (such as Fama-French factors) in model training if the target is already normalised against those signals. ML models can be independently trained on each feature set and then combined. From the perspective of the organiser of the Numerai-Signals tournament, this demonstrates the feasibility of crowd-sourcing financial signals from the community, as each contributor does not need access to the data that others have in order to build a good signal. The burden on contributors to collect raw data and create signals can be greatly reduced as they are not strictly required to process traditional stock trading factors such as price and financials.
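For reference, the statistics reported in the tables (weekly Spearman **Corr**, Sharpe, Max Drawdown and Calmar) can be computed along the following lines; this is a sketch based on the definitions above, not the tournament's official scoring code, and the function names are illustrative:

```python
import numpy as np
from scipy.stats import spearmanr

def weekly_scores(preds_by_week, targets_by_week):
    """Spearman's rank correlation ('Corr') between predicted and actual
    rankings for each week."""
    return np.array([spearmanr(p, t).correlation
                     for p, t in zip(preds_by_week, targets_by_week)])

def summary_metrics(corr):
    """Sharpe, Max Drawdown and Calmar of a weekly Corr series, following
    the definitions given in the text."""
    cum = corr.cumsum()
    drawdown = np.max(np.maximum.accumulate(cum) - cum)  # loss from local peaks
    return {"Sharpe": corr.mean() / corr.std(),
            "Calmar": corr.mean() / drawdown,
            "Mean": corr.mean(),
            "Volatility": corr.std(),
            "MaxDrawdown": drawdown}
```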
\begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline Feature Sets & Sharpe & Calmar & Mean & Volatility & Max Drawdown \\ \hline All & 0.5158 & 0.0736 & 0.0147 & 0.0286 & 0.1996 \\ \hline Signature+Catch22+Stats & 0.2595 & 0.0158 & 0.0085 & 0.0327 & 0.5379 \\ \hline Signature & 0.1997 & 0.0110 & 0.0061 & 0.0307 & 0.5546 \\ \hline Catch22 & 0.1632 & 0.0077 & 0.0048 & 0.0294 & 0.6262 \\ \hline Statistics & 0.1600 & 0.0132 & 0.0050 & 0.0310 & 0.3789 \\ \hline Financials & 0.3101 & 0.0293 & 0.0082 & 0.0264 & 0.2794 \\ \hline Sentiment & 0.5324 & 0.0816 & 0.0168 & 0.0315 & 0.2060 \\ \hline \end{tabular} \end{table} Table 2: Strategy Performance of LightGBM models trained on different feature sets for the Numerai-Signals tournament in the test period for CV 1. In addition to models trained with individual feature sets (stats, signature, Catch22, financials, sentiment), we also report the model trained with all 5 feature sets combined (all). ## 6 Acknowledgments This work was supported in part by the Wellcome Trust under Grant 108908/B/15/Z and by the EPSRC under grant EP/N014529/1 funding the EPSRC Centre for Mathematics of Precision Healthcare at Imperial.
2302.10897
Learning to Simulate Daily Activities via Modeling Dynamic Human Needs
Daily activity data that records individuals' various types of activities in daily life are widely used in many applications such as activity scheduling, activity recommendation, and policymaking. Though with high value, its accessibility is limited due to high collection costs and potential privacy issues. Therefore, simulating human activities to produce massive high-quality data is of great importance to benefit practical applications. However, existing solutions, including rule-based methods with simplified assumptions of human behavior and data-driven methods directly fitting real-world data, both cannot fully qualify for matching reality. In this paper, motivated by the classic psychological theory, Maslow's need theory describing human motivation, we propose a knowledge-driven simulation framework based on generative adversarial imitation learning. To enhance the fidelity and utility of the generated activity data, our core idea is to model the evolution of human needs as the underlying mechanism that drives activity generation in the simulation model. Specifically, this is achieved by a hierarchical model structure that disentangles different need levels, and the use of neural stochastic differential equations that successfully captures piecewise-continuous characteristics of need dynamics. Extensive experiments demonstrate that our framework outperforms the state-of-the-art baselines in terms of data fidelity and utility. Besides, we present the insightful interpretability of the need modeling. The code is available at https://github.com/tsinghua-fib-lab/SAND.
Yuan Yuan, Huandong Wang, Jingtao Ding, Depeng Jin, Yong Li
2023-02-09T12:30:55Z
http://arxiv.org/abs/2302.10897v1
# Learning to Simulate Daily Activities via Modeling Dynamic Human Needs ###### Abstract. Daily activity data that records individuals' various types of activities in daily life are widely used in many applications such as activity scheduling, activity recommendation, and policymaking. Though with high value, its accessibility is limited due to high collection costs and potential privacy issues. Therefore, simulating human activities to produce massive high-quality data is of great importance to benefit practical applications. However, existing solutions, including _rule-based methods_ with simplified assumptions of human behavior and _data-driven methods_ directly fitting real-world data, both cannot fully qualify for matching reality. In this paper, motivated by the classic psychological theory, Maslow's need theory describing human motivation, we propose a knowledge-driven simulation framework based on generative adversarial imitation learning. To enhance the fidelity and utility of the generated activity data, our core idea is to model the evolution of human needs as the underlying mechanism that drives activity generation in the simulation model. Specifically, this is achieved by a hierarchical model structure that disentangles different need levels, and the use of neural stochastic differential equations that successfully captures piecewise-continuous characteristics of need dynamics. Extensive experiments demonstrate that our framework outperforms the state-of-the-art baselines in terms of data fidelity and utility. Besides, we present the insightful interpretability of the need modeling. The code is available at [https://github.com/tsinghua-fib-lab/SAND](https://github.com/tsinghua-fib-lab/SAND). Daily activities, Simulation, Human needs, GAIL ## 1. Introduction In reality, there exist complex relations between activities with time dependence and high-order correlations, which are difficult to describe with prior simple rules (Kumar et al., 2017). Therefore, only relying on simplified assumptions makes _rule-based methods_ less qualified for modeling real-world activity behaviors. Instead, _data-driven methods_ tackle this problem by directly fitting real-world data. A series of sequential generative methods have been developed, from classical probability models, such as Markov models (Srivastava et al., 2017), to deep learning models, such as Recurrent Neural Networks (RNNs) (Kirshick et al., 2017) and Generative Adversarial Imitation Learning (GAIL) (Kirshick et al., 2018). Nevertheless, the above models cannot fully capture the temporal dynamics underlying human daily activities due to the unrealistic inductive bias of being time-invariant (Srivastava et al., 2017) or discrete updates only at observed time points (Srivastava et al., 2017). Comparatively, daily activities are always irregularly sampled and longer time intervals introduce larger uncertainty between observations, which requires a deeper understanding and fine-grained characterization. More importantly, there exist complex and various patterns in terms of temporal dynamics of different activities, which are hard to discriminate from each other when mixed together. For example, as Figure 2 illustrates, time intervals of going to the "Concert" exhibit totally distinct patterns compared with going to the "Workplace" that is highly similar to "All". Although individuals lead generally regular daily routines, some activities still occur occasionally but cannot be ignored.
However, with the overall distribution exhibiting long-tailed characteristics, the coarse-grained learning paradigm of state-of-the-art _data-driven methods_ can be easily biased by the uneven distribution and fail to adequately capture unique patterns of each activity. Therefore, to generate faithful data that matches reality, it is better not to solely rely on the observed data that may possibly reveal an overall but misleading activity pattern. To address the above issues and achieve a realistic simulation, we propose a novel framework informed by psychological theories and integrate activity-related knowledge into the state-of-the-art GAIL method. Our key idea is to highlight the intrinsic drives of activity decisions, namely, **human needs**, which are well supported by Maslow's need theories. Accordingly, human needs can be categorized into three levels: _physiological needs_, _safety needs_, and _social needs_. Guided by this knowledge, we explicitly model human needs in a data-driven manner. We disentangle the needs behind daily activities to fully capture the aforementioned complex patterns in empirical data. Specifically, we simultaneously model each need dynamics with an alternating process between _spontaneous flow_ and _instantaneous jump_. For example, the accumulation of needs in evolution (flow) triggers the occurrence of related activities while the decaying needs after satisfaction (jump) can restrain tendencies towards specific activities. In terms of the specific model design, the proposed GAIL-based framework consists of a discriminator that provides reward signals and a generator that learns to generate high-quality activities with a policy network. Particularly, we utilize Maslow's Theory in our framework to enhance the activity simulation with need modeling from the following two perspectives. First, to overcome the challenge of complex activity patterns, we design a hierarchical structure in the modeling to disentangle different need levels and explicitly incorporate the underlying influence of human needs on activity decisions. Second, to address the limitations of RNN-based methods in modeling continuous-time dynamics, we leverage Neural Stochastic Differential Equations (Kirshick et al., 2017) to capture piecewise-continuous characteristics of need dynamics alternating between _spontaneous flow_ and _instantaneous jump_. The above need dynamics further serve as the states that define the policy function, which calculates activity intensities based on the current need state and decides the next action accordingly. In conclusion, our contributions can be summarized as follows: * We are the first to explicitly model the intrinsic drives of activities, _i.e._, human needs, which brings the synergy of psychological theories and data-driven learning. * We propose a novel knowledge-driven activity simulation framework based on GAIL, leveraging Maslow's theory to enhance the simulation reality by capturing need dynamics. * Extensive experiments on two real-world datasets show the effectiveness of the framework in generating synthetic data regarding fidelity, utility, and interpretability. ## 2. Preliminaries **Problem Statement.** Daily activity data can be defined as a temporal sequence of events \(S=[a_{1},a_{2},...,a_{n}]\), where \(a_{i}\) is a tuple \((t_{i},k_{i})\), \(t_{i}\) denotes the timestamp and \(k_{i}\) is the activity type, _e.g._, eating at restaurants, working at companies, playing at sports centers. 
The problem of activity simulation can be defined as follows: Definition 1 (Human Activity Simulation).: _Given a real-world activity dataset, generate a realistic activity sequence \(\hat{S}=[\hat{a}_{1},\hat{a}_{2},...,\hat{a}_{n}]\) with a parameterized generative model._ **Temporal Point Process.** A temporal point process (TPP) can be realized by an event sequence \(\mathcal{H}_{T}=\{(t_{1},k_{1}),...,(t_{n},k_{n})|t_{n}<T\}\). Here \(t_{i}\) represents the arrival time of the event and \(k_{i}\) is the event mark. Let \(\mathcal{H}_{t}\) denote the history of past events up to time \(t\); the conditional intensity function \(\lambda_{k}^{*}(t)\) (for the \(k_{th}\) event category) is defined as: \(\lambda_{k}^{*}(t)=\lim_{\Delta t\to 0^{+}}\frac{\mathbb{P}(\text{event of type }k\text{ in }[t,t+\Delta t]\,|\,\mathcal{H}_{t})}{\Delta t}\). Note that \(\lambda^{*}(t)=\sum_{k}\lambda_{k}^{*}(t)\) denotes the total conditional intensity, deciding the arrival time without considering event types. Then the event type is sampled with probability proportional to \(\lambda_{k}^{*}(t)\). **Neural Ordinary Differential Equations.** NODE (Han et al., 2017) describes the evolution of the system state over continuous time \(t\in\mathbb{R}^{+}\) by modeling first-order ordinary differential equations with neural networks. Specifically, the derivative of the latent state is modeled as: \(d\mathbf{h}(t)=f(\mathbf{h}(t),t;\theta)\cdot dt\), where \(\mathbf{h}(t)\) is the latent state and \(f\), parameterized by a neural network, describes the derivative at time \(t\). The system output at time \(t_{1}\) can be solved with an initial value at time \(t_{0}\) by an ODE solver: \(\mathbf{h}(t_{1})=\mathbf{h}(t_{0})+\int_{t_{0}}^{t_{1}}f(\mathbf{h}(t),t;\theta)\cdot dt\). In this work, we make the first attempt to characterize human needs with neural differential equations. Figure 2. Interval distributions of different activities. Different activities inherently have distinct temporal dynamics. ## 3. Method We first introduce how we model human needs to motivate the framework design in Section 3.1, then explain the MDP modeling of the decision process in Section 3.2, and finally elaborate on the framework details in Section 3.3. ### Human Needs Modeling **Hierarchy of Needs.** According to a classic theory in psychology, i.e., Maslow's Theory (Maslow, 1998), people are motivated to achieve a hierarchy of needs, including _physiological needs_, _safety needs_, _social needs_, _esteem needs_, and _self-actualization needs_, in a priority order, where higher levels of need are modeled as long-term changes such as life stages. With the development of Maslow's Theory, the follow-up theories (Maslow, 1998; Datta et al., 2000; Datta et al., 2001) have introduced flexibility in the hierarchy. For example, different needs can be pursued simultaneously, and there exist transition probabilities between any pair of needs. We do not take the top two need levels for _esteem_ and _self-actualization_ into consideration because they are too abstract and their effects can only be observed over a long term. Here we classify individuals' activities into three need levels, including _physiological needs_ (level-1), _safety needs_ (level-2), and _social needs_ (level-3), which are sufficient to depict patterns of daily life (Maslow, 1998; Datta et al., 2001).
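The following minimal sketch illustrates the TPP sampling rule and the Euler view of a NODE state from the preliminaries above; the exponential waiting time assumes a locally constant intensity, which is a simplification rather than the procedure used in this paper, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_event(lambda_k, rng):
    """Given per-type conditional intensities evaluated at the current state,
    draw the waiting time from the total intensity and the event type with
    probability proportional to lambda_k."""
    total = lambda_k.sum()
    tau = rng.exponential(1.0 / total)                  # waiting time
    k = rng.choice(len(lambda_k), p=lambda_k / total)   # event type
    return tau, k

def euler_step(h, f, t, dt):
    """One explicit Euler step of the latent NODE state: h <- h + f(h, t) dt."""
    return h + f(h, t) * dt

tau, k = sample_next_event(np.array([0.2, 0.5, 0.1]), rng)
h = euler_step(np.zeros(4), lambda h, t: -0.1 * h, t=0.0, dt=0.5)
```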
These three need levels are often triggered or satisfied in a short period, which are consistent with daily activities that happen within a short term (a few hours). We provide descriptions of each need level as follows: * _Physiological needs_ refer to biological requirements for survival, _e.g._, food, drink, and shelter. The human body cannot function optimally without satisfying these needs. * _Safety needs_ refer to requirements for security and safety, _e.g._, education and employment. Besides physiological needs, people expect their lives to be orderly, regular, and controllable. * _Social needs_ refer to requirements for spirits, _e.g._, entertainment and social relationships. After meeting physiological and safety needs, people are also striving for spiritual satisfaction. In our modeling, we follow Maslow's Theory in a more flexible way, rather than the original needs pursued in a rigid order. The fulfillment order can be flexible according to individual preferences and external circumstances. Based on well-respected need theories, each activity is explicitly labeled with one of the need levels3. The association between human needs and activities based on expert knowledge bridges the gap between classic psychological theories and human behavior modeling, which provides opportunities to model human needs computationally in a data-driven manner. Footnote 3: We refer the readers to Section 4.1.2 for more details of the need annotation. **Evolution of Needs.** In real-world scenarios, human needs are not static but generally evolve with time dynamically, which not only derive from spontaneous changes, but also can be interrupted by happened activities. To better learn sequential activity patterns, it is essential to capture the underlying mechanism of need dynamics. However, it is non-trivial because human needs cannot be observed explicitly and are affected by various factors, such as activity relations and periodicity. Besides, different from activities that happen one by one, need dynamics are more complicated with synchronicity and competitiveness among different levels. To effectively capture the underlying need dynamics, we innovatively capture piecewise-continuous dynamics in human needs including _spontaneous flow_ and _instantaneous jump_ as follows: * _Spontaneous flow_ denotes the continuous-time flow of need states. For example, needs for some activities can accumulate without taking them for a long time. Meanwhile, needs can also decay gradually as time goes by. * _Instantaneous jump_ models the influence of activities on the need states. For instance, the happened activities can immediately change the evolution trajectory of the corresponding need state. Naturally, the two kinds of dynamics describe an active process of need evolution and need satisfaction. Particularly, the three levels are disentangled in dynamic modeling, so they follow distinct evolution laws. Figure 3 illustrates the two evolution mechanisms of different need levels. Nevertheless, it is challenging to learn such dynamics since needs are intrinsically unobserved and stochastic with the coexistence of continuity and jump. 
To tackle this problem, we represent human needs with a stochastic embedding process \(\mathbf{z}(t)\) defined as follows: Definition 2 (Need Embedding Process).: _The need embedding processes are \(\{\mathbf{z}_{i}(t),i\in\{1,2,3\},t\geq 0\}\), where \(\mathbf{z}_{i}(t)\) is the representation of the \(i_{th}\) need level at time \(t\)._ In the above definition, we depict human needs with an embedding process \(\mathbf{z}(t)\) instead of a direct scalar value for stronger representation capabilities. Particularly, \(\mathbf{z}(t)\) is composed of three components \(\mathbf{z}_{1}(t)\), \(\mathbf{z}_{2}(t)\), \(\mathbf{z}_{3}(t)\) that correspond to different need levels. Figure 3. Illustration of the need evolution. Representations of three-level need states evolve continuously over time until interrupted by a corresponding activity (_e.g._, \(a(t_{1})\) corresponds to level-1). Note that the need state is modeled by an embedding rather than a scalar, thus the jump up and down do not indicate an increase or decrease. Then the need embedding process \(\mathbf{z}(t)\) with both _spontaneous flow_ and _instantaneous jump_ can be formulated as follows:
Overall, it provides the synergy of need theories and imitation learning in simulating the activity decision-making process. As shown in Figure 4, it learns the policy and reward functions adversarially, where the need embedding process \(\mathbf{z}(t)\) plays an essential role in the loop. We elaborate on the details of key components in the following sections. #### 3.3.1. **Learning Need Dynamics** To model the need dynamics including the _spontaneous flow_ and _instantaneous jump_, we utilize neural stochastic differential equations (Kang et al., 2016) to describe such continuity and discontinuity, where the need embedding process \(\{\mathbf{z}_{i}(t),i\in\{1,2,3\},t\geq 0\}\) acts as the latent state. Between activity observations, each \(\mathbf{z}_{i}(t)\) flows continuously over time. Once an activity happens, the corresponding need embedding process is interrupted by a state jump. Different from directly modeling the changes of the hidden state like RNNs (Wang et al., 2017), neural differential equations model the derivative of \(\mathbf{z}(t)\) to better capture the continuous-time characteristics. Specifically, the derivative of the \(i_{th}\) need state is formulated as follows: \[d\mathbf{z}_{i}(t)=f_{i}(\mathbf{z}_{i}(t),t;\theta_{i})\cdot dt+\omega_{i}( \mathbf{z}_{i}(t),\mathbf{k}_{i}(t),t;\mathbf{y}_{i})\cdot dN_{i}(t)\;, \tag{2}\] where \(f_{i}\) and \(\omega_{i}\) are both parameterized by neural networks and control the _spontaneous flow_ and _instantaneous jump_ of the \(i_{th}\) need embedding process, respectively, and \(N_{i}(t)\) records the number of activities of the \(i_{th}\) level up to time \(t\). \(f\) and \(\omega\) in Eq. (2) are implementations of the function \(\mathcal{F}\) and \(\mathcal{G}\) defined in Eq. (1). In particular, each state \(\mathbf{z}_{i}(t)\in\mathbb{R}^{n}\) is composed of two vectors: (1) \(\mathbf{c}_{i}(t)\in\mathbb{R}^{n_{1}}\) encodes the internal need state, and (2) \(\mathbf{h}_{i}(t)\in\mathbb{R}^{n_{2}}\) encodes effects of the historical activities. _Spontaneous flow_. The top part in Figure 5 shows the network design to model spontaneous flow. The neural function \(f_{i}\) in Eq. (2) controls the spontaneous flow of the state \(\mathbf{z}_{i}(t)\). Although \(\mathbf{z}_{i}(t)\) contains two vectors \(\mathbf{c}_{i}(t)\) and \(\mathbf{h}_{i}(t)\), they follow distinct continuous dynamics due to different encoded information. Specifically, there is no constraint on the internal evolution of \(\mathbf{c}_{i}(t)\), hence we model \(\frac{d\mathbf{c}_{i}(t)}{dt}\) by an MLP. Differently, due to the temporal decaying effect of historical activities, we add constraints to the form of \(\mathbf{h}_{i}(t)\) to model such an effect. Concretely, we use another MLP followed by a Softplus activation layer to model the decay rate. The modeling of derivatives can be formulated as follows: \[\frac{d\mathbf{c}_{i}(t)}{dt}=\text{MLP}(\mathbf{c}_{i}(t)\oplus \mathbf{h}_{i}(t))\;, \tag{4}\] \[\alpha_{i}=\sigma(\text{MLP}(\mathbf{c}_{i}(t))\;,\] (5) \[\frac{d\mathbf{h}_{i}(t)}{dt}=-\alpha_{i}\mathbf{h}_{i}(t)\;, \tag{3}\] where \(\sigma\) is the Softplus activation function to guarantee a positive decay rate, and \(\oplus\) denotes the vector concatenation. _Instantaneous jump_. The bottom part in Figure 5 illustrates the network design to model the instantaneous jump introduced by happened activities. Specifically, the function \(\omega_{i}\) in Eq. (2) outputs the effects of the instantaneous jump, and it is modeled by an MLP in practice. 
As discussed before, the vector \(\mathbf{h}_{i}(t)\) encodes the activity memory, and thus it is reasonable that the instantaneous jump will only affect the vector \(\mathbf{h}_{i}(t)\). As a result, an activity of the \(i_{th}\) need level gives rise to a change \(\Delta\mathbf{h}_{i}(t)\) only to the corresponding activity memory embedding \(\mathbf{h}_{i}(t)\), _i.e._, \(\Delta\mathbf{h}_{j}(t)=0,\forall j\neq i\) and \(\Delta\mathbf{c}_{i}(t)=0,\forall i\). The MLP takes in the concatenation of the activity embedding \(\mathbf{k}(t)\) and the internal state \(\mathbf{c}_{i}(t)\), and outputs the variation \(\Delta\mathbf{h}_{i}(t)\) in the memory embedding \(\mathbf{h}_{i}(t)\), which is formulated as follows: \[\Delta\mathbf{h}_{i}(t)=\text{MLP}(\mathbf{k}(t)\oplus\mathbf{c}_{i}(t))\;, \tag{6}\] \[\lim_{\epsilon\to 0^{+}}\mathbf{h}_{i}(t+\epsilon)=\mathbf{h}_{i}(t)+\Delta\mathbf{h}_{i}(t)\;, \tag{7}\] where \(\mathbf{k}(t)\) denotes the activity associated with the \(i_{\text{th}}\) need level. Figure 4. Illustration of the SAND framework. The policy and discriminator networks are optimized adversarially, and the state transition consists of two evolution mechanisms. #### 3.3.2. Policy Function Based on the activity intensity function \(\lambda_{k}(t)\), the probability that an activity of type \(k\) happens within the time interval \([t,t+dt)\) is: \(P\{\text{activity }k\text{ happens in }[t,t+dt)\}=\lambda_{k}(t)\cdot dt\). The policy function is a mapping from state to action that generates the arrival time and type of the next activity conditioned on the current state. With the modeling of activity intensities, the goal of the policy function is to generate intensities based on the need states \(\mathbf{z}(t)\). Figure 6 shows the network design of the policy function. Although the three need levels control specific activities, they are not independent and can be pursued simultaneously, which may give rise to competing activity choices. Therefore, the states of the three levels all affect the generation of the next activity. In other words, the activity intensity \(\lambda_{k}^{*}(t)\) is conditioned on embedding processes of all need levels. To model the interactions between different levels in determining the next activity, we concatenate the three embedding processes \(\mathbf{z}_{i}(t),i\in\{1,2,3\}\) and leverage an MLP to obtain conditional activity intensities. Here we perform the sampling to obtain the time interval and the activity type based on the total conditional intensity and type distribution as: \[\lambda^{*}(t)=\sum_{k=1}^{M}\lambda_{k}(t),\ \ p(k|t)=\frac{\lambda_{k}(t)}{\sum_{k=1}^{M}\lambda_{k}(t)} \tag{8}\] where \(M\) is the number of activity types. #### 3.3.3. **Reward Function** GAIL uses a reward function to evaluate the actions by comparing the generated state-action pairs with the real pairs, which is modeled by a discriminator network \(D_{\phi}\). To compare the real and policy-generated pairs more effectively, we also utilize the historical sequence information; thus, the state in the discriminator is defined as \(s_{d}=(\mathbf{z}(t),\mathbf{S})\). For the sequence \(\mathbf{S}=[x_{1},x_{2},...,x_{n}]\), \(x_{i}\) contains the information of the time interval \(\tau_{i}\), hour \(h_{i}\), weekday \(w_{i}\), activity type \(k_{i}\), and need level \(n_{i}\). In addition, the action \(a\) is set as the time interval \(\tau\) from the last activity to the current one, _i.e._, \(a=(\tau,k)\).
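Before turning to the discriminator output, a minimal PyTorch sketch of the flow, decay and jump updates in Eqs. (3)-(7) for a single need level may be helpful; dimensions, module names and the plain linear layers are illustrative placeholders, not the authors' implementation:

```python
import torch
import torch.nn as nn

class NeedDynamics(nn.Module):
    """Minimal sketch of one need level: internal state c, activity memory h."""
    def __init__(self, dim_c=16, dim_h=16, dim_k=8):
        super().__init__()
        self.flow_c = nn.Linear(dim_c + dim_h, dim_c)                   # Eq. (3)
        self.decay = nn.Sequential(nn.Linear(dim_c, 1), nn.Softplus())  # Eq. (4)
        self.jump = nn.Linear(dim_k + dim_c, dim_h)                     # Eq. (6)

    def flow(self, c, h, dt):
        """Spontaneous flow: explicit Euler step over an interval with no activity."""
        dc = self.flow_c(torch.cat([c, h], dim=-1))  # dc/dt
        alpha = self.decay(c)                        # positive decay rate
        dh = -alpha * h                              # Eq. (5)
        return c + dc * dt, h + dh * dt

    def jump_update(self, c, h, k_emb):
        """Instantaneous jump when an activity of this need level happens (Eq. 7)."""
        return h + self.jump(torch.cat([k_emb, c], dim=-1))

dyn = NeedDynamics()
c, h = torch.zeros(1, 16), torch.zeros(1, 16)
c, h = dyn.flow(c, h, dt=0.1)                  # evolve between observations
h = dyn.jump_update(c, h, torch.randn(1, 8))   # an activity arrives
```

In the full model, three such components (one per need level) evolve in parallel, and their concatenated states feed the policy network that produces the activity intensities of Eq. (8).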
Based on the above notations, the output of the discriminator can be defined as \(D_{\phi}(s_{d},a)\). Through an embedding layer, we first transform \(s_{d}\) and \(a\) into embeddings. Then we leverage an attention mechanism to aggregate the sequential features. The concatenation of the sequential embedding, state \(\mathbf{z}(t)\), and action embedding is fed into an MLP with a sigmoid activation function. Thus, the reward function can be expressed as: \(R(s,a)=\log D_{\phi}(s_{d},a)\). The training process including (i) GAIL training and (ii) pre-training is introduced in the Appendix. The simulation algorithm is presented in the Appendix. Figure 5. Network architecture to learn need dynamics with both spontaneous flow and instantaneous jump. (a) shows the spontaneous flow of \(\mathbf{c}_{i}(t)\) and \(\mathbf{h}_{i}(t)\) based on the derivatives \(\frac{d\mathbf{c}_{i}(t)}{dt}\) and \(\frac{d\mathbf{h}_{i}(t)}{dt}\). (b) illustrates the instantaneous jump caused by the happened activity \(k\). Figure 6. Network architecture of the policy function. ## 4. Experiments In this section, we conduct extensive experiments to investigate the following research problems: * **RQ1**: How does SAND perform in retaining the data fidelity and reflecting activity characteristics compared with baseline solutions? * **RQ2**: How do different components of SAND contribute to the final performance? * **RQ3**: Can SAND generate high-quality synthetic data that benefit practical applications? * **RQ4**: Can SAND provide insightful interpretations on modeling daily activities? ### Experimental Settings #### 4.1.1. **Datasets** We conduct extensive experiments on two real-world datasets. (1) The Foursquare-NYC dataset (Foursquare-NYC, 2017) contains check-in activities to various POIs collected from 2000 users with 14 activity labels during the duration from 2012-05-01 to 2012-06-01. (2) The Mobile dataset contains 10000 users with 15 activity labels during the duration from 2016-09-17 to 2016-10-17, which is collected in Beijing by a major mobile operator in China. We take careful steps to consider ethical issues in using data5. Footnote 5: First, the Terms of Service for both datasets include consent for research studies. Second, the research protocol has been reviewed and approved by our local institutional board. All research data is sanitized for privacy preservation, with limited access to authorized researchers bound by non-disclosure agreements. #### 4.1.2. Need Annotation According to the definition and description of each need level in Section 3.1, we ask three annotators to label each activity with one of the need levels. To ensure that correct expert knowledge is utilized, the three annotators all have expertise in related knowledge, including a senior Ph.D. candidate and two postdocs with a background in psychology and behavioral sciences. We choose the number of experts (three) following studies in NLP with annotation tasks (Zhou et al., 2017). If the three experts disagree on the label, we will invite another expert and start a discussion. Through this process, all activities obtain consistent labels. The annotation approach has satisfied the requirement of our problem settings due to the small scale of activity types. #### 4.1.3. **Baselines**
To evaluate the performance of the SAND framework, we compare it against state-of-the-art baseline methods: Semi-Markov (Zhou et al., 2017), a classical probability model; Hawkes Process (Zhou et al., 2017), a representative point process model; Neural Hawkes Process (Zhou et al., 2017), the neural extension to the Hawkes process; Transformer Hawkes Process (THP) (Zhou et al., 2017), another neural extension to the Hawkes process, which utilizes the self-attention mechanism to capture long-term dependencies; Neural JSDE (Chen et al., 2017), the state-of-the-art method to learn continuous and discrete dynamic behavior; LSTM (Liu et al., 2017), a widely used model in sequence prediction; SeqGAN (Wang et al., 2018), the state-of-the-art model for discrete sequence generation; TrajGAIL (Tran et al., 2018), a model-free imitation learning algorithm in trajectory generation. #### 4.1.4. Metrics We measure whether synthetic data accurately reflects crucial characteristics of the original, real-world data. Following the mainstream practice in previous works (Chen et al., 2017; Wang et al., 2018), we use essential metrics to describe activity patterns for comparing the statistical similarity between the generated data and real-world data, including (1) _ActInt_: time intervals between activities, including type-free intervals (MacroInt) and type-aware intervals (MicroInt); (2) _DailyAct_: daily happened activities. It is the number of activities in one day for each individual; (3) _ActType_: the overall distribution over different activity types; (4) _Weekday_: the overall time distribution over the seven days; (5) _Hour_: the overall time distribution over the twenty-four hours. To get the quantitative evaluations on the fidelity of generated data, we use Jensen-Shannon divergence (\(JSD\)) to measure the distribution similarity of the above patterns between the generated data and real-world data, which is a widely used distance metric for comparing two distributions, defined as follows: \[\text{JSD}(P||Q)=H(M)-\frac{1}{2}(H(P)+H(Q)) \tag{9}\] where \(H\) is the Shannon entropy, \(P\) and \(Q\) are distributions, and \(M=\frac{P+Q}{2}\). In our setup, lower \(\text{JSD}\) denotes a closer distribution between synthetic data and real data, which indicates a better generative model. In addition, the \(\text{JSD}\) is bounded by \([0,1]\) for two probability distributions with the base \(2\) logarithm (Zhou et al., 2017). ### Overall Performance (RQ1) Table 1 reports the performance in retaining the data fidelity of our framework and the eight competitive baselines on two real-world datasets. From the results, we have the following findings: * **Our framework steadily achieves the best performance.** SAND achieves the best performance on the mobile operator dataset by ranking first on five metrics and second on one metric. For the five metrics that rank 1st, SAND reduces the JSD by more than 20%. It also shows superior performance on most of the metrics on the Foursquare dataset, where it ranks first on five metrics by reducing the JSD by more than 40%. Meanwhile, it achieves comparable performance with the best baseline on the other metric. * **Time-invariant model performs poorly in simulating human activities.** Semi-Markov performs the worst in most cases, which indicates that the time-invariant assumption fails to describe behavior transition laws due to the existence of complex temporal patterns in daily activities.
* **Learning from raw data alone is insufficient for a realistic simulation.** The LSTM model has a poor performance on the metrics of _DailyAct_ and _ActType_, which means errors can be accumulated in the step-by-step generation process. By contrast, SeqGAN and GAIL improve the performance by using reinforcement learning and adversarial learning. For the Foursquare dataset that is more sparse, their superiority is lost, which further suggests the instability of purely data-driven methods. * **It is essential to model dynamic human needs.** The neural Hawkes, THP, and neural JSDE almost achieve the sub-optimal results on the two datasets, indicating the rationality of characterizing events in continuous time by temporal point processes. \begin{table} \begin{tabular}{|c|c c c c c c|c c c c c c|} \hline **Dataset** & \multicolumn{6}{c|}{**Mobile Operator**} & \multicolumn{6}{c|}{**Foursquare**} \\ \hline Metrics (JSD) & MacroInt & MicroInt & DailyAct & ActType & Weekday & Hour & MacroInt & MicroInt & DailyAct & ActType & Weekday & Hour \\ \hline Semi-Markov & 0.291 & 0.158 & 0.439 & 0.471 & 0.0042 & 0.051 & 0.334 & 0.055 & 0.485 & 0.101 & 0.0032 & 0.051 \\ Hawkes & 0.276 & 0.151 & 0.542 & 0.123 & 0.0039 & 0.051 & 0.073 & 0.024 & 0.530 & 0.026 & 0.0024 & 0.047 \\ \hline Neural Hawkes & 0.026 & 0.143 & 0.125 & **0.0063** & 0.0036 & 0.052 & 0.072 & 0.041 & 0.119 & 0.012 & 0.0040 & 0.047 \\ Neural JSDE & 0.014 & 0.106 & 0.138 & 0.048 & 0.0033 & 0.051 & 0.041 & 0.033 & **0.056** & 0.0072 & 0.0022 & 0.046 \\ THP & 0.167 & 0.111 & 0.058 & 0.098 & 0.005 & 0.040 & 0.331 & 0.035 & 0.095 & 0.003 & 0.013 & 0.047 \\ \hline LSTM & 0.110 & 0.136 & 0.513 & 0.342 & 0.0041 & 0.050 & 0.249 & 0.217 & 0.628 & 0.073 & 0.0033 & 0.051 \\ SeqGAN & 0.143 & 0.128 & 0.047 & 0.054 & 0.022 & 0.072 & 0.225 & 0.178 & 0.627 & 0.065 & 0.0034 & 0.051 \\ GAIL & 0.089 & 0.120 & 0.040 & 0.231 & 0.005 & 0.050 & 0.226 & 0.118 & 0.167 & 0.087 & 0.0049 & 0.062 \\ \hline SAND & **0.0096** & **0.084** & **0.025** & 0.036 & **0.002** & **0.009** & **0.018** & **0.014** & 0.062 & **0.0044** & **0.00032** & **0.0069** \\ \hline \end{tabular} \end{table} Table 1. Overall performance of SAND and baselines in terms of the JSD-based metrics, and lower results are better. Bold denotes the best results and underline denotes the second-best results. The improvements are significant (p-value-0.05). However, without investigating the deeper mechanism behind observed activities, their performance is still limited. ### Ablation Studies (RQ2) The proposed SAND framework consists of two key components: modeling need dynamics and solving the MDPs with GAIL. Besides, we also use the pre-training mechanism. To further validate whether they are indeed crucial for the final performance, we conduct ablation studies on two datasets by comparing the performance of three variants of SAND, including _SAND - need, SAND - GAIL, SAND - pretrain_. Specifically, _SAND - need_ calculates the latent state as (Krishnan et al., 2017) without modeling hierarchical human needs, _SAND - GAIL_ removes the GAIL training framework, and _SAND - pretrain_ starts training from raw data without the pre-training mechanism. The evaluation results are reported in Table 2. We can observe that SAND delivers the best performance on five metrics compared with the variants that are removed with specific designs. Without modeling need dynamics, the performance is reduced significantly, indicating the necessity to consider the intrinsic motivation in human activity simulation. 
Besides, removing the GAIL framework also reduces the data fidelity, which suggests the strong modeling capabilities of generative adversarial mechanisms. In addition, the pre-training mechanism facilitates making full use of the activity data and enables our framework to preview the dependencies and regularities of daily activities before GAIL training, thus it also contributes to the final performance. ### Practical Applications (RQ3) In user-based applications, real-world activity records usually cannot be directly shared due to privacy issues. Under this circumstance, SAND can be used to generate synthetic data to mask sensitive information while retaining the usability of real data. To examine the utility of the generated synthetic data, we perform experiments with synthetic data of two categories: * **Fully synthetic scenario**; Only synthetic data is used in applications, which provides a more robust privacy protection. * **Hybrid scenario**; It combines real and synthetic data, which is widely used in data augmentation settings. We select two representative applications (Krishnan et al., 2017; Krizhevsky et al., 2017) based on the activity data: (1) activity prediction and (2) interval estimation, which are fundamental to many activity-related problems, such as activity recommendation and planning. We utilize a widely-used model, LSTM with attention mechanism, to predict individuals' future activity types based on their historical sequence. As shown in Figure 7, compared with the best baseline, the prediction performance on the dataset generated by our framework is much closer to the performance on the real data, showing the retained utility of the generated data. Figure 8 illustrates that the model trained on the augmented data exhibits significantly better performance than that only trained on the real-world data. Meanwhile, the data augmented by SAND outperforms that by the best baseline. Moreover, the augmented data becomes more useful when the real-world data is of small scale, _e.g._, only with 50 or 100 real-world sequences. These results validate the practical value of the synthetic data. ### Interpretability of Dynamic Needs (RQ4) To validate whether SAND can provide insightful interpretability, we perform a case study on the learned intensity values of different need levels in the simulation process. Figure 9 illustrates the simulated activity sequences of two individuals for one week, together with the corresponding intensity values of three need levels. In terms of the model interpretability, we have two main observations. First, the proposed SAND can generate distinct but lifelike activity sequences that are hard to tell apart from real-world data. Specifically, comparing Figure 9(a) and (b), the two synthetic individuals lead quite personalized lifestyles. 
Individual 1 follows regular working routines with the intensity dynamics of the level-2 need varying periodically, while individual 2 enjoys more freedom \begin{table} \begin{tabular}{|c|c c c c c|c c c c c c|} \hline **Dataset** & \multicolumn{6}{c|}{**Mobile Operator**} & \multicolumn{6}{c|}{**Foursquare**} \\ \hline Metrics (JSD) & MacroInt & MicroInt & DailyAct & ActType & Weekday & Hour & MacroInt & MicroInt & DailyAct & ActType & Weekday & Hour \\ \hline SAND & **0.013** & **0.084** & **0.025** & **0.036** & **0.002** & **0.009** & **0.018** & **0.014** & 0.062 & **0.0044** & **0.00032** & **0.0069** \\ \hline SAND - GAIL & 0.013 & 0.116 & 0.085 & 0.040 & 0.0031 & 0.051 & 0.039 & 0.028 & 0.202 & 0.0051 & 0.0018 & 0.0092 \\ \hline SAND - need & 0.014 & 0.116 & 0.085 & 0.039 & 0.0035 & 0.050 & 0.019 & 0.030 & **0.0085** & 0.0072 & 0.0021 & 0.048 \\ \hline SAND - pretrain & 0.015 & 0.110 & 0.059 & 0.190 & 0.004 & 0.048 & 0.070 & 0.025 & 0.161 & 0.064 & 0.0020 & 0.044 \\ \hline \end{tabular} \end{table} Table 2. Ablation study on SAND variants. Bold denotes the best results and underline denotes the second-best results. Figure 8. Activity prediction in the hybrid scenario. For a different number of real-world sequences, _i.e.,_ 50, 100, 1000, we all add 1000 generated sequences for data augmentation. Figure 7. Activity prediction in the fully synthetic scenario. without working, showing a constantly low intensity of the level-2 need. Second, SAND can simulate human daily activity in an interpretable way with need modeling. As observed from Figure 9, the occurrence of activity not only changes the intensity of the corresponding need level but also affects other levels, indicating that different need levels are interconnected by intensities derived from need states and trigger activities in a cooperative manner. In summary, the above observations demonstrate the interpretability of SAND for simulation outcomes, which is equally important in real-life applications. ## 5. Related Work **Human activity simulation.** Solutions for activity simulation are mainly agent-based modeling (Zhou et al., 2017) with rule-based methods (Kang et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). Specifically, these methods assume that human activities can be described by limited parameters with explicit physical meaning and are governed by transition rules based on psychology and social science theories. With simplified assumptions of human behaviors, agents in the system can be assigned different goals, then they take actions to maximize different attributes. For example, Kim et al. (Kim et al., 2019) propose that human actions are triggered by a cause and give rise to corresponding effects. Besides, considering the multiple behaviors, the priorities of behaviors are determined based on Maslow's hierarchy of needs (Kang et al., 2017; Li et al., 2018; Li et al., 2019). Despite the promising performance under some circumstances, rule-based methods fail to capture complicated activity patterns due to relying on simplified assumptions and thus usually fail to simulate activities in reality. The purpose of activity simulation is different from that of activity prediction (Kang et al., 2017; Li et al., 2019; Li et al., 2019). 
The former emphasizes the simulation results to reproduce and reflect characteristics of real data, but should not be too similar to real data with the goal of protecting user privacy, while the latter highlights to what extent the model can recover the real data. Although deep learning approaches are proposed for activity prediction (Li et al., 2019; Li et al., 2019), the problem of simulating daily activities has been barely explored. **Deep generative models for activity simulation.** Deep generative models, such as generative adversarial networks (GAN) (Chen et al., 2016) and variational autoencoder (VAE) (Li et al., 2019), are promising solutions to simulation. Previous studies (Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019) have also explored the ability of Generative adversarial Imitation Learning (GAIL) to simulate human decision process. Besides, a series of neural temporal point process models (Li et al., 2019; Li et al., 2019; Li et al., 2019) are proposed to model discrete events. Although these models are mainly for discrete event prediction, the learned probability distribution provides opportunities to perform event generation by the sampling operation. Recently, Gupta et al. (Gupta et al., 2019) propose attention-based temporal point process flows to model goal-directed activity sequences. However, it is not appropriate for our research problems as daily activities cannot be represented as a sequence of actions performed to achieve an explicit goal. We propose a knowledge-driven framework based on GAIL, and the incorporation of psychological knowledge is realized by leveraging an ODE-based temporal point process. ## 6. Conclusion In this paper, we investigate the individual activity simulation problem by proposing a novel framework SAND, which integrates deep generative models with well-respected psychological theories. Extensive experiments on two real-world datasets show the superior performance of the proposed framework. Our framework is not strictly limited to Maslow's theories, instead, what we highlight is leveraging neural networks to learn the driving force behind human daily activities, and the choice of knowledge or theory related to such driving force is quite flexible. Importantly, effective modeling of human needs makes it possible to understand human behaviors at a deeper level, which not only benefits the activity simulation in this work but also contributes to many other problems of psychology-informed user modeling. In terms of limitations, we recognize that data-driven models largely depend on high-quality datasets. For example, the shortage of long-term and fine-grained datasets hinders the modeling of needs for esteem and self-actualization. ###### Acknowledgements. This work was supported in part by the National Key Research and Development Program of China under grant 2020YFA0711403, the National Nature Science Foundation of China under 61971267, 61972223, 62171260, and U1936217, the Young Elite Scientists Sponsorship Program by CIC under 2021QMRC001,the Guoqiang Institute, Tsinghua University under 2021GQG1005, and Beijing National Research Center for Information Science and Technology (BNRist). Figure 9. Case study of two generated activity sequences and the learned intensity of different need levels. We select two representative individuals with different activity patterns.
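For readers who wish to reproduce the JSD-based comparison of Sec. 4.1.4, the short sketch below shows one way to evaluate Eq. (9) on a pair of empirical activity distributions with base-2 logarithms. It is only an illustrative reconstruction, not the authors' evaluation code; the function names and the placeholder histograms are ours.

```python
import numpy as np

def entropy_bits(p, eps=1e-12):
    """Shannon entropy (base-2) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return float(-np.sum(p * np.log2(p + eps)))

def jsd(p, q):
    """Jensen-Shannon divergence, Eq. (9): H(M) - (H(P) + H(Q)) / 2, bounded in [0, 1]."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)
    return entropy_bits(m) - 0.5 * (entropy_bits(p) + entropy_bits(q))

# Example: hourly activity histograms (24 bins) of real vs. generated sequences.
real_hours = np.array([5, 2, 1, 1, 1, 2, 8, 15, 20, 12, 10, 9,
                       11, 10, 9, 8, 10, 14, 18, 16, 12, 10, 8, 6])
synthetic_hours = real_hours + np.random.randint(0, 5, size=24)
print(jsd(real_hours, synthetic_hours))   # lower is better
```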
2308.04661
Unified Matrix Factorization with Dynamic Multi-view Clustering
Matrix factorization (MF) is a classical collaborative filtering algorithm for recommender systems. It decomposes the user-item interaction matrix into a product of low-dimensional user representation matrix and item representation matrix. In typical recommendation scenarios, the user-item interaction paradigm is usually a two-stage process and requires static clustering analysis of the obtained user and item representations. The above process, however, is time and computationally intensive, making it difficult to apply in real-time to e-commerce or Internet of Things environments with billions of users and trillions of items. To address this, we propose a unified matrix factorization method based on dynamic multi-view clustering (MFDMC) that employs an end-to-end training paradigm. Specifically, in each view, a user/item representation is regarded as a weighted projection of all clusters. The representation of each cluster is learnable, enabling the dynamic discarding of bad clusters. Furthermore, we employ multi-view clustering to represent multiple roles of users/items, effectively utilizing the representation space and improving the interpretability of the user/item representations for downstream tasks. Extensive experiments show that our proposed MFDMC achieves state-of-the-art performance on real-world recommendation datasets. Additionally, comprehensive visualization and ablation studies interpretably confirm that our method provides meaningful representations for downstream tasks of users/items.
Shangde Gao, Ke Liu, Yichao Fu
2023-08-09T01:58:28Z
http://arxiv.org/abs/2308.04661v2
# Unified Matrix Factorization with Dynamic Multi-view Clustering ###### Abstract. Matrix factorization (MF) is a classical collaborative filtering algorithm for recommender systems. It decomposes the user-item interaction matrix into a product of low-dimensional user representation matrix and item representation matrix. In typical recommendation scenarios, the user-item interaction paradigm is usually a two-stage process and requires static clustering analysis of the obtained user and item representations. The above process, however, is time and computationally intensive, making it difficult to apply in real-time to e-commerce or Internet of Things environments with billions of users and trtillions of items. To address this, we propose a unified matrix factorization method based on dynamic multi-view clustering (MFDMC) that employs an end-to-end training paradigm. Specifically, in each view, a user/item representation is regarded as a weighted projection of all clusters. The representation of each cluster is learnable, enabling the dynamic discarding of bad clusters. Furthermore, we employ multi-view clustering to represent multiple roles of users/items, effectively utilizing the representation space and improving the interpretability of the user/item representations for downstream tasks. Extensive experiments show that our proposed MFDMC achieves state-of-the-art performance on real-world recommendation datasets. Additionally, comprehensive visualization and ablation studies interpretably confirm that our method provides meaningful representations for downstream tasks of users/items. matrix factorization, neural networks, multi-view clustering, recommender systems + Footnote †: journal: + Footnote †: journal: + Footnote †: journal: ## 1. Introduction Recommender system is in great demand nowadays with the rapid growth of various web services, _e.g._, e-commerce and social network (Song et al., 2018; Wang et al., 2019). It helps users find the items of their interest from a massive amount of candidates, making significant contributions in improving user experience as well as increasing business value. Matrix Factorization (MF) (Song et al., 2018) is a classic and effective method for extracting user preferences (user latent vectors) and item features (item latent vectors) from historical data, which performs well in recommendation systems. In its basic form, the MF model, such as singular value decomposition, maps both users and items to a fixed dimensional joint latent factor space, where the user-item interactions are modeled as inner products in this space. The recommendation is formed by the highly corresponding item and user factors, represented by the inferred factor vectors from the item rating patterns. However, the traditional MF process requires significant time and computational resources, making it difficult to apply in real scenarios, especially when dealing with billions of users and tens of billions of items in big data situations. Recently, many variants of MF have been proposed to deal with specific problems in recommendation systems. For example, biased MF (Song et al., 2018) handles biases in ratings. Some deep learning (DL) methods,otherwise, utilize deep neural networks to construct high-order features or model latent non-linear connections, achieving the generalization and improvement of matrix factorization algorithms for recommendation performance (Song et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). 
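To make the basic formulation above concrete — users and items mapped to a joint \(d\)-dimensional latent space, with interactions modeled as inner products — the following minimal numpy sketch predicts one rating and applies a single plain squared-error gradient step. Variable names and values are ours, and the regularization and bias terms of the variants discussed in Sec. 2.1 are deliberately omitted.

```python
import numpy as np

d = 16                                # latent dimension
p_u = 0.1 * np.random.randn(d)        # user latent vector
q_i = 0.1 * np.random.randn(d)        # item latent vector
r_ui, lr = 4.0, 0.01                  # observed rating and learning rate

r_hat = p_u @ q_i                     # predicted rating: inner product in latent space
err = r_ui - r_hat                    # prediction error

p_old = p_u.copy()                    # keep the old user vector for a symmetric update
p_u += lr * err * q_i                 # gradient step on the squared error w.r.t. p_u
q_i += lr * err * p_old               # gradient step on the squared error w.r.t. q_i
```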
However, complex optimization designs were required for matrix factorization in previous works. Furthermore, the problem of the MF process being a black box, where the results are uninterpretable, has not been solved (Beng et al., 2018). Additionally, the latent representation space is often not fully and effectively utilized, resulting in high dimensionality and redundant information in the latent space. In this work, we endeavor to explore an effective and interpretable MF scheme for recommender system. Our goal is to maximize the utilization of the latent representation space in acquiring user user/item representations, while significantly reducing time and computing resources. Additionally, we aim to explore the interpretability of user/item representations as much as possible and apply it to downstream specific tasks. Here, we propose an unified Matrix Factorization with Dynamic Multi-view Clustering (MFDMC). The contributions of this work are summarized as follows: * We propose a unified matrix factorization method for recommender system, known as MFDMC. MFDMC combines dynamic clustering and matrix decomposition to fully leverage the representation space, resulting in significant time and computational resource savings. * We conduct comprehensive visualization to validate the interpretability of user/item representations obtained from multi-view clustering. * Extensive experiments on six real-world datasets demonstrate that MFDMC achieves state-of-the-art performance and can be applied to downstream computer vision tasks with constrained representation space. ## 2. Related Works Matrix Factorization has garnered significant attention due to its efficacy in recommender systems, resulting in numerous studies focused on optimizing and improving its interpretability (Beng et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). Additionally, multi-view learning is extensively employed to enhance the performance of models. This section provides a brief overview of these techniques. ### Matrix Factorization The Matrix Factorization (MF) algorithm evolves from Singular Value Decomposition (SVD), which can only decompose dense matrices. However, the user-item interaction matrix is usually extremely sparse. Therefore, in order to decompose the rating matrix using SVD, the missing values of the matrix must be filled first. This can cause the following two problems: (1) Filling missing data can significantly increase the amount of data and lead to an increase in the complexity of the algorithm. (2) Improper padding methods can result in data distortion. Since the SVD algorithm does not perform well with scoring matrices, researchers have turned to investigating whether they can decompose matrices considering only the existing scores. Matrix decomposition optimization methods such as FunkMF (Krizhevsky et al., 2012), Probabilistic Matrix Factorization (PMF) (Krizhevsky et al., 2012), and BiasedMF (Krizhevsky et al., 2013) have been proposed. Concretely, Simon Funk(Krizhevsky et al., 2012) introduced stochastic gradient descent to optimize Eq. 1. This algorithm traverses all rating records in the training set. For each training sample, the algorithm predicts the rating using the user/item embedding and computes the prediction error. 
\[\min_{p,q}\quad\sum_{(u,i)\in\mathcal{K}}(r_{ui}-q_{i}^{T}p_{u})^{2}+\lambda(\| q_{i}\|^{2}+\|p_{u}\|^{2}) \tag{1}\] And BiasedMF (Krizhevsky et al., 2013) takes individual biases into account, i.e., much of the observed variation in ratings is influenced by the user or item independently, rather than by the interaction between the user and the item. So BiasedMF divides the rating into four parts: the item and user biases, the interaction between them and global average as Eq. 2. Where \(b_{i}\), \(b_{u}\), and \(\mu\) are the user, item bias and the global average respectively. \[\hat{r_{ui}}=q_{i}^{T}p_{u}+b_{i}+b_{u}+\mu \tag{2}\] Pu et al. proposed a model-based method called Asymmetric SVD (Pue et al., 2017) to tackle the cold-start problem. Bi et al. also proposed a method called Group-specific SVD (Bai et al., 2018), where users/items are grouped into clusters and their embeddings are influenced by the cluster they belong to. In recent years, the rapid development of deep learning has led to the integration of deep learning methods into Matrix Factorization (MF)(Krizhevsky et al., 2012; Li et al., 2017). This integration aims to generate representational information by constructing higher-order features or potential non-linear connections of models (Krizhevsky et al., 2013; Li et al., 2017), with the goal of improving both model performance and generalization ability. ### Multi-view Learning The idea of Multi-view learning (Li et al., 2017; Li et al., 2017; Li et al., 2017) is widely utilized in various models such as multi-head attention in Transformer (Krizhevsky et al., 2013) and multi-interest in Multi-Interest Network with Dynamic routing (MIND) (Krizhevsky et al., 2013). In the case of Transformer, multi-head attention allows the model to simultaneously focus on information from different representations of the information subspace in different locations. Similarly, in MIND, Li et al. (Li et al., 2017) employ multiple vectors to represent a user, thereby encoding different aspects of the user's interests. Both of these works demonstrate the effectiveness of the multi-view approach. In our method, we extend the concept of multi-view learning to an environment, where we design dynamic multi-view clustering to represent multiple roles of users/items, effectively leveraging the representation space and subsequently applying them to downstream recommendation tasks. ### Interpretability in Recommender Systems In the Matrix Factorization (MF) technique, the representation vectors of users and items are embedded in a low-dimensional space. Each dimension of this space represents a specific factor that influences the user's decision. However, the precise meaning of each factor is unclear, making it difficult to interpret predictions or recommendations. Several interpretable recommendation models have been proposed based on matrix factorization methods (Krizhevsky et al., 2012). In the Explicit Factor Models (EFM) (Bau et al., 2018) approach, Zhang et al. extract explicit features from user reviews and assign each latent dimension to a specific feature. This makes the entire process traceable and provides an explicit interpretation. In another approach, Chen et al. construct a user-item-feature cube by extracting features from user reviews (Chen et al., 2018). They then employ a pair-wise learning-to-rank method to rank the features and items. 
The Sentiment Utility Logistic Model (SULM) (Bau et al., 2018), presented by Bauman et al., incorporates user sentiments on these features into MF to predict ratings. This enables the learning of feature recommendations for each item, which can be used as explanations. In the Explainable Matrix Factorization (EMF) method (Bau et al., 2018), an 'interpretable regularizer' is added to the objective function of MF. This generates relevant-user interpretations. Among the aforementioned methods, only EFM offers dimension-wise interpretability. Additionally, all of them require extra information beyond the user-item interaction matrix, except for EMF. However, both EFM and EMF underperform in terms of Root Mean Square Error (RMSE), which is an important metric in experiments. ## 3. MF with Dynamic Multi-View Clustering The purpose of matrix decomposition is to characterize users and items by decomposing the user-item interaction matrix into user \begin{table} \begin{tabular}{c|l} \hline \hline Notations & Meaning \\ \hline \(d\) & Dimension of latent vector, \(d\in\mathbb{Z}\) \\ \(m,n\) & Number of users/items, \(m,n\in\mathbb{Z}\) \\ \(P,Q\) & Matrix of users/items, \(P,Q\in\mathbb{R}^{m\times d}\) \\ \(p_{i},q_{i}\) & The \(i^{th}\) user/item latent vector, \(p_{i},q_{i}\in\mathbb{R}^{d}\) \\ \(R\) & User-item interaction matrix, \(R\in\mathbb{R}^{m\times n}\) \\ \(t,v\) & Number of centers, views, \(e,v\in Z\) \\ \(b\) & Dimension of centers, \(b\in\mathbb{Z}\) and \(b=d/v\) \\ \(C^{user},C^{item}\) & The centers of user/item, \(C\in\mathbb{R}^{m\times e\times b}\) \\ \(c_{i,j}^{user},c_{i,j}^{item}\) & The \(i^{th}\) center in \(j^{th}\) view of user/item, \(c_{i,j}\in\mathbb{R}^{b}\) \\ \(W_{i}^{user},W_{i}^{item}\) & Weight of \(i^{th}\) user/item for centers, \(W\in\mathbb{R}^{m\times e}\) \\ \(w_{i,j}^{user},w_{i,j}^{item}\) & Weight of user/item \(i^{th}\) center in \(j^{th}\) view, \(w_{i,j}\in\mathbb{R}\) \\ \(\rho,\eta,\gamma,\psi,\lambda\) & Weight parameters \\ \hline \hline \end{tabular} \end{table} Table 1. The notations used throughout the paper and item matrices in a lower-dimensional latent space, as formulated in Eq. 3. Each row of matrix \(P\) or \(Q\) represents a latent vector for user or item representation. \[R=P\cdot Q^{T} \tag{3}\] In this section, we propose a unified matrix factorization with Dynamic Multi-view Clustering (MFDMC), in a end-to-end training paradigm, to obtain interpretable representations and make full use of the representation space. The overview of MFDMC is shown in Fig. 1, where the user/item representation provides rich information about users' interest preferences, such as preferences for comedy or the perceived humor in movies. Therefore, latent vectors are divided into sub-vectors in \(v\) views of interest. In each view of interest, the user/item is assigned to one of the clusters, which represents the preference, and the cluster center is selected. This way, the user/item latent vector can be derived as shown in Eq. 4. \[\begin{split} p_{i}&=\bigoplus_{j=0}^{p}\sum_{i=0}^{ t}c_{i,j}^{user}w_{i,j}^{user}\\ q_{i}&=\bigoplus_{j=0}^{p}\sum_{i=0}^{t}c_{i,j}^{ item}w_{i,j}^{item}\end{split} \tag{4}\] The numbers of views \(v\) for users and items are the same, while the center numbers \(e\) can vary. Loss functions are designed for cluster centers, user/item weights, and target user-item interaction ratings to achieve the aforementioned goals, which are detailed in the subsections. 
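Before detailing the individual loss terms, the sketch below illustrates how Eq. (4) assembles a single user representation from view-wise cluster centers: within each view the softmax-normalized weights select a mixture of that view's centers, and the per-view results are concatenated into the \(d\)-dimensional latent vector (with \(d=v\times b\), as in Table 1). This is a minimal illustrative reconstruction with our own variable names, not the released implementation; an item vector \(q_i\) is built identically and the rating is then predicted as \(p_u^{T}q_i\).

```python
import numpy as np

v, t, b = 4, 10, 4                        # number of views, centers per view, center dimension
d = v * b                                 # latent dimension, d = v * b

C_user = np.random.randn(v, t, b)         # learnable user cluster centers, one set per view
logits = np.random.randn(v, t)            # one user's raw affinities to the centers

# View-wise softmax normalization of the weights (Eq. 8).
W_user = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Eq. (4): concatenate, over the v views, the weighted sums of that view's centers.
p_u = np.concatenate([W_user[j] @ C_user[j] for j in range(v)])

assert p_u.shape == (d,)
```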
### Cluster centers In this section, our clustering method aims to address the following two problems: (1) How to fully utilize the latent space in the cluster centers? (2) How to manage the number of clusters during training? Firstly, the latent space is expected to be fully utilized, which means that the cluster centers should be spread out. We design the loss function as shown in Equation 5: \[\begin{split} loss_{1}^{user}&=\sum_{j=0}^{p}\sum_{ \alpha,\beta}l(c_{\alpha,j}^{user},c_{\beta,j}^{user})\\ loss_{1}^{item}&=\sum_{j=0}^{p}\sum_{\alpha,\beta }l(c_{\alpha,j}^{item},c_{\beta,j}^{item})\\ l(c_{\alpha},c_{\beta})&=max\{0,\rho-\mathcal{D}( c_{\alpha}-c_{\beta})\}\end{split} \tag{5}\] Here, \(\mathcal{D}\) represents any distance function, and \(\rho\) defines the maximum allowed proximity between cluster centers. Additionally, to bring the latent vectors closer to their corresponding cluster centers, we measure the average distance between a user/item and its cluster center using Equation 6: \[\begin{split} loss_{1,e}^{user}&=\frac{1}{N}\sum_{j =0}^{p}\sum_{i=0}^{t}\sum_{k=0}^{N,k\in S_{i,j}}\|c_{i,j}^{user}-p_{k}^{user} \|^{2}\\ loss_{1,e}^{item}&=\frac{1}{N}\sum_{j=0}^{p}\sum_{i=0 }^{t}\sum_{k=0}^{N,k\in S_{i,j}}\|c_{i,j}^{item}-q_{k}^{item}\|^{2}\end{split} \tag{6}\] where \(S_{i,j}\) is the set of user/item that belong to the \(i^{th}\) cluster in the \(j^{th}\) view. Then, to make full use of the representation space, the loss function we want to optimize for cluster centers can be defined as: \[loss1=loss_{1}^{user}+loss_{1}^{item}+loss_{1,e}^{user}+loss_{1,e}^{item} \tag{7}\] Considering the difficulty of determining the number of cluster centers that accurately represent potential space, it is also challenging to ascertain if there is redundancy in our problem. In our MFDMC, we employ a dynamic update mechanism to update and manage the cluster centers' count. The Algorithm 1 provides an outline on how to handle cluster centers while calculating cluster losses. Specifically, after each iteration of \(I_{p}\), we calculate the weights of each center and remove those whose weights fall below the threshold \(\psi\). ``` Input: Cluster centers: \(\mathcal{C}\); Weights of user/item: \(\mathcal{W}\); Current epoch: \(i\). Output: New cluster centers: \(\mathcal{C}^{\prime}\); Clustering loss: \(\mathcal{L}_{1,e}\) 1:if\(i>I_{p}\)then 2:\(\mathcal{W}\leftarrow\) cluster-wise mean of user/item in \(W\). 3:\(\mathcal{C}^{\prime},\mathcal{W}\leftarrow\) Remove the \(\mathcal{C}\) and \(\mathcal{W}\) where \(\mathcal{\overline{W}}<\psi\). 4:else 5:\(\mathcal{C}^{\prime}\leftarrow\mathcal{C}\). 6:endif 7:\(\mathcal{P},\mathcal{Q}\leftarrow\mathcal{W}\) weighted sum \(\mathcal{C}^{\prime}\). 8:\(\mathcal{L}_{1,e}\leftarrow\) Compute using \(\mathcal{C}^{\prime}\), \(\mathcal{P},\mathcal{Q}\) and Eq.6. 9:return\(\mathcal{C}^{\prime}\), \(\mathcal{L}_{1,e}\) ``` **Algorithm 1** Dynamic clustering algorithm ### User/item weights From the perspectives of optimization and interpretability, user/item should be explicitly grouped into clusters in an interest view. Therefore, the \(softmax\) function is used to normalize the weights in Figure 1. Illustration of our MFDMC. The User/item is embedded into Weights \(t\) centers in \(v\) Views. In each view, the representation is weighted sum of the Cluster centers. Equation 8, and entropy is used in the loss function 10 to reduce uncertainty. \[W^{user^{\prime}}=softmax(W^{user}) \tag{8}\] Additionally, the number of centers varies from view to view. 
In views with a different number of centers, the losses behave differently even if the weight distribution is the same. For example, suppose \(view_{1}\) and \(view_{2}\) have 3 and 10 centers respectively, and the weight distribution of users is uniform in both. Although the distribution is equally poor in both views, the loss of \(view_{1}\) is almost at the extreme point and much greater than the loss of \(view_{2}\). This can lead to unbalanced optimization, meaning that it is much more difficult to optimize the losses in views with more centers. Therefore, the mapping function in Eq. 9 is designed to transform the weights used in the entropy loss of Eq. 10 and solve this problem. As shown in Fig. 2, the uniform distribution loss (worst case) is always the extreme value even in views with a different number of centers, where \(e\) is Napier's constant (the base of the natural logarithm). \[w^{\prime\prime}=\left\{\begin{array}{cl}\frac{e}{w^{\prime}}&0\leq w^{\prime}\leq\frac{1}{t}\\ \frac{w^{\prime}-1}{t-1}(1-\frac{1}{e})+\frac{1}{e}&\frac{1}{t}\leq w^{\prime}\leq 1\end{array}\right. \tag{9}\] \[\begin{split}loss_{2}^{user}&=-\sum_{j=0}^{v}\sum_{i=0}^{m}w_{i,j}^{user^{\prime\prime}}\log(w_{i,j}^{user^{\prime\prime}})\\ loss_{2}^{item}&=-\sum_{j=0}^{v}\sum_{i=0}^{m}w_{i,j}^{item^{\prime\prime}}\log(w_{i,j}^{item^{\prime\prime}})\end{split} \tag{10}\] Finally, the loss for the user/item weights is taken to be: \[loss_{2}=loss_{2}^{item}+loss_{2}^{user} \tag{11}\] **Evaluation metric.** Root mean square error (RMSE), defined in Eq. 12, is used both as the evaluation metric and as one of the losses. \[loss_{3}=\sqrt{\frac{1}{N}\sum_{i=0}^{N}(y_{i}-r_{i})^{2}} \tag{12}\] Finally, the total objective of MFDMC can be described as: \[loss=\eta loss_{1}+\gamma loss_{2}+loss_{3} \tag{13}\] ## 4. Experiments In this section, we present extensive experimental results on six real-world datasets. We conduct a thorough comparison with matrix factorization and visualization techniques to validate the superiority and interpretability of our MFDMC model. ### Datasets & Implementation Details #### 4.1.1. Datasets The experiments are conducted on six datasets, each of which contains files of users, movies, and ratings. Only the rating files are used in the experiments. An overview of the datasets is provided in Table 2. For all the datasets, we randomly select 80% of the data for training the model, 10% for validation, and report the results on the remaining 10% for testing. _MovieLens_ (Moving et al., 2016) collects user ratings for movies; each user in MovieLens-1M and MovieLens-100k has at least 20 ratings. _Amazon-video_ (Moving et al., 2016) is a subset of the Amazon review dataset, in which each rating indicates how the user rated the instant video. Each user rated at least one video. _Epinions1_ collects subjective reviews of users about many different types of items; at least one rating exists for each user. _Books-across_ (Shen et al., 2016) contains book ratings, with more than one rating per user. _Jester_ (Moving et al., 2016) collects users' ratings on jokes, where each user has rated more than 15 jokes. Footnote 1: [http://www.truslet.org/datasets/downloaded_epinions/](http://www.truslet.org/datasets/downloaded_epinions/) #### 4.1.2. Parameter Setting of MFDMC As stated in Eq. 5, \(\mathcal{D}\) is used to measure the distance between user/item and its cluster center.
In our experimental setup, we use the Euclidean distance as the metric, as shown in Eq.14 below \[\mathcal{D}(c_{\alpha},c_{\beta})=\|c_{\alpha}-c_{\beta}\|^{2} \tag{14}\] Furthermore, cluster centers are view-wise normalized by Eq.15 \[C=\frac{C-min(C)}{max(C)-min(C)} \tag{15}\] In our all experiments, \(I_{d}\) is set to be 40, which means, in the first 40 epochs, the cluster centers are not removed dynamically. The threshold to remove the cluster center is \(\frac{1}{2}\) and we keep at least 3 centers in one view. The weight parameters, \(\eta\) and \(\gamma\) of \(loss_{1}\) and \(loss_{2}\) increase gradually with the epochs. As the numbers of views for users and items are the same, centers in the same location in the view can be shared. And the number of cluster centers \(t\) in each view is 10. Weight decay with a regularization parameter \(\lambda\) is also added to our model. ### Results & Analysis #### 4.2.1. Results Firstly, we compared our approach with other baseline models. Table 3 presents the RMSE values for four methods on extensive datasets. Three observations were made. (1)MFDMC consistently outperformed the other competitors, such as achieving approximately 0.025 RMSE improvement compared to FunkMF on the MovieLens-100k dataset. (2) Despite the increase in the dimension of the latent space from 16 to 60 in FunkMF and BiasedMF, the \begin{table} \begin{tabular}{l|l l l l} \hline \hline Dataset & \(m\) & \(n\) & \(N\) & Range \\ \hline MovieLens-1M & 6,040 & 3,952 & 1,000,209 & [1, 5] \\ MovieLens-100k & 943 & 1,682 & 100,000 & [1, 5] \\ Amazon-video & 424,560 & 23,745 & 583,933 & [1, 5] \\ Epinions & 40,163 & 139,738 & 664,824 & [1, 5] \\ Books-across & 105,283 & 340,395 & 1,149,780 & [0, 10] \\ Jester & 73,421 & 100 & 3,519,446 & [-10, 10] \\ \hline \hline \end{tabular} \end{table} Table 2. Statistics of the datasets. The symbol \(N\) indicates the number of records. And \(Range\) indicates the range of ratings. Figure 2. Mapping function of Eq. 9 for the weight. Using the function in the middle of the Figure, we can map the weights on the left to the right, which facilitates a relatively balanced optimization of weight in different views. RMSE did not improve or even worsened. These models were unable to fully utilize the representation space. However, in MFDMC, where the dimension of the latent space is 16 (same as FunkMF), the RMSE was significantly better. It was even observed that MFDMC with a lower dimension of 12 (as shown in Table 4) could achieve comparable results to other methods. (3) The ablation studies revealed that our approach was effective not only due to the deep structure of our model but also because of the loss functions we designed. Without these loss functions, the RMSE of our model was only slightly better than or even worse than the other methods. However, with the inclusion of the loss functions, the RMSE consistently improved. Besides, our approach dynamically clusters users and items from various perspectives by decomposing the interaction matrix. To provide a clear visualization of the clustering results, we utilized the results of experiment No.4 in Table 4. As depicted in the Fig. 3, every user and item is assigned to a specific cluster. Although the cluster boundaries may not be distinct, we can readily determine the cluster to which each user or item belongs based on their weights assigned to each center. same number of views (\(v\)). Based on these observations, we suggest the following parameter settings for \(v\), \(d\), and \(b\):1. 
Determine the appropriate number of views (\(v\)) based on the number of classes you want to cluster the items into. In the case of MovieLens-1M, there are 16 categories of movies, and our goal was to group them into six main categories. 2. If you need to save time and memory resources, you can share centers between users and items. 3. Keep the dimension of centers (\(b\)) low, as a small \(b\) is sufficient to adequately represent the properties of a cluster. For example, in the No.4 experiment, we found that \(b=2\) worked well. #### 4.2.3. Interpretability analysis We take one _view_ of the No.4 experiment in Table 4 as an example of how to interpret the user/item representation. By counting the movies in the clusters of this view, it is clear that movies in the animation, children, and comedy categories are spread over several clusters, while movies in the other categories are almost all in one cluster. Hence \(view_{1}\) represents the movies in these categories. Specifically, for any of the clusters in the user view, by counting the user's ratings on the movies in these categories, it can be inferred which cluster represents a stronger preference and which cluster represents movies of higher quality in that category. We randomly select a user and a movie and obtain their embeddings, each of which is the weighted sum of centers in \(view_{1}\). The overview is shown in Fig. 6. Through the analysis above, the meaning of each center is obtained; therefore, when a user/item belongs to a certain cluster, it also inherits the properties of that cluster, i.e., the category and quality of the movie or the user's preferences. Moreover, additional experiments on CV tasks were conducted to verify the interpretability. The results are shown in Fig. 4. In the color view of Fig. 4(a), the green, red, and blue geometric shapes are grouped together respectively. In the shape view of Fig. 4(b), although some of the shapes are incorrectly clustered, the green rectangles and red rectangles, blue circles and red circles, and green triangles and red triangles are correctly clustered together. Fig. 4(c) shows the t-SNE results of the entire embedding, where the 9 classes of shapes are correctly clustered. Figure 4. The t-SNE results of experiments on CV. Fig. 4(a), Fig. 4(b), and Fig. 4(c) show the t-SNE results of shape view, color view, and the entire embedding respectively. Figure 5. Performance (RMSE) comparison of applying different configurations. The horizontal and vertical coordinates are the experiment number and RMSE respectively. Figure 6. Illustration of interpreting a representation. The red circles represent the cluster centers, while the squares depict the embedding of centers and weights. The transparency of each shape indicates its value, and the number on the center indicates the position of its corresponding weights. ## 5. Conclusion In this study, we propose a novel approach called Unified Matrix Factorization with Dynamic Multi-View Clustering (MFDMC). Our motivation stems from the observation that the representation space is not fully utilized in traditional matrix factorization (MF) algorithms, leading to uninterpretable user/item representations. Additionally, downstream clustering tasks require significant additional time and resources. To address these limitations, we introduce dynamic multi-view clustering to MF. By incorporating dynamic multi-view clustering into MF, our proposed method not only enhances the interpretability of representations but also optimally utilizes the representation space. We validate our approach through extensive experiments on massive datasets, and the results demonstrate that our proposed MFDMC surpasses existing MF methods. Moreover, the dynamic multi-view clustering approach that we introduce effectively utilizes the representation space and enhances the interpretability of representations. As for future work, we plan to further improve our approach and extend its applicability to a broader range of models.
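As a closing illustration of the dynamic clustering component, the sketch below mimics the center-pruning step outlined in Algorithm 1: after a warm-up number of epochs, every view drops the centers whose average assignment weight falls below the threshold \(\psi\), while always keeping a minimum number of centers per view. The function name, array shapes, and the particular threshold value are our own assumptions for readability; only the overall logic follows the paper.

```python
import numpy as np

def prune_centers(C, W, epoch, warmup=40, psi=0.05, min_centers=3):
    """C: (v, t, b) cluster centers; W: (num_entities, v, t) softmax weights.

    Returns per-view lists of kept centers and the matching weight columns,
    since views may end up with different numbers of centers.
    """
    if epoch <= warmup:                       # no pruning during the warm-up phase
        return [C[j] for j in range(C.shape[0])], [W[:, j, :] for j in range(C.shape[0])]
    kept_C, kept_W = [], []
    for j in range(C.shape[0]):               # iterate over views
        mean_w = W[:, j, :].mean(axis=0)      # cluster-wise mean weight over all users/items
        keep = np.flatnonzero(mean_w >= psi)
        if keep.size < min_centers:           # always keep at least a few centers per view
            keep = np.argsort(mean_w)[-min_centers:]
        kept_C.append(C[j][keep])
        kept_W.append(W[:, j, :][:, keep])
    return kept_C, kept_W
```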
2303.11678
Full or Weak annotations? An adaptive strategy for budget-constrained annotation campaigns
Annotating new datasets for machine learning tasks is tedious, time-consuming, and costly. For segmentation applications, the burden is particularly high as manual delineations of relevant image content are often extremely expensive or can only be done by experts with domain-specific knowledge. Thanks to developments in transfer learning and training with weak supervision, segmentation models can now also greatly benefit from annotations of different kinds. However, for any new domain application looking to use weak supervision, the dataset builder still needs to define a strategy to distribute full segmentation and other weak annotations. Doing so is challenging, however, as it is a priori unknown how to distribute an annotation budget for a given new dataset. To this end, we propose a novel approach to determine annotation strategies for segmentation datasets, whereby estimating what proportion of segmentation and classification annotations should be collected given a fixed budget. To do so, our method sequentially determines proportions of segmentation and classification annotations to collect for budget-fractions by modeling the expected improvement of the final segmentation model. We show in our experiments that our approach yields annotations that perform very close to the optimal for a number of different annotation budgets and datasets.
Javier Gamazo Tejero, Martin S. Zinkernagel, Sebastian Wolf, Raphael Sznitman, Pablo Márquez Neila
2023-03-21T08:41:54Z
http://arxiv.org/abs/2303.11678v1
# Full or Weak annotations? ###### Abstract Annotating new datasets for machine learning tasks is tedious, time-consuming, and costly. For segmentation applications, the burden is particularly high as manual delineations of relevant image content are often extremely expensive or can only be done by experts with domain-specific knowledge. Thanks to developments in transfer learning and training with weak supervision, segmentation models can now also greatly benefit from annotations of different kinds. However, for any new domain application looking to use weak supervision, the dataset builder still needs to define a strategy to distribute full segmentation and other weak annotations. Doing so is challenging, however, as it is a priori unknown how to distribute an annotation budget for a given new dataset. To this end, we propose a novel approach to determine annotation strategies for segmentation datasets, whereby estimating what proportion of segmentation and classification annotations should be collected given a fixed budget. To do so, our method sequentially determines proportions of segmentation and classification annotations to collect for budget-fractions by modeling the expected improvement of the final segmentation model. We show in our experiments that our approach yields annotations that perform very close to the optimal for a number of different annotation budgets and datasets. ## 1 Introduction Semantic segmentation is a fundamental computer vision task with applications in numerous domains such as autonomous driving [11, 43], scene understanding [45], surveillance [50] and medical diagnosis [9, 18]. As the advent of deep learning has significantly advanced the state-of-the-art, many new application areas have come to light and continue to do so too. This growth has brought and continues to bring exciting domain-specific datasets for segmentation tasks [19, 29, 32, 52, 6]. Today, the process of establishing machine learning-based segmentation models for any new application is relatively well understood and standard. Only once an image dataset is gathered and curated, can machine learning models be trained and validated. In contrast, building appropriate datasets is known to be difficult, time-consuming, and yet paramount. Beyond the fact that collecting images can be tedious, a far more challenging task is producing ground-truth segmentation annotations to subsequently train (semi) supervised machine learning models. This is mainly because producing segmentation annotations often remains a manual task. As reported in [4], generating segmentation annotations for a single PASCAL image [15] takes over 200 seconds on average. This implies over 250 hours of annotation time for a dataset containing a modest 5'000 images. What often further exacerbates the problem for domain-specific datasets is that only the dataset designer, or a small group of individuals, have enough expertise to produce the annotations (, doctors, experts, etc.), making crowd-sourcing ill-suited. To overcome this challenge, different paradigms have been suggested over the years. Approaches such as Active Learning [7, 8, 26] aim to iteratively identify subsets of images to annotate so as to yield highly performing models. Transfer learning has also proved to be an important tool in reducing annotation tasks [13, 36, 25, 17, 24, 30]. 
For instance, [37] show that training segmentation models from scratch is often inferior to using pre-training models derived from large image classification datasets, even when the target application domain differs from the source domain. Finally, weakly-supervised methods [2, 40] combine pixel-wise annotations with other weak annotations that are faster to acquire, thereby reducing the annotation burden. In particular, Papandreou _et al_. [40] showed that combinations of strong and weak annotations (_e.g_., bounding boxes, keypoints, or image-level tags) delivered competitive results with a reduced annotation effort. In this work, we rely on these observations and focus on the weakly supervised segmentation setting. In the frame of designing annotation campaigns, weakly-supervised approaches present opportunities for efficiency as well. Instead of completely spending a budget on a few expensive annotations, weakly-supervised methods allow a proportion of the budget to be allocated to inexpensive, or weak, labels. That is, one could spend the entire annotation budget to manually segment available images, but would ultimately lead to relatively few annotations. Conversely, weak annotations such as image-level labels are roughly 100 times cheaper to gather than their segmentation counterparts [4]. Thus, a greater number of weakly-annotated images could be used to train segmentation models at an equal cost. In fact, under a fixed budget, allocating a proportion of the budget to inexpensive image-level class labels has been shown to yield superior performance compared to entirely allocating a budget to segmentation labels [4]. Yet, allocating how an annotation budget should be distributed among strong and weak annotations is challenging, and inappropriate allocations may severely impact the quality of the final segmentation model. For example, spending the entire budget on image-level annotations will clearly hurt the performance of a subsequent segmentation model. Instead, a naive solution would be to segment and classify a fixed proportion of each (_e.g_., say 80% - 20%). Knowing what proportion to use for a given dataset is unclear, however. Beyond this, there is no reason why the same fixed proportion would be appropriate across different datasets or application domains. That is, it would be highly unlikely that the datasets shown in Fig. 1 all require the same proportion of strong and weak annotations to yield optimal segmentation models. Despite its importance, choosing the best proportion of annotation types remains a largely unexplored research question. Weakly-supervised and transfer-learning methods generally assume that the annotation campaign and the model training are independent and that all annotations are simply available at training time. While active learning methods do alternate between annotation and training, they focus on choosing optimal samples to annotate rather than choosing the right type of annotations. Moreover, most active learning methods ignore constraints imposed by an annotation budget. More notable, however, is the recent work of Mahmood _et. al._[33, 34] which aims to determine what weak and strong annotation strategy is necessary to achieve a target performance level. While noteworthy, this objective differs from that here, whereby given a fixed budget, what strategy is best suited for a given new dataset? To this end, we propose a novel method to find an optimal budget allocation strategy in an online manner. 
Using a collection of unlabeled images and a maximum budget, our approach selects strong and weak annotations, constrained by a given budget, that maximize the performance of the subsequent trained segmentation model. To do this, our method iteratively alternates between partial budget allocations, label acquisition, and model training. At each step, we use the annotations performed so far to train multiple models to estimate how different proportions of weak and strong annotations affect model performance. A Gaussian Process models these results and maps the number of weak and strong annotations to the expected model improvement. Computing the Pareto optima between expected improvement and costs, we choose a new sub-budget installment Figure 1: Illustration of different semantic segmentation applications; OCT: Pathologies of the eye in OCT images, SUIM: Underwater scene segmentation [19], Cityscape: street level scene segmentation [11], PASCAL VOC: natural object segmentation. and its associated allocation so to yield the maximum expected improvement. We show in our experiments that our approach is beneficial for a broad range of datasets, and illustrate that our dynamic strategy allows for high performances, close to optimal fixed strategies that cannot be determined beforehand. ## 2 Related work ### Weak annotations for segmentation Weakly supervised semantic segmentation (WSSS) relies on coarser annotations, such as bounding boxes [46], scribbles [31, 49] or image-level classification labels [1], to train a segmentation network. WSSS methods have often employed saliency maps as weak annotations for segmentation models, as these are typically obtained from CAM [55], which leverages image-level classification annotation. These methods then focus on refining the saliency maps with a variety of techniques [16, 28]. Others make use of attention to achieve coarse segmentations [20, 23]. Conversely, [54] combined annotations in the form of bounding boxes and image-level labels to accurately generate image graphs, to be used by a graph neural network to predict node values corresponding to pixel labels. In this context, the work in [33] and [34] are close to this one, whereby their objective is to determine what annotation strategy over annotation types is likely to yield a target performance level. ### Transfer learning Due to the limited availability of annotated image data in some domains, it is now common to use neural networks pre-trained on large image classification tasks [12] for subsequent target tasks. Specifically, in cases where the target task has limited data or annotations, this has been shown to be particularly advantageous. Among others, this practice is now widely used in medical imaging and has been linked to important performance gains after fine-tuning [13, 14, 24, 36, 48]. Efforts are now pivoting towards the use of in-domain pre-training, avoiding the leap of faith that is often taken with Imagenet [17, 30]. In [30], the model is pre-trained on ChestX-ray14 [51] to more accurately detect pneumonia in chest X-ray images from children. In [17], the authors show that joint classification and segmentation training, along with pre-training on other medical datasets that have domain similarity, increases segmentation performances with respect to the segmentation using Imagenet-based pre-training. Alternatively, cross-task methods seek to transfer features learned on one task (_e.g_. classification, normal estimation, etc.) to another, usually more complex one. 
Along this line, Taskonomy [53] explored transfer learning capabilities among a number of semantic tasks and built a task similarity tree that provided a clustered view of how much information is available when transferring to other tasks. Similarly, [37] performed an extensive study of cross-task transfer capabilities for a variety of datasets, reaching the conclusion that Imagenet pre-training outperforms random initialization in all cases, but further training on related tasks or domains also brings additional benefits. ### Active learning In active learning, the goal is to train a model while querying an oracle to label new samples that are expected to improve the model's accuracy. In computer vision, it has been applied to image classification [41, 22] or semantic segmentation [44, 3, 5] among others. As a byproduct, Active learning has also been used as a way to reduce labeling time. For example, [27] describes a method that couples Reinforcement Learning and Active Learning to derive the shortest sequence of annotation actions that will lead to object detection within an image. Others have focused on speeding up this process via eye-tracking [38] or extreme clicking [39]. As such, Active Learning is related to the present work in the sense that our approach is adaptive, but differs in that our method determines what annotation types should be collected under a constrained budget instead of predicting at each time step which samples should be added to the annotated set. ## 3 Method Training segmentation models using a combination of expensive pixel-wise annotations and other types of cheaper annotations, such as image-wise labels or single-pixel annotations, is known to be beneficial, as is using cross-task transfer learning techniques [37]. This is motivated by empirical findings showing that, under a limited annotation budget, allocating a proportion of the budget to inexpensive image-level class labels led to superior performance compared to allocating the budget entirely to segmentation labels [4]. However, the optimal proportion of the budget to allocate per annotation type is a priori unknown and data-dependent. Thus, the goal of our method is to find this data-specific optimal budget allocation in an online manner, as would be needed by any dataset builder starting a new annotation campaign. We describe our method in the subsequent sections. For clarity, we focus on image segmentation and assume two kinds of annotations are possible: strong annotations as segmentation labels and weak annotations as image-level classification labels. Generalizing this formulation to other tasks or settings with more than two annotation types should follow directly. ### Problem formulation Let \(p_{\text{data}}(\mathbf{x})\) be the distribution of training images for which we have no annotations initially. Each training image \(\mathbf{x}\) can be annotated with a pixel-wise segmentation labeling \((\mathbf{x},\mathbf{y})\sim p_{\text{data}}(\mathbf{x})p_{\text{sgm}}(\mathbf{y}\mid\mathbf{x})\) or an image-wise classification annotation \((\mathbf{x},c)\sim p_{\text{data}}(\mathbf{x})p_{\text{cls}}(c\mid\mathbf{x})\). Sampling from the distributions \(p_{\text{cls}}\) and \(p_{\text{sgm}}\) represents the task of manually annotating the image and has associated costs of \(\alpha_{\text{c}}>0\) and \(\alpha_{\text{s}}>0\), respectively. Supported by previous work [33, 37, 4], we will assume that \(\alpha_{\text{s}}\gg\alpha_{\text{c}}\).
By sampling \(C\) classifications from \(p_{\text{cls}}\) and \(S\) segmentations from \(p_{\text{sgm}}\), we can build an annotated training dataset \(\mathcal{T}=(\mathcal{T}_{c},\mathcal{T}_{s})\sim(p_{\text{cls}}^{C},p_{\text{sgm}}^{S})\). The dataset \(\mathcal{T}\) then has an annotation cost, \[\alpha_{\text{c}}C+\alpha_{\text{s}}S, \tag{1}\] which we assume to be bounded by an upper limit, or _budget_, \(B\). To annotate \(\mathcal{T}\), however, we can choose different _allocation strategies_, or combinations of \(C\) and \(S\), that have different costs and that yield different segmentation model performances. The utility \(u\) of an allocation strategy \((C,S)\) is the expected performance of a model trained with datasets that follow that strategy, \[u(C,S)=\mathbb{E}_{(\mathcal{T}_{c},\mathcal{T}_{s})\sim(p_{\text{cls}}^{C},p_{\text{sgm}}^{S})}\left[m(\mathcal{T}_{c},\mathcal{T}_{s})\right], \tag{2}\] where \(m(\mathcal{T}_{c},\mathcal{T}_{s})\) is the performance score (_e.g_., Dice score, IoU) of a segmentation model trained with datasets (\(\mathcal{T}_{c},\mathcal{T}_{s}\)) and evaluated on a separate fixed test dataset. Note that in contrast to Active Learning, the utility is defined over the set of strategies \((C,S)\) and not over the individual samples of a fixed training set. This is motivated by our aim to estimate the performance of the annotation strategy \((C,S)\) and not the ensuing specific training dataset. Our goal then is to find the annotation strategy that maximizes the expected performance constrained to a budget \(B\), \[\begin{split}\max_{(C,S)\in\mathbb{N}^{2}}\quad& u(C,S),\\ \text{s.t.}\quad&\alpha_{c}C+\alpha_{s}S\leq B.\end{split} \tag{3}\] In the following, we describe how we optimize Eq. (3). ### Utility model As defined in Eq. (2), the utility function, \(u\), marginalizes over all possible training sets, which is intractable to compute in practice. To overcome this computational challenge, we approximate \(u\) with a collection \(\mathcal{M}\) of discrete samples, where each sample \(m\in\mathcal{M}\) is a tuple containing an allocation strategy \((C,S)\) and the estimated score \(m(\mathcal{T}_{c},\mathcal{T}_{s})\) obtained for a dataset sampled with that allocation strategy. To build \(\mathcal{M}\), one could simply sample a candidate strategy \((C^{\prime},S^{\prime})\), annotate a dataset \((\mathcal{T}_{c}^{\prime},\mathcal{T}_{s}^{\prime})\sim(p_{\text{cls}}^{C^{\prime}},p_{\text{sgm}}^{S^{\prime}})\), and measure its performance. However, this would imply annotating for different potential budgets and is thus infeasible in practice. Instead, a practical alternative is to leverage previously annotated data \((\mathcal{T}_{c},\mathcal{T}_{s})\). For each sampled strategy \((C^{\prime},S^{\prime})\), we build the corresponding dataset \((\mathcal{T}_{c}^{\prime},\mathcal{T}_{s}^{\prime})\) by taking random samples from the already annotated data according to the strategy. While this procedure, formalized in Alg. 1, leads to biased samples, we empirically found this bias to have a minor impact on the final strategies compared to estimations with unbiased sampling. While \(\mathcal{M}\) provides an estimation of \(u\) as a set of discrete locations, we generalize these estimations to the entire space of strategies by fitting a Gaussian process (GP) to the samples in \(\mathcal{M}\). The Gaussian process, \(\mathcal{GP}(\mu,k)\), is parameterized by a suitable mean function \(\mu\) and covariance function \(k\). In our case, we use the mean function,
In our case, we use the mean function,
\[\mu(C,S)=\gamma_{c}\log(\beta_{c}C+1)+\gamma_{s}\log(\beta_{s}S+1), \tag{4}\]
which accounts for the fact that the segmentation performance increases logarithmically with the volume of the training data [47] and that each annotation type has a different rate of performance growth. Similarly, the covariance \(k\) is a combination of two RBF kernels with different scales \(\ell_{c}\), \(\ell_{s}\) for each annotation type,
\[k\left((C,S),(C^{\prime},S^{\prime})\right)=\sigma^{2}e^{-\frac{(C-C^{\prime})^{2}}{2\ell_{c}^{2}}}e^{-\frac{(S-S^{\prime})^{2}}{2\ell_{s}^{2}}}. \tag{5}\]
The values \(\gamma_{c}\), \(\beta_{c}\), \(\gamma_{s}\), \(\beta_{s}\) from the mean, the length scales \(\ell_{c}\), \(\ell_{s}\), and the amplitude \(\sigma\) from the covariance are trainable parameters of the GP.
```
1:  function BuildUtilitySamples(\(\mathcal{T}_{c},\mathcal{T}_{s}\))
2:    \(C\leftarrow|\mathcal{T}_{c}|\), \(S\leftarrow|\mathcal{T}_{s}|\)
3:    \(\mathcal{M}\leftarrow\{((C,S),m(\mathcal{T}_{c},\mathcal{T}_{s}))\}\)  \(\triangleright\) Add sample with all the available data
4:    repeat \(M-1\) times
5:      Sample \((C^{\prime},S^{\prime})\in[0,C]\times[0,S]\)
6:      \(\mathcal{T}_{c}^{\prime}\leftarrow\{C^{\prime}\text{ elements sampled from }\mathcal{T}_{c}\}\)
7:      \(\mathcal{T}_{s}^{\prime}\leftarrow\{S^{\prime}\text{ elements sampled from }\mathcal{T}_{s}\}\)
8:      \(\mathcal{M}\leftarrow\mathcal{M}\cup((C^{\prime},S^{\prime}),m(\mathcal{T}_{c}^{\prime},\mathcal{T}_{s}^{\prime}))\)
9:    end repeat
10:  end function
```
**Algorithm 1** Build utility samples from annotated data
The trained GP models a distribution over utility functions, \(u\sim\mathcal{GP}(\mu,k)\), that are plausible under the samples \(\mathcal{M}\). This distribution represents not only the expected utility, but also its uncertainty in different areas of the strategy space. Sampling just a single \(u\) from the GP to solve Eq. (3) would thus be suboptimal. For this reason, we substitute the utility \(u\) in Eq. (3) by a surrogate function \(\hat{u}\) that trades off exploitation and exploration, thus incorporating uncertainty information into the optimization problem. Following a Bayesian optimization approach [21], we choose \(\hat{u}\) to be the expected improvement (EI),
\[\hat{u}(C,S)=\mathbb{E}_{u\sim\mathcal{GP}}[\max\{u(C,S)-m^{*},0\}], \tag{6}\]
where \(m^{*}\) is the current maximum point.
### Optimization
Training the GP requires annotated data to build the set \(\mathcal{M}\), which in turn relies on an annotation strategy that we are trying to find, thereby implying a circular dependency. We address this circular dependency by optimizing Eq. (3) in an iterative manner. Our algorithm, shown in Alg. 2, allocates the available budget \(B\) in a fixed number of adaptive installments, alternating between data annotation with the current strategy, GP fitting, and strategy selection for the next budget installment. More specifically, our method starts with an initial strategy \((C_{0},S_{0})\) with associated cost \(B_{0}\). At each iteration \(t\), new data is annotated according to the current strategy \((C_{t},S_{t})\) so that the sets of annotated data \((\mathcal{T}_{c},\mathcal{T}_{s})\) contain \(C_{t}\) classification and \(S_{t}\) segmentation annotations, respectively. From the available annotated data \((\mathcal{T}_{c},\mathcal{T}_{s})\), we extract new samples for \(\mathcal{M}\) and fit the GP, which defines the surrogate function \(\hat{u}_{t}\).
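Before continuing with the iterative procedure, the surrogate can be made concrete with a minimal NumPy sketch of GP regression using the mean of Eq. (4), the kernel of Eq. (5), and the expected improvement of Eq. (6). It is an illustration under assumptions rather than the authors' implementation: the hyperparameters are taken as given instead of being fitted to \(\mathcal{M}\) by maximizing the marginal likelihood, a small noise term is added for numerical stability, and all function and variable names are ours.
```
import numpy as np
from scipy.stats import norm

def mean_fn(C, S, gamma_c, beta_c, gamma_s, beta_s):
    # Eq. (4): logarithmic growth with a separate rate per annotation type
    return gamma_c * np.log(beta_c * C + 1.0) + gamma_s * np.log(beta_s * S + 1.0)

def kernel(X1, X2, sigma, l_c, l_s):
    # Eq. (5): product of one RBF kernel per annotation type
    dc = (X1[:, None, 0] - X2[None, :, 0]) / l_c
    ds = (X1[:, None, 1] - X2[None, :, 1]) / l_s
    return sigma**2 * np.exp(-0.5 * (dc**2 + ds**2))

def gp_posterior(X_train, y_train, X_query, params, noise=1e-4):
    gamma_c, beta_c, gamma_s, beta_s, sigma, l_c, l_s = params
    m_tr = mean_fn(X_train[:, 0], X_train[:, 1], gamma_c, beta_c, gamma_s, beta_s)
    m_q = mean_fn(X_query[:, 0], X_query[:, 1], gamma_c, beta_c, gamma_s, beta_s)
    K = kernel(X_train, X_train, sigma, l_c, l_s) + noise * np.eye(len(X_train))
    K_s = kernel(X_query, X_train, sigma, l_c, l_s)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train - m_tr))
    mu = m_q + K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = np.clip(np.diag(kernel(X_query, X_query, sigma, l_c, l_s)) - (v**2).sum(axis=0), 1e-12, None)
    return mu, np.sqrt(var)  # posterior mean and standard deviation per candidate strategy

def expected_improvement(mu, sd, best_score):
    # Eq. (6): closed-form EI of each candidate strategy over the current best score m*
    z = (mu - best_score) / sd
    return (mu - best_score) * norm.cdf(z) + sd * norm.pdf(z)
```
In practice, the seven GP parameters would be refitted on the current samples in \(\mathcal{M}\) at every iteration before the EI is evaluated on the grid of candidate strategies.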
The corresponding current maximal point \(m^{*}_{t}\) is set to be the maximum performance found so far, (_i.e_., the performance of the model trained with all the annotated data available at this iteration), \(m^{*}_{t}=m(\mathcal{T}_{c},\mathcal{T}_{s})\). Finally, this surrogate function is used to estimate the next best strategy \((C_{t+1},S_{t+1})\). We find a delta strategy \((\Delta C,\Delta S)\) that increases the expected improvement by a fixed fraction of its maximum possible value, \[\begin{split}\operatorname*{arg\,min}_{(\Delta C,\Delta S)\in \mathbb{N}^{2}}&\alpha_{c}(C_{t}+\Delta C)+\alpha_{s}(S_{t}+ \Delta S),\\ \text{s.t.}&\hat{u}_{t}(C_{t}+\Delta C,S_{t}+ \Delta S)\geq\frac{1}{T-t}\hat{u}^{*}_{t},\end{split} \tag{7}\] where \(T\) is the desired maximum number of iterations of the algorithm and \(\hat{u}^{*}_{t}\) is the maximum expected improvement that can be reached using the entire budget \(B\) for the current surrogate function \(\hat{u}_{t}\) according to Eq. (3). The found delta strategy defines the new strategy \((C_{t+1},S_{t+1})=(C_{t}+\Delta C,S_{t}+\Delta S)\) for the next iteration. The process is depicted in Fig. 2. Note that solving Eq. (7) requires finding \(\hat{u}^{*}_{t}\), which in turn requires solving Eq. (3). While solving two optimization problems may seem unnecessary, the solutions of both problems are in the Pareto front of strategies (_i.e_., the set of non-dominated strategies for which no other strategy has simultaneously smaller cost and larger or equal expected improvement). Given that the space of strategies is discrete, the elements of the Pareto front can be easily found in linear time by enumerating all possible strategies, computing their costs and expected improvements with \(\hat{u}_{t}\), and discarding the dominated elements. Given the Pareto front, the strategy with the maximum EI \(u^{*}_{t}\) and the strategy of minimum budget with EI larger than \(\frac{1}{T-t}\hat{u}^{*}_{t}\), which correspond to the solutions of Eq.(3) and Eq. (7), respectively, can be found in linear time. Figure 2: Illustration of proposed method. At a given iteration \(t\), \(C_{t}\) and \(S_{t}\) classification and segmentation annotations have already been collected (blue region, left panel) with a budget of \(B_{t}\). For the next annotation phase, the budget is increased to \(B_{t+1}\). To determine how many new classification and segmentation annotations to collect, \(M\) combinations of different quantities \((C^{(i)},S^{(i)})\) are gathered according to Alg. 1 to compute \(m(C^{(i)},S^{(i)})\). A Gaussian Process is then trained to estimate the utility of different combinations of annotation types (light blue area, left panel). From this, we infer \(\Delta C\) and \(\Delta S\) to select next by computing the combination that maximizes the expected improvement along the Pareto front given by the budget \(B_{2}\) (red point, left panel). The next iteration starts then with the new proportions (red point, right panel) and follows the same steps (see text and Alg. 2 for details). For illustration purposes, the costs are set here to \(\alpha_{c}=\alpha_{s}=1\). ## 4 Experimental setup To validate our approach, we evaluated it on four different datasets, while comparing its performance to a set of typical fixed budget allocation strategies. In addition, we explore the impact of different hyper parameters on the overall performance of the method. 
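Returning briefly to the selection step of the method, the Pareto-front enumeration and the minimum-cost solution of Eq. (7) described above can be sketched in a few lines (see below). This is a hedged illustration with our own names and a simple fall-back to the maximum-EI strategy; it assumes the candidate strategies have already been scored with the expected improvement of the current GP.
```
import numpy as np

def select_next_strategy(strategies, costs, ei, budget, frac):
    """strategies: (N, 2) integer array of (C, S) candidates; costs, ei: length-N arrays.
    frac = 1 / (T - t), the required fraction of the best EI reachable within the budget."""
    feasible = costs <= budget
    ei_star = ei[feasible].max()               # best EI with the full budget (Eq. (3) on u-hat)
    # Pareto front in (cost, EI): sweep by increasing cost, keep strictly improving EI
    front, best_ei = [], -np.inf
    for i in np.argsort(costs):
        if ei[i] > best_ei:
            front.append(i)
            best_ei = ei[i]
    # cheapest non-dominated strategy meeting the EI threshold of Eq. (7)
    for i in front:
        if costs[i] <= budget and ei[i] >= frac * ei_star:
            return tuple(strategies[i])
    return tuple(strategies[front[-1]])        # fall back to the maximum-EI strategy
```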
### Datasets We chose a collection of datasets with different image modalities, including a medical dataset as they often suffer from data and annotation scarcity. In this context, they represent a typical new application domain where our method could be particularly helpful. In each case, we enumerate the number of images for which classification or segmentation images can be sampled by a method: **Augmented PASCAL VOC 2012 [15]:**: 5'717 classification and 10'582 segmentation natural images with 21 classes for training. The validation sets contain 1'449 segmented images. **SUIM [19]:**: training set consists of 1'525 underwater images with annotations for 8 classes. For evaluation, we used a separate split of 110 additional images. The classification labels were estimated from the segmentation ground-truth as a multi-label problem by setting the class label to 1 if the segmentation map contained at least one pixel assigned to that class. **Cityscapes [11]:**: 2'975 annotated images for both classification and segmentation are available for training. We test on the official Cityscapes validation set, which contains 500 images. **OCT:**: 22'723 Optical Coherence Tomography (OCT) images with classification annotations and 1,002 images with pixel-wise annotations corresponding to 4 different types of retinal fluid for segmentation. We split the data into 902 training images and 100 test images. ### Baseline strategies. We compared our method to ten different _fixed_ budget allocation strategies. Each of these randomly sample images for classification and segmentation annotations according to a specified and fixed proportion. We denote these policies by the percentage dedicated to segmentation annotations: \(B_{0}\): \(50\%,55\%,\ldots,95\%\) with increases in 5%. For fair comparison, the strategies are computed from the budget \(B_{0}\). In addition, we consider an _estimated-best-fixed_ budget allocation strategy, whereby the method estimates what fixed budget should be used for a given dataset. This is done by using the initial budget \(B_{0}\) to compute the best performing fixed strategy (mentioned above) and then using this fixed strategy for the annotation campaign until budget \(B\) is reached. This strategy represents an individual that chooses to explore all fixed strategies for an initial small budget and then exploit it. ### Implementation details. **Weakly supervised segmentation model:** To train a segmentation model that uses both segmentation and classifications, we first train the models with the weakly-annotated data \(\mathcal{T}_{c}\) until convergence and then with the segmentation data \(\mathcal{T}_{s}\). We use the U-Net segmentation model [42] for OCT, and the DeepLabv3 model [10] with a ResNet50 backbone on the SUIM, PASCAL, and Cityscapes. For the U-Net, a classification head is appended at the end of the encoding module for the classification task. For the DeepLab-like models, we train the entire backbone on the classification task and then add the ASPP head for segmentation. In all cases, we use the cross-entropy loss for classification and the average of the Dice loss and the cross-Entropy loss for segmentation. While we choose this training strategy for its simplicity, other cross-task or weakly supervised alternatives could have been used as well [2, 40]. Additional details are provided in the supplementary materials. 
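As a side note, the multi-label image-level annotations described above for SUIM (a class is marked present if the segmentation map contains at least one pixel of that class) can be derived with a few lines of code; the following is an illustrative sketch with our own function names, not the authors' preprocessing script.
```
import numpy as np

def mask_to_multilabel(seg_mask, num_classes):
    """Image-level multi-label vector from a segmentation map: class c is set
    to 1 if at least one pixel of the map is assigned to class c."""
    labels = np.zeros(num_classes, dtype=np.float32)
    for c in np.unique(seg_mask):
        if 0 <= c < num_classes:
            labels[int(c)] = 1.0
    return labels

# toy example with 4 classes: classes 0, 2 and 3 appear in the mask
toy_mask = np.array([[0, 0, 2], [2, 3, 3]])
print(mask_to_multilabel(toy_mask, num_classes=4))   # [1. 0. 1. 1.]
```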
Note that all models are randomly initialized to maximize the impact of classification labels, as Imagenet pre-training shares a high resemblance to images in PASCAL and Cityscapes. Failing to do so would lead to classification training not adding significant information and may even hurt performance due to catastrophic forgetting [35].
**Hyperparameters:** We measured costs in terms of class-label equivalents setting \(\alpha_{c}=1\) and leaving only \(\alpha_{s}\) as a hyperparameter of our method. We set \(\alpha_{s}=12\) for all datasets following previous studies on crowdsourced annotations [4]. We predict the first GP surface with 8% of the dataset for both classification and segmentation. This quantity is reduced for OCT classification and VOC segmentation due to the high number of labels available. In all cases, we fixed the number of iterative steps to 8 and set the learning rate of the GP to 0.1.
## 5 Results
**Main results:** Figure 3 compares the performance achieved by our method against that of the different fixed strategies and the estimated best fixed strategy when using \(\alpha_{s}=12\) across the different datasets. From these results we can make a number of key observations. First, we can observe that no single fixed strategy is performing optimally across the different datasets evaluated. This is coherent with our initial claims and with the literature. Indeed, for OCT the best strategy appears to be one that samples 90% of segmentations, while this same policy performs poorly on the SUIM dataset. This implies that blindly using a fixed policy would on average not be very effective. Second, the estimated best-fixed strategy (in red) appears to do well initially and progressively loses competitiveness as the budget increases. This behaviour is expected as the estimated fixed strategy is that with \(B_{0}\) (the lowest budget), and becomes increasingly irrelevant as \(B\) grows. This is particularly clear on VOC where the best low-budget strategy allocates 95% of the budget to segmentation and still achieves superior performance up to \(B=12^{\prime}000\). However, that strategy drops below average performance with budgets greater than \(B=25^{\prime}000\). In the case of SUIM, the best-fixed strategy corresponds to 50% of the budget allocated to segmentation. Since the dataset contains only 1'525 segmentation samples, this strategy is not attainable with \(B>4000\). Last, we can observe that our method is consistently able to produce good performances, across both different budget quantities and datasets. We can also clearly see that our strategy is not guaranteed to be the top performing strategy, but that on average it performs well in different cases.
Figure 3: Performance of our method (orange line) on OCT, PASCAL VOC, SUIM and Cityscapes datasets. Shaded region is computed from three seeds. Fixed strategies are shown in blue. Red points show the _estimated-best-fixed_ strategy with \(B_{0}\). Labels expressed as percentage of the budget allocated to segmentation. Note that the first budget \(B\) fulfills \(B\gg B_{0}\) in all cases.
Figure 4: Mean of our method with \(\alpha_{s}=\{5,12,25,50\}\) on Cityscapes (orange line). Shaded region is computed from three seeds. Fixed strategies are shown in blue. Labels expressed as percentage of the budget allocated to segmentation.
Figure 5: Mean of our method when using different numbers of iteration steps \(\{3,5,8,10\}\). Results shown with three seeds.
At the same time, we notice that the performance of our approach on SUIM begins well and then drops after a budget of 3'500. This differs sharply from the other datasets. By observing the true budget-performance surface of SUIM and the other datasets (see Fig. 6), we can see that the SUIM surface does not grow logarithmically with the dataset size, while it does for Cityscapes (and the others too, see the Supplementary materials). This is relevant as our GP mean prior (4) assumes this relationship, and it explains why our approach fails when the true surface deviates from our GP mean form. While the use of adaptive, higher-order priors would be beneficial to deal with such cases, we leave this as future work.
### Sensitivity to \(\alpha_{s}\) and \(T\)
Different types of annotations or domains may have different ratios of cost. While we have fixed \(\alpha_{s}\) in our experiments across all datasets regardless of their domain, some datasets such as OCT and VOC require different expertise and domain knowledge to annotate and thus different \(\alpha_{s}\). In Fig. 4, we evaluate three additional values of \(\alpha_{s}\) (5, 25, and 50) besides the default \(\alpha_{s}=12\) and show the performance implication this has on our method and the baselines. For Cityscapes, we see that the method is robust regardless of the value of \(\alpha_{s}\), showing above average performance especially for \(\alpha_{s}=25\) and \(\alpha_{s}=50\). This behavior is reproduced in all four datasets (see Supplementary materials). Similarly, the number of steps \(T\) given to reach the final budget is a hyperparameter of our approach. While low \(T\) values could lead to poor solutions due to the unreliability of the GP far from the sampled region, higher \(T\) values (_i.e_., smaller steps) may exacerbate the intrinsic greedy nature of our method. We thus seek a trade-off between reliability and greediness. To study the sensitivity of the algorithm with respect to this variable, we show the behaviour of our method with different numbers of steps in Fig. 5. We see that lower \(T\) values greatly affect the reliability of the found strategy, especially for OCT and SUIM (blue line). However, as the number of steps increases, the variance of the strategy reduces sharply. We can therefore conclude that the method is robust to this hyperparameter as long as it is kept within reasonable ranges.
## 6 Conclusion
In this paper, we propose a novel approach to determine a dynamic annotation strategy for building segmentation datasets. We design an iterative process that identifies efficient dataset-specific combinations of weak annotations in the form of image-level labels and full segmentations. We show in our experiments that the best strategies are often dataset- and budget-dependent, and therefore the trivial approaches do not always produce the best results. Our method, however, is capable of adapting to different image domains and finds combinations of annotations that reach high performance levels. We show our method is robust to a number of hyperparameters and that it offers a good option for allocating annotation strategies.
Figure 6: Cityscapes (top) and SUIM (bottom) ground truth budget-segmentation surfaces. We note that segmentation performance grows logarithmically with training set size on Cityscapes (as well as OCT and VOC, see the Supplementary materials). This trend is not observed on the SUIM dataset.
2301.11001
Discovery of periodicities in two highly variable intermediate polars towards the Galactic Center
We discovered Fe $K_{\alpha}$ complex emission and pulsation in two highly variable sources (4XMM J174917.7--283329, 4XMM J174954.6--294336). The equivalent widths of 6.4 and 6.7 keV lines of 4XMM J174917.7--283329 are $99^{+84}_{-72}$ and $220^{+160}_{-140}$ eV, respectively. The continuum is fitted by a partially absorbed apec model with plasma temperature of $kT=13^{+10}_{-2}$ keV. The inferred mass of the white dwarf (WD) is $0.9^{+0.3}_{-0.2}\ M_{\odot}$. We detected pulsations with a period of $1212\pm3$ s and a pulsed fraction of $26\pm6\%$. The light curves of 4XMM J174954.6--294336 display asymmetric eclipse and dipping behaviour. To date, this is only the second intermediate polar (IP) that shows a total eclipse in X-rays. The spectrum of the sources is characterized by a power-law model with photon index $\Gamma=0.4\pm0.2$. The equivalent widths of the 6.4 keV and 6.7 keV iron lines are $171^{+99}_{-79}$ and $136^{+89}_{-81}$ eV, respectively. The continuum is described by emission from optically thin plasma with a temperature of $kT\sim35$ keV. The inferred mass of the WD is $1.1^{+0.2}_{-0.3}\ M_{\odot}$. We discovered coherent pulsations from the source with a period of $1002\pm2$ s. The pulsed fraction is $66\pm15\%$. The measured spin period, hard photon index, and equivalent width of the fluorescent Fe $K_{\alpha}$ line in both sources are consistent with the values found in IP. While 4XMM J174954.6--294336 was already previously classified as an IP, we also suggest 4XMM J174917.7--283329 as a new IP. The X-ray eclipses in 4XMM J174954.6--294336 are most likely caused by a low-mass companion star obscuring the central X-ray source. The asymmetry in the eclipse is likely caused by a thick bulge that intercepts the line of sight during the ingress phase but not during the egress phase located behind the WD along the line of sight.
Samaresh Mondal, Gabriele Ponti, Frank Haberl, Kaya Mori, Nanda Rea, Mark R. Morris, Sergio Campana, Konstantina Anastasopoulou
2023-01-26T09:08:42Z
http://arxiv.org/abs/2301.11001v1
# Discovery of periodicities in two highly variable intermediate polars towards the Galactic Center
###### Abstract
Context: Aims: We are performing a systematic analysis of X-ray point sources within 1\({}^{\circ}\).5 of the Galactic center using archival _XMM-Newton_ data. While doing so, we discovered Fe \(K_{\alpha}\) complex emission and pulsation in two highly variable sources (4XMM J174917.7-283329, 4XMM J174954.6-294336). In this work, we report the findings of the X-ray spectral and timing studies. Methods: We performed detailed spectral modeling of the sources and searched for pulsation in the light curves using Fourier timing analysis. We also searched for multi-wavelength counterparts for the characterization of the sources. Results: The X-ray spectrum of 4XMM J174917.7-283329 shows the presence of complex Fe K emission in the 6-7 keV band. The equivalent widths of the 6.4 and 6.7 keV lines are \(99^{+84}_{-72}\) and \(220^{+160}_{-140}\) eV, respectively. The continuum is fitted by a partially absorbed apec model with a plasma temperature of \(kT=13^{+10}_{-2}\) keV. The inferred mass of the white dwarf (WD) is \(0.9^{+0.3}_{-0.2}\)\(M_{\odot}\). We detected pulsations with a period of 1212 \(\pm\) 3 s and a pulsed fraction of 26 \(\pm\) 6%. The light curves of 4XMM J174954.6-294336 display asymmetric eclipse and dipping behaviour. To date, this is only the second intermediate polar that shows a total eclipse in X-rays. The spectrum of the source is characterized by a power-law model with photon index \(\Gamma=0.4\pm 0.2\). The equivalent widths of the fluorescent (6.4 keV) and Fe XXV (6.7 keV) iron lines are \(171^{+99}_{-79}\) and \(136^{+89}_{-81}\) eV, respectively. The continuum is described by emission from optically thin plasma with a temperature of \(kT\sim 35\) keV. The inferred mass of the WD is \(1.1^{+0.2}_{-0.3}\)\(M_{\odot}\). We discovered coherent pulsations from the source with a period of \(1002\pm 2\) s. The pulsed fraction is \(66\pm 15\)%. Conclusions: The spectral modeling indicates the presence of intervening clouds with high absorbing column density in front of both sources. The detected periodic modulations in the light curves are likely to be associated with the spin period of WDs in magnetic cataclysmic variables. The measured spin period, hard photon index, and equivalent width of the fluorescent Fe \(K_{\alpha}\) line are consistent with the values found in intermediate polars. While 4XMM J174954.6-294336 was already previously classified as an intermediate polar, we also suggest 4XMM J174917.7-283329 as a new intermediate polar. The X-ray eclipses in 4XMM J174954.6-294336 are most likely caused by a low-mass companion star obscuring the central X-ray source. The asymmetry in the eclipse is likely caused by a thick bulge that intercepts the line of sight during the ingress phase but not during the egress phase, when it is located behind the WD along the line of sight.
## 1 Introduction
Accreting white dwarf (WD) binaries are abundant in our universe (see Mukai 2017, for a recent review). WDs are a common endpoint of intermediate- and low-mass stars, and many stars are born in binary systems with small separations that go through one or more mass transfer phases. Accreting WD binaries are categorized into two types, mainly on the basis of the companion star, which feeds the central X-ray source via Roche lobe overflow. Cataclysmic variables (CVs) have an early-type main-sequence donor, and symbiotic systems have a late-type giant donor.
Understanding the long-term evolution of CVs is necessary for studying the progenitors of Type Ia supernovae and for future detection of gravitational wave sources by _LISA_ in the millihertz band (Meliani et al., 2000; Zou et al., 2020). Further, CVs are categorized into two types, non-magnetic and magnetic. Most of the hard X-ray emission from the Galactic center (GC) is expected to be produced by magnetic CVs (Revnivtsev et al., 2009; Hong et al., 2009). In magnetic CVs, the matter from the companion star is funneled through the magnetic field lines to the polar regions of the WD (Cropper, 1990; Patterson, 1994). The in-falling material reaches a supersonic speed of 3000-10000 km s\({}^{-1}\), creating a shock front above the star and emitting thermal X-rays (Aizu, 1973). There are two types of magnetic CVs: intermediate polars (IPs) and polars. IPs have a non-synchronous orbit with a WD surface magnetic field strength of \(\sim\)0.1-10 MG; they emit an ample amount of hard X-rays (20-40 keV). Polars are magnetically locked binary systems that have synchronized orbits with a strong magnetic field of 10-200 MG. Polars have softer X-ray spectra, \(kT\sim 5-10\) keV, due to faster cyclotron cooling (Mukai 2017). A large number of CVs were detected through all-sky surveys such as performed by _ROSAT_(Beuermann et al. 1999), _INTEGRAL_(Barlow et al. 2006) and _Swift_-BAT (Baumgartner et al. 2013). The 77-month _Swift_-BAT catalogue, whose sky coverage is relatively uniform, lists around 81, of which roughly half are confirmed to be IPs (Baumgartner et al. 2013). There are also deeper surveys focusing on a small part of the sky; for example, Pretorius et al. (2007) exploited the _ROSAT_ all-sky survey, which was deeper near the north ecliptic pole, to infer the space density of CVs. Many star clusters are also prime targets for finding CVs. Gosnell et al. (2012) discovered a candidate CV in the metal-rich open cluster NGC 6819 using _XMM-Newton_. Globular clusters have been considered to host a large number of CVs; for example, among the X-ray sources in 47 Tuc (Grindlay et al. 2001a), about 30 are considered likely CVs (Edmonds et al. 2003b,a) and in the Globular Cluster NGC 6397, nine likely to be CVs (Grindlay et al. 2001b). CVs have recently been discussed many times in the context of the GC (Krivonos et al. 2007; Revnivtsev et al. 2009; Hong 2012; Ponti et al. 2013; Perez et al. 2015; Hailey et al. 2016). The diffuse hard X-ray emission in the GC and disk (the latter is termed as the Galactic ridge X-ray emission, or RXE; Warwick et al. 1985) is from a population of unresolved, faint point sources, including CVs (Revnivtsev et al. 2009; Yamachi et al. 2016). However, the contribution from different types of sources and different types of CVs is still an open question. The only unambiguous way to constrain the CV population in the GC, ridge, and bulge is to analyze the individual X-ray point sources using spectra and light curves and identify them. Furthermore, estimating the X-ray-to-optical flux ratio by finding multi-wavelength counterparts can help to determine the source type. Muno et al. (2003) detected 2350 X-ray point sources in the \(17^{\prime}\times 17^{\prime}\) field around Sgr A\({}^{*}\) and found that more than half of the sources are very hard, with photon index \(\Gamma<1\), indicating magnetic CVs. Yuasa et al. 
(2012) fitted the spectra of the Galactic ridge and bulge regions with a two-component spectral model and found the hard spectral component consistent with magnetic CVs of average mass \(0.66^{+0.09}_{-0.07}\)\(M_{\odot}\). We are systematically studying X-ray point sources in the GC to understand the different types of X-ray binary populations. While doing this analysis, we found two relatively faint sources that display iron complex emission in their X-ray spectra and periodicities in their light curves. In this paper, we report the X-ray spectral modeling, periodicities, and characterization of the two X-ray point sources in the GC. The coordinates of the sources are \((\alpha,\delta)_{\rm J2000}=(17^{\rm h}\,49^{\rm m}\,17^{\rm s}.7,\,-28^{\circ}\,33^{\prime}\,29^{\prime\prime})\) and \((17^{\rm h}\,49^{\rm m}\,54^{\rm s}.6,\,-29^{\circ}\,43^{\prime}\,36^{\prime\prime})\); both of these sources are listed in the 4XMM-DR11 catalogue as 4XMM J174917.7-283329 and 4XMM J174954.6-294336 (Webb et al. 2020). 4XMM J174917.7-283329 is a newly identified point source with the detection of iron 6.4 and 6.7 keV lines and pulsations in the X-ray light curves. The source 4XMM J174954.6-294336 was first observed by _Chandra_ during the Bulge Latitude Survey and then detected in the Galactic Bulge Survey (Jonker et al. 2014); it was subsequently detected by _Swift_ and _XMM-Newton_. An association with a faint optical counterpart with an orbital period of 0.3587 days was identified by Udalski et al. (2012). A periodicity of 503.3 s was also detected in the optical light curve, which was interpreted as the spin period (Johnson et al. 2017). In this paper, we provide the actual spin period of the WD.
## 2 Observations and data reduction
This work is based on archival _XMM-Newton_ observations of the GC (Ponti et al. 2015, 2019). The details of the observations are listed in Table 1. The observation data files were processed using the _XMM-Newton_ (Jansen et al. 2001) Science Analysis System (SASv19.0.0)1. We used the SAS task barycenter to apply the barycentre correction to the event arrival times. We only selected events with PATTERN\(\leq 4\) and PATTERN\(\leq 12\) for the EPIC-pn and EPIC-MOS1/MOS2 detectors, respectively. The source and background products were extracted from circular regions of 25\({}^{\prime\prime}\) radius. The background products were extracted from a source-free area. The spectrum from each detector (pn, MOS1, MOS2) was grouped to have a minimum of 20 counts in each energy bin. The spectral fitting was performed in xspec (Arnaud 1996), and we applied the \(\chi^{2}\) statistic. The spectra from observations of the EPIC-pn, MOS1, and MOS2 detectors were fitted simultaneously. While fitting the data simultaneously, we add a constant term for cross-calibration uncertainties, fixed to unity for EPIC-pn, and allowed to vary for MOS1 and MOS2. The best-fit parameters are listed in Table 2 with the quoted errors at the 90% significance level. Footnote 1: [https://www.cosmos.esa.int/web/xmm-newton/sas](https://www.cosmos.esa.int/web/xmm-newton/sas)
## 3 Results
### X-ray spectra
We performed a detailed spectral analysis of the sources 4XMM J174917.7-283329 and 4XMM J174954.6-294336. We tested various phenomenological models to fit the spectra as well as a physical model to constrain the mass of the central WD. The results from the spectral fitting are described in the following subsections.
All the spectral fitting models are convolved with a Galactic absorption component tbabs with the photoionization cross sections and abundance values from Wilms et al. (2000).
\begin{table} \begin{tabular}{c c c c c} \hline \hline Name & ObsID & Date & Exposure & Total counts (pn/MOS1/MOS2) \\ \hline & 0410580401 & 22-09-2006 & 31.6 ks & -/-/32 \\ J174917.7 & 0410580501 & 26-09-2006 & 31.1 ks & -/-/30 \\ & 0801681301 & 07-10-2017 & 25.0 ks & 551/700/611 \\ \hline J174954.6 & 0801681401 & 07-10-2017 & 25.0 ks & 463/-/290 \\ & 0801683401 & 06-04-2018 & 26.0 ks & 800/316/317 \\ \hline \end{tabular} \end{table} Table 1: The details of the observations.
#### 3.1.1 4XMM J174917.7-283329
The source was observed three times by _XMM-Newton_. The observations done on 22-09-2006 (ObsID: 0410580401) and 26-09-2006 (ObsID: 0410580501) were in timing mode and pointed at IGR J17497-2821, so the source was outside the field of view of the EPIC-pn and MOS1 detectors. In the case of the MOS2 detector, the source was marginally detected due to the high background and low flux state of the source. Hence we used the ObsIDs 0410580401 and 0410580501 to estimate the flux of the source only. Later the same field was observed by _XMM-Newton_ on 07-10-2017 (ObsID: 0801681301), in which the source was brighter and clearly detected by all three detectors. We used spectra from this observation for our detailed spectral modeling. First, we fit the spectra with a simple absorbed power-law model. Fitting with this model indicates the source has a hard photon index with \(\Gamma=0.9\pm 0.2\) and shows the presence of excess emission in the 6-7 keV band, which is shown in panel B of Fig. 1. The resulting fit statistic is \(\chi^{2}=152\) for 108 degrees of freedom (d.o.f.). The excess between 6 and 7 keV is fitted by adding two Gaussian lines at 6.4 keV (\(\chi^{2}=146\) for 107 d.o.f. with 96.16% detection significance in an F-test) and 6.7 keV (\(\chi^{2}=136\) for 107 d.o.f. with 99.94% detection significance in an F-test). We did not find any improvement in the fit after adding another Gaussian at 6.9 keV for the Fe XXVI line. The improvement in the fit after adding the lines is shown in panel C of Fig. 1. We left the width of the lines free but found them to be consistent with being narrow; therefore, we froze the width of the Gaussian lines to zero. While adding the two Gaussians, the statistic of the spectral fit is significantly improved by \(\Delta\chi^{2}=22\) for two additional d.o.f. The equivalent widths and their 90% errors for the lines at 6.4 keV and 6.7 keV are \(99^{+84}_{-72}\) eV and \(220^{+160}_{-140}\) eV, respectively. Next, we add a partial covering to the model, which represents the emission being partially covered by an intervening medium in front of the source. The column density of the intervening medium is almost 5-9 times higher than the Galactic absorption. The Galactic absorption column density from the spectral fit is \(N_{\rm H}\sim(3\pm 0.7)\times 10^{22}\) cm\({}^{-2}\). Adding the partial covering further improves the fit with \(\Delta\chi^{2}=21\) for two additional d.o.f. The resultant fit is shown in panels A and D of Fig. 1. To estimate the temperature of the X-ray emitting plasma, we fit the spectra with the apec model together with the partial covering absorption. The apec model uses both the shape of the continuum and the line ratio of 6.7 keV and 6.9 keV to estimate the plasma temperature. Furthermore, the apec model represents the emission from the ionized material.
Therefore, it does not include the neutral iron \(K_{\alpha}\) line emission at 6.4 keV. Hence we add a Gaussian line at 6.4 keV to the apec model. Fitting the spectrum with this model provides a best-fit plasma temperature of \(kT=13^{+10}_{-2}\) keV. Next, we fit the data with a physically motivated model, mcvspec. The model is an evolution of the model presented in Saxton et al. (2005) by Mori et al. (in preparation) and is available in xspec. This model represents the emission from the surface of a WD. It only includes lines produced collisionally in an ionized, diffuse gas in the accretion column of the WD. Therefore, we again add a Gaussian at 6.4 keV for the neutral iron \(K_{\alpha}\) line to take into account the X-ray reflection off the WD surface or pre-shock region. While doing the fit with this model, we freeze the magnetic field \(B\) and the mass accretion flux \(\dot{m}\) to values of 10 MG and 5 g cm\({}^{-2}\) s\({}^{-1}\), respectively, which are the values typically found in IPs. The WD mass obtained by fitting this model is \(0.9^{+0.3}_{-0.2}\)\(M_{\odot}\).
Figure 1: The various spectral model fits to the spectra of 4XMM J174917.7–283329. The black, red, and green colors represent the spectra from the EPIC-pn, MOS1, and MOS2 detectors, respectively. Panel A represents the best-fit spectral model overlaid on the data points. The lower panels indicate the ratio plots obtained from the fitting of the various models. The various model components are: tbabs: Galactic absorption, tbpcf: absorption from a medium partially covering the X-ray source, po: power-law continuum, apec: emission from collisionally-ionized diffuse gas, mcvspec: continuum emission from the WD accretion column, g1: Gaussian line at 6.4 keV and g2: Gaussian line at 6.7 keV.
#### 3.1.2 4XMM J174954.6-294336
The field around 4XMM J174954.6-294336 was observed twice by _XMM-Newton_. The observation done on 07-10-2017 (ObsID: 0801681401) was performed in full frame mode; however, the source fell into the chip gap of the MOS1 detector; therefore, we report only the analysis of the EPIC-pn and MOS2 detectors. The source was also observed by _XMM-Newton_ on 06-04-2018 (ObsID: 0801683401) with all three detectors. We noticed that between the 2017 and 2018 observations, the source flux varied by a factor of 1.45. However, the shapes of the continua are very similar. Therefore, we fit the combined spectra of the 2017 and 2018 observations to gain statistics. Fitting an absorbed power-law model provides a best-fit photon index of \(\Gamma=0.4\pm 0.2\). Residuals around the iron line complex are clearly visible in the ratio plot, which is shown in panel B of Fig. 2. To resolve the excess in the 6-7 keV band, we add two Gaussians at 6.4 keV and 6.7 keV to the power-law model, which improves the fit by \(\Delta\chi^{2}=29\) for two additional d.o.f. We performed an F-test, which gives a detection significance of 99.98% and 99.86% for the 6.4 keV and 6.7 keV lines, respectively. Further, adding another Gaussian at 6.9 keV for the Fe XXVI line does not improve the fit. The equivalent widths of the lines at 6.4 keV and 6.7 keV are \(171^{+99}_{-79}\) eV and \(136^{+89}_{-81}\) eV, respectively. Next, we add a partial covering absorption model to the power-law continuum, which improves the fit marginally by \(\Delta\chi^{2}=7\) for two additional d.o.f.
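As an aside, the F-test detection significances quoted here and in Sect. 3.1.1 follow from the standard comparison of the \(\chi^{2}\) of nested models; a minimal sketch is shown below, using the Sect. 3.1.1 values for the 6.7 keV line as an example. The function name and the use of scipy are our own choices, not part of the authors' pipeline.
```
import numpy as np
from scipy.stats import f as f_dist

def ftest_significance(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """Standard F-test for adding components (e.g., a Gaussian line) to a spectral model."""
    d_dof = dof_simple - dof_complex
    F = ((chi2_simple - chi2_complex) / d_dof) / (chi2_complex / dof_complex)
    p = f_dist.sf(F, d_dof, dof_complex)      # probability the improvement arises by chance
    return F, 100.0 * (1.0 - p)               # detection significance in percent

# power law (chi2=152, 108 d.o.f.) vs. power law + 6.7 keV line (chi2=136, 107 d.o.f.)
print(ftest_significance(152.0, 108.0, 136.0, 107.0))   # ~99.9% significance
```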
However, we noticed while fitting with the apec and mcvspec continuum models that adding a partial covering absorption improves the fit by \(\Delta\chi^{2}=42\) and 44, respectively, for two additional d.o.f. Fitting with the apec model provides a best-fit plasma temperature of \(kT=35\pm 17\) keV. Furthermore, we fit the spectra with the mcvspec model. As done for 4XMM J174917.7-283329, while fitting with the mcvspec model we freeze \(B\) to 10 MG and \(\dot{m}\) to 5 g cm\({}^{-2}\) s\({}^{-1}\). The mass of the central compact object estimated from the mcvspec model is \(1.1^{+0.2}_{-0.3}\)\(M_{\odot}\).
### Periodicity search
We computed the power spectral densities (PSD) to search for periodicities in the 1-10 keV light curves. For our PSD analysis, we used EPIC-pn light curves only, as EPIC-pn has the shortest frame time, which allows us to probe a higher frequency range. Next, to refine the detected period and estimate the error, we search for the maximum \(\chi^{2}\) as a function of the period using the FTOOL efsearch. Then we used the refined period to fold the light curve and estimate the pulsed fraction in the 1-10 keV band. The pulsed fraction was estimated by using the formula \(\rm{PF}=\frac{F_{max}-F_{min}}{F_{max}+F_{min}}\times 100\%\), where \(\rm{F_{max}}\) and \(\rm{F_{min}}\) are the maximum and minimum of the normalized intensity, respectively.
#### 3.2.1 4XMM J174917.7-283329
The left top, middle, and bottom panels of Fig. 3 show the PSD, \(\chi^{2}\) search, and the folded light curve of the source 4XMM J174917.7-283329, respectively. The PSD shows a peak at frequency \(8.39\times 10^{-4}\) Hz. We used this frequency as an input in the efsearch algorithm. The refined period and its 90% error (\(\Delta\chi^{2}=2.7\)) is \(1212\pm 3\) s. Further, we folded the light curve with the given period, and the estimated pulsed fraction is \(26\pm 6\%\).
#### 3.2.2 4XMM J174954.6-294336
The right panels of Fig. 3 show the results obtained from the timing analysis of 4XMM J174954.6-294336 using ObsID 0801683401. The PSD shows a peak at \(9.98\times 10^{-4}\) Hz. The estimated period and error from the efsearch analysis are \(1002\pm 2\) s. The pulsed fraction of the source is \(66\pm 15\)%. We noticed that the eclipse duration of 2500 s at the end of the light curve introduces a spurious signal in the PSD at a frequency of \(2.38\times 10^{-3}\) Hz. Further, we analyzed the light curve from ObsID 0801681401 and did not find any clear signal in the PSD at the frequency corresponding to the 1002 s period. In this observation, we noticed that the source light curve shows dipping behaviour caused by absorption. Therefore, the pulsed signal is likely lost due to the variation introduced by the absorption. In fact, by computing the FFT using the initial 10 ks of this light curve, which is unaffected by the absorption, the PSD shows two peaks: one at \(9.13\times 10^{-4}\) Hz, which is consistent with the 1002 s period, and one at \(1.99\times 10^{-3}\) Hz, which is likely the first harmonic of the fundamental period (Fig. 4).
### The long-term X-ray variability
We constructed the long-term light curve spanning a time scale of ten years by searching for counterparts in the _Swift_ 2SXPS (Evans et al., 2020) and _Chandra_ CSC 2.0 (Evans et al., 2010) catalogues.
Figure 2: Same as Fig. 1 but for the source 4XMM J174954.6–294336. The black, red, and green data points are from the EPIC-pn, MOS1, and MOS2 detectors of ObsID 0801683401.
The blue and cyan data points are from the EPIC-pn and MOS2 detectors of ObsID 0801681401.
#### 3.3.1 4XMM J174917.7-283329
Figure 5 shows the long-term flux variation of the source 4XMM J174917.7-283329 (top panel). The source has been detected multiple times by _XMM-Newton_ and _Swift_ and displays a flux variation by a factor of six or more over the timescale of ten years.
#### 3.3.2 4XMM J174954.6-294336
Figure 5, bottom panel, shows the long-term light curve of the source 4XMM J174954.6-294336. The flux of the source varies by a factor of three. Figure 6 shows the EPIC-pn 1-10 keV, 1-4 keV, and 4-10 keV light curves. The light curves were binned with a time resolution of 500 s. The light curves show remarkable features, with a long-term variation and two eclipses near the end of the observations. Obscuration of the central X-ray source by the companion star likely causes the eclipses. In the first observation, the 1-4 keV band light curve (middle panel of Fig. 6) also shows very short-term variation associated with absorption due to dipping behaviour before entering the eclipses. During the dipping activity, the soft X-ray photons (1-4 keV) are absorbed more than the hard 4-10 keV photons, leading to an increase in the hardness ratio (bottom panel of Fig. 6); this indicates an absorption-related origin.
## 4 Discussion
### 4XMM J174917.7-283329
The hard X-ray spectrum of 4XMM J174917.7-283329 can be characterized by a power law with photon index \(\Gamma=0.9\pm 0.2\). The presence of excess emission in the 6-7 keV band can be attributed to the iron \(K_{\alpha}\) complex. The equivalent widths of the 6.4 keV and 6.7 keV lines are \(99^{+84}_{-72}\) eV and \(220^{+160}_{-140}\) eV, respectively. Our spectral fitting indicates the presence of an absorbing medium close to the source with \(N_{\rm H,pcf}\sim(1.5-3)\times 10^{23}\) cm\({}^{-2}\), which partially absorbs the incoming X-ray photons. The plasma temperature of the accreting material is \(kT=13^{+10}_{-2}\) keV. The central WD mass estimated from fitting a physical model is \(0.9^{+0.3}_{-0.2}\)\(M_{\odot}\). The Galactic neutral atomic hydrogen column density towards the source is \(1.1\times 10^{22}\) cm\({}^{-2}\) (\(N_{\rm H}=N_{\rm HI}+N_{\rm H2}\); Willingale et al. 2013), which is lower than the absorption column density obtained from the X-ray spectral fitting. For the first time, we detected the spin period of the WD, which is \(1212\pm 3\) s. For better positional accuracy, we searched for an X-ray counterpart in the _Chandra_ source catalogue; however, no _Chandra_ observation of this region has been performed so far. Two possible _Gaia_ counterparts were found within 0.05' from the _XMM-Newton_ position, with \(G_{\rm mag}\) of 20.48 and 18.96. Both _Gaia_ sources have a similar parallax of 0.48 mas, which translates to a distance of 2.08 kpc. The X-ray source flux varied by a factor of six over a time scale of ten years. The 2-10 keV luminosity variation of the source is (1-6)\(\times 10^{32}\) erg s\({}^{-1}\).
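As a quick cross-check of the simple quantities quoted above (the parallax-to-distance conversion, the luminosity scale, and the pulsed-fraction definition of Sect. 3.2), the following short sketch is our own illustration; in particular, the 2-10 keV flux value used in the example is purely illustrative, since the paper quotes luminosities rather than fluxes.
```
import numpy as np

PC_CM = 3.0857e18                      # centimetres per parsec

def parallax_to_distance_kpc(parallax_mas):
    # naive inversion d [kpc] = 1 / parallax [mas]; adequate for a rough estimate
    return 1.0 / parallax_mas

def luminosity_cgs(flux_cgs, distance_kpc):
    # isotropic luminosity L = 4 * pi * d^2 * F
    d_cm = distance_kpc * 1.0e3 * PC_CM
    return 4.0 * np.pi * d_cm**2 * flux_cgs

def pulsed_fraction(folded_profile):
    # PF = (F_max - F_min) / (F_max + F_min), from the folded pulse profile
    fmax, fmin = np.max(folded_profile), np.min(folded_profile)
    return (fmax - fmin) / (fmax + fmin)

print(parallax_to_distance_kpc(0.48))    # ~2.08 kpc, as quoted for the Gaia matches
print(luminosity_cgs(1.0e-12, 2.08))     # assumed flux of 1e-12 erg/s/cm^2 -> ~5e32 erg/s
```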
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline tbabs*Model & \(N_{\rm H}\) & \(N_{\rm H,pcf}\) & \(pcf\) & \(\Gamma/kT/M\) & \(N_{1}\) & \(N_{\rm g1}\) & \(N_{\rm g2}\) & \(\chi^{2}/dof\) \\ & \(\times 10^{22}\) & \(\times 10^{22}\) & & & & \(\times 10^{-6}\) & \(\times 10^{-6}\) & \\ \hline \multicolumn{8}{c}{4XMM J174917.7–283329} \\ \hline po & \(3.1^{+0.6}_{-0.5}\) & & & \(0.9\pm 0.2\) & \(9^{+4}_{-2}\times 10^{-5}\) & & & 152/108 \\ \hline po+g1+g2 & \(3.3^{+0.6}_{-0.5}\) & & & \(1.0\pm 0.2\) & \(1.0^{+0.4}_{-0.3}\times 10^{-4}\) & \(2\pm 1\) & \(4\pm 2\) & 130/106 \\ \hline tbpcf*(po+g1+g2) & \(4\pm 1\) & \(27^{+17}_{-14}\) & \(0.8^{+0.1}_{-0.2}\) & \(2.3\pm 0.6\) & \(2^{+6}_{-1}\times 10^{-3}\) & \(2^{+2}_{-1}\) & \(4\pm 3\) & 109/104 \\ \hline tbpcf*(apec+g1) & \(2.9\pm 0.7\) & \(15^{+11}_{-6}\) & \(0.6\pm 0.1\) & \(13^{+10}_{-2}\) & \(1.2\pm 0.2\times 10^{-3}\) & \(2\pm 1\) & & 117/105 \\ \hline tbpcf(mcvspec+g1) & \(3.0^{+0.6}_{-0.7}\) & \(16^{+11}_{-6}\) & \(0.6\pm 0.1\) & \(0.9^{+0.3}_{-0.2}\) & \(1.3^{+0.5}_{-0.4}\times 10^{4}\) & \(2\pm 1\) & & 113/105 \\ \hline \multicolumn{8}{c}{4XMM J174954.6–294336} \\ \hline po & \(2.8^{+0.7}_{-0.6}\) & & & \(0.4\pm 0.2\) & \(4^{+2}_{-1}\times 10^{-5}\) & & & 209/149 \\ \hline po+g1+g2 & \(3.0^{+0.8}_{-0.6}\) & & & \(0.6\pm 0.2\) & \(4\pm 1\times 10^{-5}\) & \(3\pm 1\) & \(3\pm 2\) & 180/147 \\ \hline tbpcf*(po+g1+g2) & \(2^{+1}_{-2}\) & \(9^{+13}_{-6}\) & \(0.6\pm 0.3\) & \(1.1^{+0.6}_{-0.4}\) & \(1.2^{+2.7}_{-0.7}\times 10^{-4}\) & \(4\pm 2\) & \(3\pm 2\) & 173/145 \\ \hline tbpcf*(apec+g1) & \(3.0^{+0.9}_{-1.0}\) & \(16^{+12}_{-6}\) & \(0.7\pm 0.1\) & \(35\pm 16\) & \(1.2^{+0.3}_{-0.3}\times 10^{-3}\) & \(4\pm 2\) & & 182/146 \\ \hline tbpcf(mcvspec+g1) & \(3.0^{+0.9}_{-1.0}\) & \(15^{+11}_{-5}\) & \(0.7\pm 0.1\) & \(1.1^{+0.2}_{-0.3}\) & \(9^{+4}_{-5}\times 10^{3}\) & \(4\pm 2\) & & 177/146 \\ \hline \end{tabular} * **Notes.**\(N_{\rm H}\) is given in units of \(10^{22}\) cm\({}^{-2}\), \(kT\) in keV and \(M\) in \(M_{\odot}\). For the apec and mcvspec models, the metal abundance value is frozen to 1.0. In the mcvspec model, we freeze \(B\) and \(\dot{m}\) to 10 MG and 5 gm cm\({}^{-2}\) s\({}^{-1}\), typical values for low magnetized WDs. Due to a lack of good-quality data, we had to freeze the centroid of the Gaussian lines; otherwise, it takes random values while fitting. \end{table} Table 2: The best-fit parameters of the fitted models. ### 4xmm J174954.6-294336 The spectra of 4XMM J174954.6-294336 are characterized by a hard power-law with a photon index of \(\Gamma=0.4\pm 0.2\), which is typically seen from accreting WDs. Moreover, a partially absorbed optically thin plasma of temperature \(kT=35\pm 16\) keV provides an adequate fit to the spectra. In addition to that, the spectra display the presence of fluorescent 6.4 keV and ionized 6.7 keV lines. The equivalent widths of the lines are \(171^{+99}_{-79}\) eV and \(136^{+89}_{-81}\) eV for 6.4 keV and 6.7 keV, respectively. The 6.4 keV line originates from the reflection from the surface of the WD or from the pre-shock region in the accretion column and typically has an equivalent width of 150 eV (Ezuka & Ishida 1999). The X-ray light curve shows coherent pulsations with a period of 1002 \(\pm\) 2 s. The pulsation signal was suppressed in an earlier _XMM-Newton_ observation with ObsID 0801681401. This is due to the energy-dependent absorption dips (prominent in the 1-4 keV band, middle panel of Fig. 6), which dilutes the coherent pulsations. 
However, the pulsations were marginally detected in the initial one-third of that observation, which is unaffected by the dips. These dips are believed to be caused by photoelectric absorption by surrounding material. In the later observation, 0801683401, the dipping phenomenon is not present in the soft band 1-4 keV light curve before the source goes into the eclipse phase. This suggests the dips are highly irregular and variable from orbit to orbit. Similar dipping behaviour was also detected in other X-ray eclipsing sources such as dwarf novae (Mukai et al. 2009) and low-mass X-ray binaries (Diaz Trigo et al. 2006; Ponti et al. 2016). It is well established that the dipping phenomena are seen in high-inclination systems. The physical model for explaining this dipping behaviour is linked to the obscuration of the central X-ray source by absorbing material in the region where the stream of material from the companion star hits the outer rim of the accretion disk. This leads to a thickening of the disk rim with azimuth, generating a thick bulge where the stream hits the disk edge, and the dipping occurs when the bulge intercepts the line of sight to the central X-ray source (White & Mason 1985). On the other hand, there is another physical picture of the dipping phenomena in which the disk structure is fixed, and the dipping activity originates from the interaction of matter that has been left over from the stream above and below the accretion disk (Frank et al. 1987).
Figure 3: The top panels show the periodogram in Leahy normalization obtained from the EPIC-pn light curve of the source 4XMM J174917.7–283329 (left panel) and 4XMM J174954.6–294336 (right panel). The middle panels show the \(\chi^{2}\) analysis using the FTOOL **efsearch**. The bottom panels show the folded light curves.
Figure 4: The periodogram of 4XMM J174954.6–294336, obtained using the initial 10 ks of the observation with ObsID 0801681401. Two peaks were observed but not at a very high significance level.
Figure 5: The long-term flux variation of 4XMM J174917.7–283329 (top panel) and 4XMM J174954.6–294336 (bottom panel). Both sources show significant flux variability.
The source 4XMM J174954.6-294336 is classified as a nova-like variable (Ritter & Kolb 2003). The source was detected by _Chandra_ in the Galactic Bulge Survey (GBS) and is designated as CXOGBS J174954.5-294335 (Jonker et al. 2011, 2014). Udalski et al. (2012) did a systematic search for optical counterparts of GBS sources, and the source appeared in OGLE-IV fields. Two possible optical counterparts within 3.9\({}^{\prime\prime}\) from the _Chandra_ source position were found: a variable red giant with a period of 31.65 days with \(I_{\rm mag}=15.67\) and a fainter eclipsing binary with a period of 0.3587 days with \(I_{\rm mag}=17.98\). The _Chandra_ and _XMM-Newton_ source locations are consistent within the 2\(\sigma\) position uncertainty. The eclipsing binary system has an observed \(V-I\) color of 1.52. Britt et al. (2014) also did an optical search for the GBS sources using the Blanco 4 m Telescope at CTIO. An optical counterpart of \(r_{\rm mag}=19.21\) was found associated with the X-ray source. Their optical light curve shows aperiodic variability of 0.4 mag and an eclipse of almost one magnitude depth. Given the detection of the eclipses in the X-ray light curves, it is very likely that the eclipsing binary with the period of \(P_{\rm orb}=0.3587\) days is the actual optical counterpart of the X-ray source 4XMM J174954.6-294336. Johnson et al.
(2017) analyzed optical photometry data from DECam and OGLE. They obtained a similar orbital period to that of Udalski et al. (2012) and discovered a spin period of the WD of 503.3 s. The detected 1002 s X-ray periodicity is consistent with twice the optical period of 503.3 s. In the earlier _XMM-Newton_ observation (ObsID: 0801681401), peaks close to 1002 and 503 s were detected in the initial 10 ks of the light curve (Fig. 4). This indicates that the true spin period is 1002 s. Furthermore, Johnson et al. (2017) analyzed the data from _Chandra_ and detected a total X-ray eclipse. However, the _Chandra_ data do not have enough signal-to-noise to detect the asymmetric shape of the eclipses and the iron line complex. We searched for counterparts in the _Gaia_ catalogue (Gaia Collaboration et al., 2016, 2022). An optical source with _Gaia_ \(G_{\rm mag}=18.97\) is consistent in position with the eclipsing system. The estimated parallax obtained from the _Gaia_ data is \(0.61\pm 0.34\) mas, which translates into a distance to the source of \(\sim 1.64^{+2.06}_{-0.59}\) kpc. In each of the two _XMM-Newton_ observations, we detected an X-ray eclipse in which the count rate went to zero. In one observation, we detected a total X-ray eclipse; however, in the later observation, only the ingress phase was caught. So far, only a few accreting WDs are known to display eclipses in X-rays (Hellier, 1997; Schwope et al., 2001; Pandel et al., 2002; Ramsay & Cropper, 2007; Mukai et al., 2009), and 4XMM J174954.6-294336 is only the second IP after XY Ari (Hellier, 1997) that shows complete eclipses in X-rays. X-ray eclipses are a powerful diagnostic tool to constrain the geometry of the binary system. The duration of the eclipse ingress (the time interval between first and second contact) and egress (the time interval between third and fourth contact) is used to estimate the fractional area \(f\) of the X-ray emitting region on the WD surface. So far, \(f\) was constrained only for one IP (Hellier, 1997). Typically the ingress and egress times are of a few seconds. The ingress phase of 4XMM J174954.6-294336 lasted around 1500 s, which is much longer than previously found in eclipsing WDs. We detected one complete eclipse, which is asymmetric. The egress phase takes less than 500 s. Further, it is noticeable that the asymmetry is more pronounced in the soft 1-4 keV band than in the hard 4-10 keV band, suggesting an absorption-related origin. A similar asymmetric eclipse behaviour was seen in the eclipsing polar HU Aqr (Schwope et al., 2001), in which the ingress took longer because of the effect of the absorption dips discussed previously. At the same time, the egress is clean and lasts only 1.3 s. Asymmetric eclipses are more common in eclipsing high-mass X-ray binaries such as 4U 1700-37 (Haberl et al., 1989) and Vela X-1 (Haberl & White, 1990; Falanga et al., 2015). Falanga et al. (2015) studied a sample of bright high-mass X-ray binaries using data from _INTEGRAL_ and found that the asymmetric shape is seen more clearly in the soft (1.3-3, 3-5, 5-12 keV) bands than in the hard (40-150 keV) band. They suggest that the asymmetry is caused by an increase in local absorption column density due to accretion wakes (Blondin et al., 1990; Manousakis et al., 2012). During the egress phase, the wake is located behind the compact object along the line of sight, thus not leading to any apparent increase in the local absorption column density. Therefore the egress phase is clean and much shorter than the ingress phase.
Figure 6: The EPIC-pn light curve of the source 4XMM J174954.6–294336 with a 500 s time bin in various energy bands: 1–10 keV (top panels), 1–4 keV, 4–10 keV (middle panels), and hardness ratio plot in the bottom panels.
However, the companion of 4XMM J174954.6-294336 is unlikely to be a high-mass system, as Johnson et al. (2017) estimated the spectral type of the donor to be G3V-G5V
However, the companion of 4XMM J174954.6-294336 is unlikely to be a high mass system as Johnson et al. (2017) estimated the spectral type of the donor to be G3V-G5V Figure 6: The EPIC-pn light curve of the source 4XMM J174954.6–294336 with 500s time bin in various energy bands 1–10 keV (top panels), 1–4 keV, 4–10 keV (middle panels), and hardness ratio plot in bottom panels. from density-period relation, which is associated with a main sequence star of \(0.9-1.0\ M_{\odot}\). The estimated distance of 1.64 kpc to the source suggests a 2-10 keV luminosity of (1-4)\(\times 10^{32}\) erg s\({}^{-1}\). On the other hand, the mean Galactic absorption column density towards the source location is \(1.0\times 10^{22}\) cm\({}^{-2}\)(Willingale et al., 2013), which is lower but within a factor of two of the value obtained from the spectral fitting of the source. Optical measurements of a sample of 32 sources indicate the mean WD mass among CVs is \(0.83\pm 0.23\ M_{\odot}\)(Zorotovic et al., 2011). On the other hand, using _RXTE_ observations of 20 magnetic CVs, Ramsay (2000) derived a mean mass of \(0.85\pm 0.21\ M_{\odot}\) and \(0.80\pm 0.14\ M_{\odot}\), for IPs and polars, respectively. In recent years _NuSTAR_ observations have been effective in measuring mass due to the high energy coverage and sensitivity of the instrument (Hailey et al., 2016; Suleimanov et al., 2016, 2019; Shaw et al., 2020). Shaw et al. (2020) measured the mass of 19 IPs using _NuSTAR_ and found the mean mass to be \(0.77\pm 0.10\ M_{\odot}\). These studies suggest that CVs, IPs, and polars have similar masses but higher than the pre-CVs and isolated WDs, giving rise to a WD mass problem. The pre-CV population have mean mass of \(0.67\pm 0.21\ M_{\odot}\)(Zorotovic et al., 2011) and isolated WDs have a mean mass of \(0.53\pm 0.15\ M_{\odot}\)(Kepler et al., 2016). Understanding the mass distribution of accreting WDs is crucial in explaining the formation and evolution of magnetic and non-magnetic CVs. We obtained the mass of the WDs by fitting a physical spectral model to the spectra. For both sources, the estimated mass is consistent with the mean mass of CVs. While doing the spectral fit, we freeze the \(B\) and \(\dot{m}\) due to degeneracy; this may have some effect on the estimation of the mass. Few IPs in the GC and bulge regions are found to have masses above \(1\ M_{\odot}\) such as IGR J1807-4146 (\(1.06^{+0.19}_{-0.10}\ M_{\odot}\); Coughenour et al., 2022), 4XMM J174033.8-301501 (\(1.05^{+0.16}_{-0.21}\ M_{\odot}\); Mondal et al., 2022), and CXO J174517.0-321356.5 (\(1.1\pm 0.1\ M_{\odot}\); Vermette et al. in preparation). ## 5 Conclusions In this paper, we performed detailed spectral and timing studies of two highly variable X-ray sources located within \(1^{\circ}.5\) of the Galactic center. Furthermore, we characterize the sources using their multi-wavelength counterparts. The 1-10 keV spectra of 4XMM J174917.7-283329 can be characterized as emission from optically thin plasma with temperature \(kT=13^{+10}_{-2}\) keV. In addition to that, a partial covering absorption with column density much higher than the Galactic value is required to fit the spectrum. The partial covering can be inferred as absorption due to circumstellar gas located close to the source. We estimate the mass of the central WD as \(0.9^{+0.3}_{-0.2}\ M_{\odot}\). Our timing analysis revealed pulsations with a period of \(1212\pm 3\) s, and the long-term flux measurements suggest the source is highly variable. 
The hard X-ray spectrum of 4XMM J174954.6-294336 resembles in shape the typical spectra seen from accreting WDs. The source was already identified as an IP with an orbital period of 0.3587 days. The X-ray spectra are well fitted by a model of optically thin plasma of \(kT\sim 35\) keV. The estimated mass of the WD is \(1.1^{+0.2}_{-0.3}\ M_{\odot}\). We performed a Fourier timing analysis and detected pulsations with a period of \(1002\pm 2\) s. The long-term observations indicate flux variability by a factor of three; since these types of sources are naturally variable, a flux variation of this amplitude is expected. In addition, the short-term X-ray light curves display complete eclipses and absorption dips. Due to the limited statistical quality of the data and the small number of eclipses detected, a detailed phase-dependent study is not possible. Follow-up X-ray observations of 4XMM J174954.6-294336 covering more eclipses will help to constrain the binary system parameters. Furthermore, a detailed study of the eclipses has the potential to test the boundary layer picture of X-ray emission from accreting WDs. ###### Acknowledgements. SM, GP, and KA acknowledge financial support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program "HotMilk" (grant agreement No. 865637). GP acknowledges support from the Bando per il Finanziamento della Ricerca Fondamentale 2022 dell'Istituto Nazionale di Astrofisica (INAF): GO Large program. MRM acknowledges support from NASA under grant GOI-221338X to UCLA. NR is supported by the ERC Consolidator Grant "MAGNESIA" under grant agreement No. 817661, and is also partially supported by the program Unidad de Excelencia María de Maeztu CEX2020-00158-M. This work has made use of publicly available data from the HEASARC Online Service and of the _XMM-Newton_ Science Analysis System (SAS) developed by the European Space Agency (ESA). _Software:_ Python (Van Rossum & Drake 2009), Jupyter (Kluyver et al., 2016), NumPy (van der Walt et al., 2011; Harris et al., 2020), matplotlib (Hunter, 2007).
2306.13869
Coherent spore dispersion via drop-leaf interactions
The dispersion of plant pathogens, such as rust spores, is responsible for more than 20% of global yield loss annually, and poses a significant threat to human health. However, the release mechanics of pathogens from flexible plant surfaces into the canopy is not well understood. In this study, we investigated the interplay between leaf elasticity and raindrop momentum, revealing how it induces flow coherence and enhances spore transport with 2-10 times greater energy compared to impacts on stationary surfaces. We observed that a flexible leaf generates vortex dipoles, leading to a super-diffusive stream flow. We then developed a theoretical model that accurately predicted the average air flux from leaf edges and the vortex strength to be proportional the vibration speed of the leaves. With Lagrangian diagnostics, we further revealed the presence of hyperbolic and elliptical coherent structures around fluttering leaves, providing the dynamical description of spore transport. Our model demonstrated that a leaf aspect ratio (length/width) negatively correlates with dispersion, indicating that shorter and wider leaves promote greater pathogen spread. Additionally, we found that leaf rigidity positively correlates with dispersion due to damping effects. These mechanistic insights would help the construction of physically informed analytical models for improve local crop disease management.
Zixuan Wu, Saikat Basu, Seungho Kim, Mark Sorrells, Francisco J. Beron-Vera, Sunghwan Jung
2023-06-24T05:27:48Z
http://arxiv.org/abs/2306.13869v3
# Coherent spore dispersion via drop-leaf interaction ###### Abstract The dispersion of plant pathogens, such as rust spores, is responsible for more than 20% of global yield loss annually, and poses a significant threat to human health. However, the release mechanics of pathogens from flexible plant surfaces into the canopy is not well understood. In this study, we investigated the interplay between leaf elasticity and raindrop momentum, revealing how it induces flow coherence and enhances spore transport with 2-10 times greater energy compared to impacts on stationary surfaces. We observed that a flexible leaf generates vortex dipoles, leading to a super-diffusive stream flow. We then developed a theoretical model that accurately predicted the average air flux from leaf edges and the vortex strength to be proportional the vibration speed of the leaves. With Lagrangian diagnostics, we further revealed the presence of hyperbolic and elliptical coherent structures around Huttering leaves, providing the dynamical description of spore transport. Our model demonstrated that a leaf aspect ratio (length/width) negatively correlates with dispersion, indicating that shorter and wider leaves promote greater pathogen spread. Additionally, we found that leaf rigidity positively correlates with dispersion due to damping effects. These mechanistic insights would help the construction of physically informed analytical models for improve local crop disease management. ## I Introduction Plant pathogens (i.e., viruses, bacteria, oomycetes, and fungi) have inflicted devastating damage to fourteen major crop species that support the bulk of food production every year [1; 2; 3; 4; 5]. Specifically, biotrophic fungus species that cause commonly known rust diseases release microscopic airborne spores during the reproduction stage and execute the strategy of aerial long-distance dispersal (LDD) for intercontinental range expansion across thousands of kilometers [5]. This airborne nature of atmospheric transport is associated with hazards that traditional plant quarantine could not resolve [5]. From a local pathogen management perspective, more work on how environmental factors, such as raindrops, influence spore liberation can benefit understanding and stopping dispersal at its origin [6]. Ambient wind and rainfall have been experimentally shown to facilitate the liberation of bioaerosol through mechanical splashing and fragmentation of pathogen-bearing drops [1; 6; 7; 8; 9]. Local spore transport can be achieved by wet splashing of droplets with trapped particles below 100 \(\mu\)m [10]. However, larger droplets [11] cannot sustain airborne transport from drift and have less chances of escaping the plant canopy [12; 10; 13]. Recent work on dry dispersal from raindrop-induced vortex rings shows dispersal of rust spores away from the boundary layers of a wheat leaf [14]. However, the experiments simulate only an impact condition onto a rigid and stationary substrate. During heavy rainfall, raindrop impacts with high momentum can cause significant flapping of flexible leaves, shaped as a thin foil, generating curious flow structures regardless of the wetting conditions [15; 16; 17; 18; 19]. Works from the past have also shown that potential energy stored in plant structures is highly effective in bio-aerosol dispersion [20; 21]. This leads to the question of what role drop-leaf interactions play in dispersing bio-aerosols on the surface. 
In the present work, we studied the coupling of beam mechanics and flow dynamics to analyze the escape of spores from a vibrating leaf, triggered by raindrop impacts or ambient perturbation. We present organized particle dispersal patterns following drop impacts on leaf substrates with low flexural rigidity (\(10^{-4}\)-\(10^{-5}\) Nm\({}^{2}\)) [22]. Wheat leaves used in this study have bending rigidity measured at \(EI=0.9\pm 0.3\times 10^{-5}\) Nm\({}^{2}\). Cantilever vibration and field potential analysis are prescribed in the coupled vibration-vortex system on the 2D transverse plane. The mechanical details are analyzed via a parametric study with an artificial raindrop-leaf-particle system. To describe the influence of flow coherence on airborne particle transport, we apply Lagrangian diagnostics commonly adopted in geophysical transport [23; 24; 25] at the scale of the leaf's boundary layer, to reveal the hyperbolic and elliptical Lagrangian coherent structures (LCS) embedded in the impact-induced vortex system. Combining predictive modeling and Lagrangian metrics, we seek to reveal here the full dynamical picture that can be triggered from raindrop-leaf interactions alone, which delivers particles as parcels on "fluid conveyor belts". ## II Results ### Experiments Common wheat, _Tritcium aestivum_ (see Materials and Methods section for preparation details), is used as a representative species as it is one of the most common crops susceptible to rust infection [26; 14]. Wheat leaf samples are measured at width, \(b=10-20\) mm, length, \(L=150-200\) mm, and thickness, \(t=0.2-0.3\) mm (see details of wheat leaf growth and preparations in SI Appendix, section A). The drop impact experiment is conducted with a syringe pump (NE-1000, New Era Pump Systems) with DI water droplet of radius \(R_{d}=1.2-2.0\) mm, released at different heights \(H=0.020-1.20\) m onto a leaf/beam sample, as shown in Fig. 1\(A\), resulting in impact velocity, \(U_{d}=0.4-5.0\) m/s. The choice of \(R_{d}\) and \(U_{d}\) yields We\(=\rho_{d}U_{d}^{2}(2R_{d})/\gamma=33-1400\), which is a typical range for raindrop impacts [27; 28; 29]. The longitudinal leaf axis is defined in \(\hat{x}\), and the transverse leaf direction and vertical deflection are defined in \(\hat{y}\) and \(\hat{z}\). Side view (\(xz\)-plane) and top view (\(xy\)-plane) of the wheat sample are shown in Fig. 1\(A\). A uniform, thin layer (100-200 \(\mu\)m) of micro-particles is deposited on substrate surface as spore surrogates, as shown in Fig. 1\(A\) right panel inset. A singular drop impact is released at 10-20 mm from the substrate tip to trigger the first-mode, free-end substrate vibration. Other impact conditions (multiple impacts, asymmetric, off-tip impacts) initiate higher vibration modes and rotations that can be approximated as a super-position of the first-mode vibration and higher modes minor in magnitudes. Asymmetric impacts empirically shed smaller, daughter vortices minor to the primary generation. Therefore, the vortex dynamics in the first-mode vibration is the basis of dispersion that is focused here. Details of the variant impact conditions are characterized and summarized in SI Appendix, section B. Energy of the system is injected via impinging drop momentum, then converting into airflow energy via the elastic potentials of the beam. 
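As a quick check of the impact parameters quoted above, the sketch below converts a release height into an impact speed (free fall, air drag on the drop neglected) and evaluates the Weber number \(\mathrm{We}=\rho_{d}U_{d}^{2}(2R_{d})/\gamma\) for a few illustrative drop sizes. The surface tension value and the specific (H, R_d) pairings are assumptions made here for illustration; the combinations actually used are listed in the SI Appendix.

```python
import numpy as np

rho_d = 1000.0      # kg/m^3, water density
gamma = 0.072       # N/m, water-air surface tension (assumed, room temperature)
g = 9.81            # m/s^2

def impact_speed(H):
    """Impact speed from release height H, neglecting air drag on the drop."""
    return np.sqrt(2.0 * g * H)

def weber(U_d, R_d):
    """We = rho_d * U_d^2 * (2 R_d) / gamma, as defined in the text."""
    return rho_d * U_d**2 * (2.0 * R_d) / gamma

# illustrative (H, R_d) pairings within the reported ranges
for H, R_d in [(0.050, 1.6e-3), (0.30, 1.6e-3), (1.20, 2.0e-3)]:
    U_d = impact_speed(H)
    print(f"H = {H:5.3f} m -> U_d ~ {U_d:4.2f} m/s, "
          f"We ~ {weber(U_d, R_d):6.0f} (R_d = {R_d * 1e3:.1f} mm)")
```

These illustrative pairings land within the reported impact-speed range and span Weber numbers of order tens to above a thousand, consistent with the range quoted for natural raindrop impacts.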
For non-dimensional analysis, Reynolds number of the drop is defined as \(\text{Re}_{\text{d}}=U_{d}R_{d}/\nu_{d}\approx 10^{3}\sim 10^{4}\) where \(\nu_{d}\) is water kinematic viscosity taken as 8.9 \(\times\) 10\({}^{-7}\) m\({}^{2}\)/s. Reynolds number of the beam/leaf Figure 1: Vortex-induced particle dispersal on wheat leaf surfaces. **A.** Wheat leaf drop impact configurations in side view (left panel) and top view (right panel),and front view deposition schematics (inset). **B.** Side view image sequences of wheat leaf impact experiments with particle deposited, from \(\tau\in\) [0.5 1.0] (\(\tau=0\) at impact). The drop momentum and size are [\(U_{d}\), \(R_{d}\)]=[3.13 m/s, 1.6 mm]. Color coding indicates vorticity direction. **C.** Front view image sequences of leaf-induced dispersion from \(\tau\in\) [0.5 1.2]. Corresponding videos of **B, C** are in SI video 1. Scale bars are set at 50 mm for **A** right panel and 10 mm for **A** left panel and **B, C**. vibration is defined as \(\rm Re_{b}=\bar{V}_{b}L/\nu_{a}\approx\)\(10^{2}\sim\) 1.5 \(\times\)\(10^{3}\) where \(\nu_{a}\) is the air kinematic viscosity as 1.5 \(\times\)\(10^{-5}\) m\({}^{2}\)/s. Here, the averaged beam speed is defined as \(\bar{V}_{b}\approx\)\(2(\delta_{max}-\delta_{min})f\), where \(\delta_{max}\) and \(\delta_{min}\) are the maximum and minimum deflection of the leaf substrate; \(f\) is the first-mode natural frequency of vibration. Particle Stokes number is defined as \(\rm St=2/9(t_{p}/t_{f})\). \(t_{p}=\rho_{p}r_{p}^{2}/\mu_{a}\), is the particle relaxation time (Stokes time) where \(\rho_{p}\), \(r_{p}\), \(\mu_{a}\) are the particle density, radius, and the dynamics viscosity of air respectively. \(t_{f}=\bar{R}_{v}/\bar{V}_{b}\), is the characteristic time of the carrying flow where the average vortex radius \(\bar{R}_{v}\approx A\) and \(\bar{V}_{b}\) are chosen as characteristic length and velocity as the particles are dispersed via vortices. Digital particle image velocimetry (DPIV) with smoke particles are used to extract the carrying fluid (air) velocity and vorticity fields, \(\mathbf{u}(\mathbf{x},t)\) and \(\omega_{\mathbf{v}}(\mathbf{x},t)\). Particle tracking velocimetry with glass micro-particles and pollen's is conducted to extract particle trajectories (see details in Methods section and SI Appendix, section A). Figure 2: **A.** Flow trace visualization of dispersion on surrogate beam from \(\tau=0.00-3.25\). Corresponding videos is in SI video 2. Impact condition is [\(U_{d}\), \(R_{d}\)]=[1.72 m/s, 1.60 mm], on a \(L=\)80 mm, \(b=\)20 mm beam. **B.** Corresponding schematics of dispersion steps in **A. C.** Normalized mean square displacement of particles from the beam center over two cycles from \(\tau=0-2.25\), on log scale. It is normalized to zero at \(\tau=0.75\) (beginning of active dispersion) and one at \(\tau=3.25\) (the typical end of active dispersion by particles). **D.** Reynolds number of the particle dispersion across the range of \(\rm Re_{b}\). **E.** Vorticity field (\(\omega_{v}\)) plot of the upstroke and downstroke vortices at \(\tau=3/8-7/8\). **F.** Average horizontal velocity field \(\bar{V}_{y}\) (in \(\hat{y}\) direction) over a period of \(\tau=1-5\). The velocity vector fields include the vertical direction velocity. Impact condition is [\(U_{d}\), \(R_{d}\)]=[2.80 m/s, 1.60 mm]. **G.** Normalized circulation vs. \(\rm Re_{d}C_{M}C_{bR}C_{D1}C_{\nu}\). Different symbols correspond to different drop-beam conditions (see Methods section). 
Inset here shows \(\rm Re_{d}\) vs. \(\rm Re_{b}\) in experiments and theory. **H.** Reynolds number of the stream vs. \(\rm Re_{b}\). **I.** Normalized circulation (measured on left edge, normalized by the peak circulation) across time for different \(\rm Re_{b}\) system. Time periods are aligned at the maximum circulation time \(\tau_{vm}\). Scale bars are at 10 mm for all panels. ### Spore dispersion in impact-induced vibration For spore liberation, spores are initially hygroscopically loosened up at the mature reproductive stage, allowing further release [30]. During drop impacts, surface vortices are generated and the spreading drop collides dynamically with the spores with forces (\(\sim 10\) nN) above the inter-particle cohesion (\(\sim 0.7\) nN) [14], loosening spores for dispersion. Vibration generates further vortices at the two side edges and dislodges spores into surrounding vortices. The transport from leaf surface to vortices is discussed in detail in the SI Appendix, section C, where three mechanisms are discussed: impact drop collision, impact vortex diffusion, and edge vortex attraction. The following analysis focuses on post-detachment delivery right after entrance into the ambient vortex flow. Therefore, with such mechanisms above, initial impact and the first downstroke bring particles into the boundary layers and surrounding vortices at \(\tau=t/T=0-0.25\), with \(\tau=0\) defined at impact. Here, \(\tau\) is dimensionless time normalized by the time period \(T=1/f\). At \(\tau=0.25-0.50\), sudden change in acceleration leads to the shedding of the impact vortex ring along with a stroke-reversal (SR) vortex of the opposite circulation as a dipole pair. Similar vortex dynamics in flapping is documented in the literature [18; 31; 32]. The side view of such structure is visualized at \(\tau=0.5\) in Fig. 1\(B\), with front view in Fig. 1\(C\) at \(\tau=0.5\). The shed vortex dipole can be seen in vorticity fields in Fig. 2\(E\) at \(\tau=3/8\). During the subsequent upstroke motion, \(\tau=0.50-0.75\), another upstroke vortex is generated and follows the leaf substrate upward until \(\tau=0.75\) at \(\delta_{\rm max}\), the highest position of the substrate. This sequence is shown in Fig. 1_B, C_. Immediately after the substrate reaches the peak, \(\tau=\)0.75-1.0, similar stroke reversal shedding dynamics is initiated to complete the cycle. The upstroke and downstroke vortices form a counter-rotating dipole during shedding as shown in Fig. 1_B, C_ right panels, confirmed by vorticity field in Fig. 2\(E\) at \(\tau=7/8\). Preferential concentration of particles at certain regions is observed to develop, as particles are transported outward. This is shown in Fig. 1\(C\) at \(\tau=\)0.8-1.2, where particles form clustered structures as they expand outward in time. This is a clear indication of coherent flow development. To describe such coherent flows in the dynamics, we utilize the concept of Lagrangian coherent structure (LCS), a set of fluid parcels with attractive or repulsive properties for neighboring particles [23]. The growth of these coherent profiles enhances mixing, divides up flow regions and ejects particles in specific pathways. The repetition of the described shedding cycle, enabled by leaf elasticity, produces an outward flow stream with nested layers of LCS, in which the particle cluster grows and expands under a defined dynamical sequence. Therefore, detailed LCS diagnostics is needed and used in later section to reveal the delivery pathways. 
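Before turning to the parametric results, the dimensionless groups defined earlier (\(\mathrm{Re}_{d}\), \(\mathrm{Re}_{b}\), and St) are straightforward to evaluate; a minimal sketch for one drop-beam-particle combination is shown below. The deflection amplitude, vibration frequency, vortex radius, and air viscosity used here are representative assumed values, not measurements from this work; the particle properties follow Table 1.

```python
nu_d = 8.9e-7      # m^2/s, water kinematic viscosity (as in the text)
nu_a = 1.5e-5      # m^2/s, air kinematic viscosity (as in the text)
mu_a = 1.8e-5      # Pa s, air dynamic viscosity (assumed)

# drop (illustrative values within the reported ranges)
U_d, R_d = 3.0, 1.6e-3                    # m/s, m
Re_d = U_d * R_d / nu_d                   # ~5e3, within the quoted 1e3-1e4 range

# beam / leaf first-mode vibration (illustrative)
L, f = 0.08, 10.0                         # m, Hz (assumed)
delta_max, delta_min = 5e-3, -5e-3        # m, peak deflections (assumed)
V_b = 2.0 * (delta_max - delta_min) * f   # average beam speed, as defined in the text
Re_b = V_b * L / nu_a                     # ~1e3

# particle (pine pollen, density and radius from Table 1)
rho_p, r_p = 1.2e3, 22e-6                 # kg/m^3, m
t_p = rho_p * r_p**2 / mu_a               # Stokes relaxation time
R_v = 10e-3                               # m, average vortex radius ~ amplitude (assumed)
t_f = R_v / V_b                           # carrying-flow time scale
St = (2.0 / 9.0) * (t_p / t_f)

print(f"Re_d ~ {Re_d:.0f}, Re_b ~ {Re_b:.0f}, St ~ {St:.2f}")
```

With these assumed values the three groups fall inside the ranges quoted in the text (drop Reynolds number of a few thousand, beam Reynolds number of order a thousand, pollen Stokes number of order 0.1).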
The wake patterns under the Re\({}_{b}\) tested are in a transition regime between 2S and 2P [33], depending on the vibration amplitude. We primarily focus on the low-amplitude 2S cases while it should be noted that higher shedding modes exist. For the 2S scenario, flow asymmetry is observed in the shedding stream about the leaf width axis. Traditionally asymmetry is primarily induced by flow mechanics and beam geometry [34]. However, we empirically observe that asymmetry is introduced by two factors here. First, gravity deflection on drop and beam causes asymmetric vibration profile. Second, time separation of the peak vortex strength between the newer downstroke vortex and the older, decaying upstroke vortex, biases the shedding angel at 45\({}^{\circ}\) above width axis. This lifts the center shearing layer upward as shown in Fig. 1\(C\) at \(\tau=\)1.2. Therefore, asymmetric shedding is observed here and in later LCS analysis. In the following analysis, we first parametrically investigate the relationship between Re\({}_{d}\), Re\({}_{b}\), vorticity, and dispersion efficiency of the generated flow. A reduced-order free-end vibration model is built experimentally and theoretically with thin, poly-carbonate cantilever beams to simulate the first-mode leaf vibration. Wheat leaves have high aspect ratios \(C_{bL}=L/b=2-8\), which makes the thin-beam surrogate model appropriate (See discussion section for how the current modeling extends to lower \(C_{bL}\) leaf systems, unlike wheat). ### Vortex system and dispersion capacity Using a beam surrogate model, dispersion stream flow from vibrating surface is visualized experimentally as shown in Fig. 2\(A\) (see corresponding video in SI video 2). The corresponding schematics illustrating the dispersion process are presented in Fig. 2\(B\). These figures demonstrate the observed patterns of dipole shedding, which exhibit a bias towards the upper plane, as described in the previous section. To quantitatively analyze the dispersion, we calculated the mean square displacement (MSD) \(\langle\mathbf{x}^{2}\rangle\) of the particle clusters at different Re\({}_{b}\), as shown in Fig. 2\(C\). The normalized MDS is defined as \(\frac{\langle\mathbf{x}^{2}\rangle-\langle\mathbf{x}^{2}\rangle_{\tau=1.00}}{ \langle\mathbf{x}^{2}\rangle_{\tau=3.25}-\langle\mathbf{x}^{2}\rangle_{\tau=1.00}}\)). This highlights an active dispersion period that can be observed in Fig. 2\(A\). Details of MSD extraction can be found in SI Appendix, section D. The relation between Re\({}_{b}\) and average particle dispersion speed \(\bar{V}_{disp.}\), (non-dimensionalized as Re\({}_{disp.}=\bar{V}_{disp.}L/\nu_{a}\)), is also extracted and presented in Fig. 2\(D\). We approximated the dispersion speed \(\bar{V}_{disp.}\) as \(\bar{V}_{disp.}\approx\langle\mathbf{x}^{2}\rangle/\tau\), the overall average dispersion rate. A monotonically increasing trend is obtained empirically with beams of different rigidity, with \(\mathrm{Re}_{disp.}\approx\mathrm{Re}_{b}/2\) in a linear approximation. Extracting MSD in relation with time, \(\langle\mathbf{x}^{2}\rangle\sim t^{\alpha}\), \(\alpha\) is measured to be between 0.5-1.8, indicating a mixed diffusion and advection process for the range of \(\mathrm{Re}_{b}\approx\)800-1000, as distinguished by the color coding, the dispersion flow is consistently super-diffusive. The strengthening role of advection with \(\mathrm{Re}_{b}\) reassures that the beam vibration dictates dispersion kinetics. 
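The mean-square-displacement analysis described above reduces to a few lines of array arithmetic. The sketch below shows one way to compute the MSD of tracked particles about the beam center, apply the normalisation used for Fig. 2C, and fit the exponent \(\alpha\) in \(\langle\mathbf{x}^{2}\rangle\sim t^{\alpha}\) from a log-log slope. The normalisation epochs follow the definition in the text; the trajectory array at the bottom is a synthetic stand-in, not experimental data.

```python
import numpy as np

def msd_from_center(traj, center):
    """traj: (n_particles, n_times, 2) positions; center: (2,) beam-center position.
    Returns <|x - x_c|^2>(t), averaged over particles."""
    d2 = np.sum((traj - center) ** 2, axis=-1)          # (n_particles, n_times)
    return d2.mean(axis=0)

def normalize_msd(msd, tau, tau0=1.00, tau1=3.25):
    """Normalized MSD: 0 at tau0 (start of active dispersion), 1 at tau1 (its end)."""
    m0, m1 = np.interp(tau0, tau, msd), np.interp(tau1, tau, msd)
    return (msd - m0) / (m1 - m0)

def dispersion_exponent(msd, tau, window):
    """Fit <x^2> ~ t^alpha over an index window via a log-log least-squares slope."""
    sl = slice(*window)
    alpha, _ = np.polyfit(np.log(tau[sl]), np.log(msd[sl]), 1)
    return alpha

# --- synthetic example (replace with tracked positions from experiments) ---
rng = np.random.default_rng(0)
tau = np.linspace(0.5, 4.0, 200)                        # time in vibration periods
true_alpha = 1.4                                        # super-diffusive test signal
r = tau ** (true_alpha / 2.0)                           # radius so that r^2 ~ tau^alpha
theta = rng.uniform(0.0, 2.0 * np.pi, size=(50, 1))     # one direction per particle
traj = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)   # (50, 200, 2)

msd = msd_from_center(traj, center=np.zeros(2))
print("normalized MSD at tau = 2:", round(float(np.interp(2.0, tau, normalize_msd(msd, tau))), 3))
print("fitted alpha ~", round(float(dispersion_exponent(msd, tau, (20, 180))), 2))
```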
To understand the origin of this stream flow, average velocity fields over \(\tau\in[1.0~{}5.0]\) are extracted from DPIV, shown in Fig. 2\(F\), indicating increasing outward \(|\hat{y}|\) velocity, \(\bar{V}_{y}\), in the field. A cone-shaped advection corridor is observed with the average flow field, proving the existence of a vibration-generated stream flow that expands outward. We also observed edge flux zone denoted by high outward velocity near the two edges. A mechanical model is thus constructed based on 2D beam potential and drop-beam kinematics to model the average velocity magnitude of the edge flux zones with drop inertia, beam conditions, and the generated vorticity. We define here the complex coordinates on the \(yz\)-plane as \(\zeta=y+iz\), the complex velocity as \(\chi=V_{y}-iV_{z}\) and the complex potential as \(\Phi=\phi+i\psi\). By applying boundary condition \(V_{z,\pm}=V_{b}(t)\) on the plate, where \(V_{b}(t)\) and \(V_{z,\pm}\) are vertical beam velocity and solution for \(V_{z}\) directly above and below the beam, respectively, solution of \(V_{y}\), \(V_{z}\) on a thin vibrating beam is obtained as \(V_{y,\pm}=\pm V_{b}\frac{y}{\sqrt{(b^{2}-y^{2})}}\); \(V_{z,\pm}=V_{b}\). By calculating the circulation on the left edge vortex, we obtained \(\Gamma_{\mathrm{L}}=b\,V_{b}(t)\) (details of complex potential analysis is placed in SI Appendix, section D). We then couple it with the drop-beam interactions, with \(\delta(t)\sim(U_{d}/f)e^{\varsigma\omega t}\mathrm{sin}(\omega t)\)[16], where \(\omega\) and \(\varsigma\) are the 1\({}^{\mathrm{st}}\) mode natural frequency and a damping coefficient, respectively. Evaluating at \(\tau=0.5\) (maximum circulation over the damped vibrations), we obtain \[|\Gamma_{\mathrm{max}}|/\nu_{a}=\mathrm{Re}_{d}C_{M}C_{bR}C_{D1}C_{\nu}. \tag{1}\] Here, \(C_{M}=\frac{m_{d}}{2m_{d}+m_{b}}\), where \(m_{d}\) and \(m_{b}\) are the drop and beam mass, respectively; \(C_{bR}=b/R_{d}\) is the width-drop-radius size ratio; \(C_{D1}=\pi e^{-\pi\varsigma}\), a constant with damping coefficient; and \(C_{\nu}=\nu_{d}/\nu_{a}\), ratio of drop-air kinematic Figure 3: **A.** Schematics of the hyperbolic Lagrangian coherent structures and coherent vortex evolving over time period of \(\tau=0.75-1.25\). Particle trajectories are included in the evolution. **B.** Laveraged vorticity deviation (LAVD) scalar fields (unitless) for the first full downstroke from max beam position in \(+\hat{z}\) to min beam position in \(-\hat{z}\), at \(\tau=0.75-1.25\) (LAVD calculation details in SI Appendix, section E). \(\delta_{max}\) and \(\delta_{min}\) labels the maximum and minimum beam location repsectively and \(\delta_{0}\) labels the original beam position at \(t=0\). **C.** Backward finite-time Lyapunov exponent (b-FTLE) scalar fields (unit of \(\mathrm{frame}^{-1}=3000~{}\mathrm{s}^{-1}\)) for the same same time sequence ( calculation details in SI Appendix, section F). The attractive LCS are highlighted by high FTLE regions. Movie of the LVD sequence in SI video 3; movie of the FTLE sequence in SI video 4. The analyses are conducted on the same experiment, with integration periods are both \(\tau=-0.17\). **D.** Forward finite-time Lyapunov exponent (f-FTLE) scalar fields for the same same time sequence (calculation details in SI Appendix, section F). The repulsive LCS are highlighted by the high FTLE regions here. The integration period is \(\tau=0.17\) here. viscosity. The theoretical derivation is corroborated by experiments, as shown in Fig. 
2\(G\), in which circulations tested from different drop-beam conditions collapse onto the predictions. Reynolds number of the beam can be predicted as \(\mathrm{Re}_{b}=\mathrm{Re}_{d}C_{M}C_{LR}C_{D2}C_{\nu}\), where \(C_{LR}=L/R_{d}\) and \(C_{D2}=e^{-3\pi/2\varsigma}+e^{-\pi/2\varsigma}\). The relation is confirmed from the inset of Fig. 2\(G\). Therefore, we can also derive an average circulation strength over the first period as \(\frac{\Gamma}{\nu_{a}}=\frac{b\bar{V}_{b}}{\nu_{a}}=\mathrm{Re}_{b}\frac{C_{LR }}{C_{b}L}\). The stream flow originates in the \(v_{\theta}\) velocity component of the upstroke and downstroke vortices when they follow and shed off of the beam edge. These two counter-rotating vortices both provide an outward \(|\hat{y}|\) flux on the beam edges, resulting as the edge flux zone (in red) in Fig. 2\(F\). Therefore, to model the average stream flux speed on the edge, \(\bar{V}_{st}\), we assume two separated Rankine vortices and integrates the time-average \(\hat{y}\) flux as the sum of time-average angular velocity \(\bar{v}_{\theta}\): \(2\int\bar{v}_{\theta}dr=\frac{\Gamma}{\pi}\int_{0}^{R_{v}}\frac{r}{R_{v}^{2}} dr=b\bar{V}_{b}/(2\pi)\), where \(\bar{R}_{v}\) is the average radius of the circulation (see schematic for the edge flux modeling in SI Appendix, section D). This \(\hat{y}\) edge flux is approximated experimentally by integrating the average \(\bar{V}_{y}\) vertically around the edge flux zone shown in Fig. 2\(F\) as \(b\bar{V}_{st}=\int_{b/2}^{b/2}\bar{V}_{y}dl\). Integration line segment \(z\in[-b/2\ b/2]\) is chosen as it covers the edge flux zone well for all cases. A ratio of the corresponding Reynolds number, \(\mathrm{Re}_{st}=\bar{V}_{st}L/\nu_{a}\), to the beam Reynolds number becomes \(\mathrm{Re}_{st}/\mathrm{Re}_{b}=\bar{V}_{st}/\bar{V}_{b}=1/(2\pi)\approx 0.16\). Experimentally, the slope is obtained as 0.12 (Fig. 2_H_), a decent agreement considering variability in vortex locations. The linear relationship is corroborated by previous studies on jet stream in the longitudinal direction [34]. Lastly, the shed vortices show a rapid decay that can be approximated linearly, following a relation of \(\Gamma/\Gamma_{\mathrm{max}}=-d(\tau-\tau_{vm})\), in which \(\tau_{vm}\) is the time of peak circulation and \(d\) is a dimensionless decay rate (2.5\(\sim\)4.5) that decreases with increasing \(\mathrm{Re}_{b}\), as shown in Fig. 2\(I\). This is reasonable as faster stream flux reduces vortex annihilation. The vortices are created and get dissipated quickly in \(\tau<0.5\). Therefore, particle dispersion is carried out by the stream flow generated, via a defined dynamical process described by LCS in the next two sections, and not by individual traveling vortices. ### Spore expulsion by elliptical LCS To investigate the dynamics of dispersion, particularly the downstroke leading to active dispersion (\(\tau=0.75-1.25\)), two types of LCS, elliptical and hyperbolic Lagrangian coherent structures, are used. We utilized elliptical structures, or referred as rotationally coherent vorticies (RCV) [35] below, to objectively describe the vortex structure and its role in spore expulsion. For its diagnosis, Lagrangian averaged vorticity deviation (LAVD), an objective quantity defined by \[\mathrm{LAVD}_{t_{0}}^{t_{0}+\tau_{\tau}T}(\mathbf{x}_{0})=\int_{t_{0}}^{t_{0} +\tau_{\tau}T}|\Omega(F_{t_{0}}^{t}(\mathbf{x}_{0}),t)-\bar{\Omega}(t)|\,dt \tag{2}\] is used. 
The method objectively identifies the vortices in the unsteady flow, by finding the concentrated high-vorticity regions from integration of \(t_{0}\) to \(t_{0}+\tau_{i}T\), with \(t_{0}\) as the start of integration, and \(\tau_{i}\) the dimensionless integration period. \(\Omega(F_{t_{0}}^{t}(\mathbf{x}_{0}),t)\) denotes the vorticity of the fluid over the flow map \(F_{t_{0}}^{t}\), and \(\bar{\Omega}\) is the vorticity at time \(t\) averaged over the tracked fluid bulk. Empirically, \(|\tau_{i}|=0.15\sim 0.25\) is the integration time that captures the fluid structures in a cycle, as vortex growth and shedding occur within a quarter stroke \(\tau=1/4\). The resulting LAVD map is shown in Fig. 3\(B\) for the first downstroke \(\tau=0.75-1.25\) (see movie in SI movie S3). Boundaries of the coherent vortices are calculated and marked in Fig. 3\(B\) (black outlines). In order to characterize the expulsion flux from such vortex, we define a flux criterion \(\mathcal{F}\) to describe the inertial particle ejections in Eq. 3 for the fluid regions within said boundary, labeled as \(\mathcal{V}(t)\). (see full LAVD sequence and details of LAVD, flux calculations in SI Appendix, section E). \[\mathcal{F}\propto t_{p}\frac{1-R_{\rho}}{1+R_{\rho}/2}\int_{\mathcal{V}(t)}QdS. \tag{3}\] Q is the Okubo-Weiss criterion [36], defined here as \(\omega_{v}^{2}-S_{s}^{2}-S_{n}^{2}\). \(\omega_{v}\) is the relative vorticity, \(S_{s}\) is the shear strain, and \(S_{n}\) is the normal strain. Inside a Lagrangian vortex, One can expect \(Q>\)0 [36]. The flux calculation thus predicts a positive outward flux \(\mathcal{F}\) from vortex centers for inertial particles with density ratio \(R_{\rho}=\rho_{a}/\rho_{p}\ll 1\), where \(\rho_{a}\) is the air density. Indeed, as traditional coherent vortices tend to trap ideal tracers and do not expand, experiments with inertial particles here demonstrate strong particle dispersion behavior as the coherent vortex boundary expands, shown in Fig. 4\(C\) for \(\tau=0.57-0.75\). Therefore, the coherent vortices identified in Fig. 3 effectively serve as traveling sources of outward flux for spores around leaves. Strength of flux increases in proportion to the particle response time \(t_{p}\propto t_{f}St\), effectively the Stokes number, and \(\int QdS\). Such flux relation is experimentally validated in Fig. 4\(D\) by the boundary expansion ratio of the coherent vortex, \(R_{exp}=S/S_{0}\), in which \(S_{0}\) is the coherent vortex size before expansion and \(S\) is the expanded size at \(\tau=\)0.75. Sample system with higher Re\({}_{9}\) and St, denoting stronger circulation and particle inertia, display the highest expansion on average as shown. ### Spore transport by hyperbolic LCS Interactions of vortex structures during their growth and shedding organize the airflow near the leaf into nested hyperbolic LCS, which attract or repel particles readily ejected by the vortices. To identify hyperbolic LCS, finite-time Lyapunov exponent (FTLE) diagnostics is initially applied, outputting the flow separation rate for the 2D \(yz\) domain (see calculation details in SI Appendix, section F). In brief, the calculation takes a infinitesimal perturbation around a point \(\mathbf{x}(t_{0})\), expressed as \(|\delta\mathbf{x}(t_{0})|\), and extract the exponent of the perturbation growth \(\sigma\) in time \(\tau_{i}T\): \[|\delta\mathbf{x}(t_{0}+\tau_{i}T)|=e^{\sigma\tau_{i}T}|\delta\mathbf{x}(t_{0} )|. 
\tag{4}\] Figure 4: **A.** Experimental particle (pollen) tracking overlaid on background FTLE fields for \(\tau=0.66-1.19\), [\(U_{d}\), \(R_{d}\)]=[2.97 ms\({}^{-1}\), 1.60 mm]. **B.** Average FTLE (unit of frame\({}^{-1}=3000\) s\({}^{-1}\)) of particles for \(\tau=0.0-4.0\), [\(U_{d}\), \(R_{d}\)]=[2.97 ms\({}^{-1}\), 1.60 mm]. The LCS growth period from \(\tau=0-1\) and the upstroke particle attraction, downstroke particle release periods are labeled. **C.** Experimental (pollen) time-series of coherent vortex expansion for \(\tau=0.57-0.75\); the background LavID is obtained at \(\tau=0.75\) (end of upstroke dispersion) and an integration period of \(\tau=-0.17\). **D.** Coherent vortex (RCV) expansion for \(\tau\in\) [0.57 0.75], at different particle Stokes number and drop Reynolds number. **E.** Advection of the attractive LCS at \(\tau=1.10-3.25\). **F.** Normalized MSD for particles and FTLE ridge locations. Both are normalized by the beam width \(b^{2}\). Same video source as in **E.** Reynolds number of the beam and Stokes number of particle is listed. **G.** Migration of actual attractive LCS (red lines) onto high FTLE regions (white in background), and the migration of coherent vortices (in blue) over \(\tau=\in\) [1.08 1.34] (downstroke). The high FTLE regions corresponds to that of Fig. 3. Observing the dynamics backward in time \(\tau_{i}<0\), regions with the largest perturbation growth (high \(\sigma\)) reveal the most attractive surfaces as they pull together fluid elements furthest apart, namely, the attractive LCS. They primarily concentrate in the red regions (high FTLE lines) in Fig. 3\(C\) for the downstroke. Equivalently, regions with maximum repulsion, i.e. the repulsive LCS, is obtained with forward integration \(\tau_{i}>0\), and reside mainly in the red regions in Fig. 3\(D\), where particles are stretched apart the most in future time (see full sequence in SI Appendix, section F and SI movie S4). We will refer to such dark red regions with high FTLE values as FTLE ridges below, which typically coincide with LCS locations. Combining with the coherent vortices, a more complete picture of spore dispersion from fluttering leaves can be depicted. Schematics in Fig. 3\(A\) illustrates these dynamics under the two types of LCS. Immediately before downstroke at \(\tau\approx 0.75\), upstroke vortices continue to eject particles outward, indicated also by the surrounding repulsive LCS in blue curves. As the downstroke vortex grows in strength, significant shearing between the two sets of vortices develops, and their coupling creates attractive LCS that pulls particles outward at an angle as mentioned in previous discussion of Fig. 1. A cap-like attractive LCS then develops on the dipole exterior. It has multiple repulsive LCS penetrating through, cooperatively pulling particles outward. The process completes as the substrate reaches minimum position, and the nested hyperbolic LCS expands in size before weakening. During the downstroke, coherent vortices remain active in flux, ejecting more particles onto the nest. Dynamic flow attraction on particles is further validated by overlaying particle locations onto the FTLE map (backward integrated), shown in Fig. 4\(A\). Extracting the FTLE values of these particles overtime reveals the continuous development of flow coherence during the first cycle, shown in Fig. 4\(B\). FTLE increment indicates particles exiting high-vorticity regions and entering high-strain regions, resulting in particle entrapment on LCS. 
Cyclic rise and fall indicate that particles are pulled into attractive profiles during upstroke, and released into the surrounding in a advection-assisted diffusion process during downstroke, confirming our finding in Fig. 2. Therefore, particle entrapment is only momentary, as the particles are released in each cycle. The hyperbolic LCS can have prolonged influences on more distanced particles, since they can have a long lifetime, \(\tau>3\), as they expand and travel outward, demonstrated in Fig. 4\(E\). The speed of advection is identical to that of the particle cluster boundary. This is shown in Fig. 4\(F\), where the normalized MSD over time is plotted along the outermost FTLE ridge positions, \(\mathbf{x}_{\text{ridge,max}}^{2}\). This makes sense since the frontier of the super-diffusive stream flow discussed in Fig. 2 must be an expanding, attractive LCS that pulls materials outward. Backward FTLE ridges for the wheat leaf samples are also extracted and shown in SI Appendix, section F. Similar attractive LCS profiles to Fig. 3\(B\) are displayed, validating the surrogate beam model. ### Flow dynamics from geodesic transport theory While FTLE diagnostics render the approximate locations of the LCS, ridges are merely coherence imprints that is left behind by true LCS in the flow. For a more rigorous identification, we turned to the geodesic transport theory [37] to calculate the attractive LCS as material lines (a set of fluid elements) in the 2D domain. For each fluid patch that is shown in the domain of Fig. 3_B-D_, an attractive LCS (red line) can be calculated over \(t\in[t_{0},t_{0}+\tau_{i}T]\) as shown in the inset of Fig. 4\(G\). In forward time, these LCS first attract particles in the fluid patch, then pulls the whole fluid patch forward with itself as the center backbone. Eventually, at the end of integration, many of them land near the FTLE ridges, an imprint left by this migration. The dynamics sequence is shown for the downstroke in Fig. 4\(G\) with backward FTLE map and coherent vortices overlaid. Similar analysis is documented in literature for geophysical flows [38]. The landing proximity of LCS to FTLE ridges depends on the Stokes number. Particles with higher Stokes number exhibit more preferential concentration and pattern formation near the FTLE ridges, as the inertia of particles introduces bias in trajectories towards low-vorticity, high-strain regions, commonly observed in literature [39; 40]. St, \(r_{p}\), and \(\rho_{p}\) of common bio-aerosols and experimented particles used are reported in Table 1 under Methods section. ## III Discussion In this work, we studied the dynamics of how pathogenic spores escape vibrating leaf boundaries in generated flows. We discovered that leaf elasticity enabled a vortex-induced stream flow that organizes and promotes spore dispersion. With surrogate beam as a model system, we first use theory and empirical evidence to predictively model dispersion strength with drop-leaf interactions. Utilizing LCS diagnostics, we revealed the full dynamical pathways embedded in the generated stream transport by identifying the nested attraction and repulsion regions. The modeling, along with the dynamics picture, proposes a mechanical explanation for the co-occurrence of rainfall and bio-aerosol dispersion in air [8]. This is directly proven as Reynolds number of the leaf vibration linearly scales with Reynolds number of the drop, \(\mathrm{Re}_{b}\propto\mathrm{Re}_{d}C_{M}\). 
And Reynolds number of leaf vibration is the main tuning parameter that linearly couples with Reynolds number of the average stream flux \(\mathrm{Re}_{st}\approx 0.12\mathrm{Re}_{b}\), and the circulation strength \(\Gamma/\nu_{a}\propto\mathrm{Re}_{b}\). Most importantly, we show that the average particle dispersion speed scales with beam speed as \(\mathrm{Re}_{disp.}\sim\mathrm{Re}_{b}/2\) under linear approximation. We thus observe empirically dispersion rate increases with \(\mathrm{Re}_{d}\). Therefore, our work directly provides physical explanation for more effective bio-aerosols dispersion during rainfall as observed. While leaf elasticity enables the flow generation, vibration frequency \(f\) itself do not directly influence leaf vibration speed \(\mathrm{Re}_{b}\). A simple scaling shows that amplitude and frequency are coupled as \(A\sim U_{d}/f\), based on the balance of elastic potential and input drop energy as \(E_{d}=m_{d}U_{d}^{2}/2=E_{b}\sim m_{b}(Af)^{2}\) (see details of derivation in [16]). And thus \(V_{b}\sim Af\sim U_{d}f/f\sim U_{d}\), a function of \(U_{d}\) and prefactors only. Such result holds for drop impacts and ambient perturbations as long as the initial elastic deformation is the sole source of potential energy input for the substrate, without consideration for damping. Therefore, leaves of different natural frequency \(f\), can similarly vibrate and disperse under this mechanism, and the Reynolds numbers \(\mathrm{Re}_{b}\) and \(\mathrm{Re}_{disp.}\) only scale with \(\mathrm{Re}_{d}\) for early cycles \(\tau<\)1. The latter cycles \(\tau>\)1 however, is strongly affected by the damping coefficient, \(\varsigma\), as energy loss becomes significant. A significant portion of the energy is dissipated in generating airflow, thus air damping is three orders of magnitude higher than other damping factors [16]. \(\varsigma\) scales with size and frequency parameters as \(\varsigma\sim\frac{\mu_{a}}{\rho_{b}tbf}\)[16]. Therefore, air damping effect is inversely related to vibration frequency. Since we know that \(f\propto\sqrt{\frac{EI}{m_{b}}}\frac{1}{L^{1.5}}=\sqrt{\frac{EI}{\rho_{b}tb}} \frac{1}{L^{2}}\) (\(\rho_{b}\) is the beam density and \(t\) is the substrate thickness) [16], a leaf structure with a higher flexural rigidity \(EI\) thus provides more vibration-based dispersion beyond the first cycle. Damping also comes into play as the leaf shape varies. The aspect ratio, \(C_{bL}=L/b\), is an important shape parameter of the reduced-order leaf model. Assuming both area and mass of the leaf are constant, the beam speed, \(V_{b}\), is independent of \(C_{bL}\) without damping. By considering air damping, however, scaling analysis shows that \(\omega\sim L^{-2}\) and \(b\sim L^{-1}\), and thus \(\varsigma\sim C_{bL}^{3/2}\) in the first order. It shows that the averaged beam speed, \(\bar{V}_{b}\), decreases with an increasing aspect ratio by considering the air damping. Conversely, a beam with a low aspect ratio promotes the dispersion of particles in the air. However, these particles need to travel across the leaf width in order to become suspended (see details in SI Appendix, section C). A critical width scale thus exists as \(b_{c}\sim R_{d}\mathrm{We}^{1/2}\), for direct collision ejection of particles off the surface. We account for three dimensional effects here. First, we observe empirically that substrate velocity and thus vorticity decreases as we move away from leaf tip in the \(\hat{x}\) direction towards the stem. 
Theoretically, drop impacts at location of \(x\) measured from leaf stem, yield vibration velocity of \(\frac{V_{b,x}}{V_{b,tip}}=(3k^{2}-k^{3})/2\), with ratio \(k\) defined as \(k=x/L\), as discussed in SI Appendix, section B. Therefore, impacts away from the tip simply diminish vibration and dispersion speed. Second, background flow in \(\hat{y}\) is observed to suppress the dispersion stream on the opposing edge, which has a flow stream going against the background velocity, but this ambient flow in \(\hat{y}\) augments the stream on the other edge simultaneously. Lastly, since leaf tips at \(x=L\) are typically pointy with no realistic front edge, the scenario of front edge dispersion in \(\hat{x}\) is not considered here. In summary, we have developed a comprehensive model that can accurately predict the influence of leaf properties, particle characteristics, and raindrop conditions on spore dispersion in a quiescent environment, i.e. minimal background flow and large Strouhal number, Str\(=fA/\bar{U}_{bkg.}\gg 1\). Local acceleration of the vibrating substrate here dominates the background advection, which allows us to parametrically analyze the drop-leaf mechanics alone. We are able to prove that without significant turbulence, the drop impact alone channels enough energy to power a stream flow as \(\mathrm{Re}_{disp.}\sim\mathrm{Re}_{b}/2\), which becomes super-diffusive above \(\mathrm{Re}_{b}>\)1000. To further complete the energy analysis, we approximated the kinetic energy of the vortical flow as \(E_{\Gamma}\sim\rho_{a}\Gamma^{2}L_{\Gamma}\)[41], where \(L_{\Gamma}\) is the length of the connected vortex tube. We can then compare the energy budget spent in airflow generation via a drop impact on stationary vs. flexible surfaces. We could first estimate \(\Gamma_{stat.}\approx 0.1U_{d}R_{d}\mathrm{Re}_{d}^{3/8}\mathrm{Re}_{a}^{-1/4}\) (where \(\mathrm{Re}_{a}=U_{d}(2R_{d})/\nu_{a}\)) from literature [14] and obtained the expression for \(\Gamma_{flex.}\) from \(\Gamma_{max}\) above. Then, at varying input drop energy \(E_{d}=\)0-0.01 J (from the natural range of [\(U_{d}\), \(R_{d}\)] in this present study), we calculated that a ratio of the rotational energy to the drop kinetic energy, \(E_{\Gamma}/E_{d}\), for a flexible leaf surface ranges from 2.5-5.5%, whereas for a stationary leaf the ratio is around 0.5-1.5% across the parameter space. Leaf elasticity has permitted more energy budget for dispersive vortex generation, a key role that has been largely omitted in leaf-store dispersion mechanics. Therefore, the current study renews the current understanding of spore dispersal avenues [42; 43; 44; 14] and connects impact mechanics to Lagrangian coherence, uncovering an active spore dispersion mechanism that shows less reliance on passive environmental carriers such as traveling splashed droplets or background canopy currents. The work establishes the coupling of leaf elasticity and rainfall in generating a stream flow that disperses surface-bound spores upward (\(+\hat{z}\)) and sideways (\(+\) or -\(\hat{y}\)), with defined dynamics pathways hidden within. ## IV Methods ### Drop-impact experiments High-speed photography (FASTCAM, Photron) at 1,000-3,000 fps is used. A flapping wheat leaf is mechanically modeled as an angularly flapping thin cantilever beam; thin polycarbonate beams, \(\rho_{b}\)=1,220 kg/m\({}^{3}\), are used experimentally, whose dimensions, rigidity, and wetting conditions are documented in SI Appendix, section A), along with that of the wheat samples. 
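The reduced-order beam quantities referenced above can be evaluated directly for the polycarbonate surrogate just described. The sketch below uses the textbook clamped-free first-mode prefactor \(\lambda_{1}\approx 1.875\) and an assumed Young's modulus for polycarbonate; the air-damping relation is reported only up to an unspecified proportionality constant, so the value printed for it is a scale, not the damping coefficient itself.

```python
import numpy as np

# beam surrogate (dimensions and density as quoted in the text)
rho_b = 1220.0                       # kg/m^3, polycarbonate
L, b, t = 0.080, 0.020, 0.3e-3       # m
E = 2.3e9                            # Pa, Young's modulus of polycarbonate (assumed)
mu_a = 1.8e-5                        # Pa s, air dynamic viscosity (assumed)

EI = E * b * t**3 / 12.0             # flexural rigidity, ~1e-4 N m^2
mu_lin = rho_b * b * t               # mass per unit length

# first-mode natural frequency of a clamped-free beam (textbook prefactor assumed)
lam1 = 1.8751
f1 = (lam1**2 / (2.0 * np.pi)) * np.sqrt(EI / (mu_lin * L**4))

# air-damping scaling from the text, zeta ~ mu_a / (rho_b * t * b * f): prefactor unknown
zeta_scale = mu_a / (rho_b * t * b * f1)

# vibration speed produced by an impact at x from the stem, relative to a tip impact
k = np.linspace(0.0, 1.0, 6)                         # k = x / L
profile = (3.0 * k**2 - k**3) / 2.0

print(f"EI ~ {EI:.1e} N m^2, f1 ~ {f1:.1f} Hz, damping scale ~ {zeta_scale:.1e}")
print("V(x)/V_tip at k = x/L:", dict(zip(np.round(k, 1), np.round(profile, 2))))
```

With these assumptions the surrogate beam has EI of order \(10^{-4}\) N m\({}^{2}\) and a first-mode frequency of roughly 10 Hz, and the profile confirms that impacts near the stem (small k) produce only a small fraction of the tip-impact vibration speed.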
Mechanically, the beam substrate is fixed by clamping on one end along the longitudinal axis. The rotational degree of freedom around the longitudinal axis is thus limited with the high \(L/b\) ratio used and the center impact along this axis. After securing the beam, drop impacts are induced with a syringe pump at a pumping rate of 0.2 mm/min. An impact is induced near the tip of the beam (8-10 mm) to observe the maximum impact consequences. The location is chosen to prevent significant spillage as well, since the maximum spreading radius \(R_{m}\) can be calculated as 6 - 11 mm, with \(R_{m}\approx(1/2)R_{d}\mathrm{Re}_{d}^{1/4}\) from the aforementioned [\(U_{d}\), \(R_{d}\)] conditions [14]. Combinations of beam, drop, and impact velocity tested in Fig. 2G are listed in SI Appendix, section D. ### Visualization methodology Particle visualization employs the use of glass particles and pine pollens. They are uniformly deposited on top of the substrate surfaces prior to the drop impact experiment, with size, density, and Stokes number, \(\mathrm{St}=2/9(t_{p}/t_{f})\), reported in Table 1. Particle layer thickness is consistent with experimentation methodology in [14], at 0.1-0.2 mm. Particle size and density ranges are typically \(r_{p}=1.0-20.0\)\(\mu\)m and \(\rho_{p}=1.0-2.5\)\(\times 10^{3}\) kg/m\({}^{3}\) respectively. Smoke visualization is used to perform two-dimensional digital particle image velocimetry (DPIV) on the 2D transverse cross section, in order to extract the velocity and vorticity fields at the location of impact and shedding. Chauvet smoke machine is paired with a 40-60 glycerol-water mixture to produce a thick smoke layer that fills the field of view. A laser beam (sheet laser) with the intensity of 5 mW is used to illuminate the smoke layer at the 2D transverse cross-section of impact point, with laser sheet thickness of 0.2 - 0.5 mm. DPIV is conducted with the MATLAB package PIVLab by Thielicke [48]. CLAHE is enabled with window size 64 as the only image setting. The analysis uses an FFT window deformation with 3 passes; pass 1 is integration area of 120 pixel, and 64-pixel step; pass 2 is integration area of 64 pixel, and 32 pixel-step; pass 3 is integration area of 32 pixel, and 16 pixel-step. Gauss 2X3-point estimator is used with high correlation robustness. The error of the velocity vectors are estimated to be 0.0128 m/s from difference of actual tracer measurements and DPIV analysis, an error rate of 2-10 %. ###### Acknowledgements. Wheat samples are grown at the Plant Breeding and Genetics Section, School of Integrative Plant Breeding and Genetics Section at Cornell University. Please see wheat preparation details in the SI Appendix, section A. This work was supported by the National Science Foundation Grant No. ISO-2120739. The collaboration between F.J.B.-V. and S.J. was initiated at the Aspen Center for Physics, which is supported by the National Science Foundation grant \begin{table} \begin{tabular}{|c|c|c|c|} \hline Particle Types (*: experimented) & \(\rho_{p}\) (g/cm\({}^{3}\)) & \(r_{p}\) (\(\mu\)m) & St \\ \hline *Soda lime glass spheres (SGS) & 2.5 & 10 & 0.02-0.08 \\ \hline *Glycerine-water smoke & 1.0 & 1-2 & 1e-4-4e-3 \\ \hline *Pine (_P. contorta_) pollen & 1.2 & 20-25 & 0.1-0.9 \\ \hline Forget-Me-Not (_M. palustris_) pollen & 1.2 & 2.5-5.0 [47] & 0.002-0.005 \\ \hline Wheat rust (_P. triticina_) spore & 1.0 & 10 [14] & 0.02-0.1 \\ \hline \end{tabular} \end{table} Table 1: Physical properties of particles and Stokes number. 
\(t_{f}=0.01-0.05\) s. Pollen and spore densities are taken from the literature [46; 44]. The Aspen Center for Physics is supported by National Science Foundation grant PHY-1607611.
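As a self-contained numerical companion to the Lagrangian diagnostics used above (Eqs. 2-4), the sketch below computes forward and backward FTLE fields for the standard time-dependent double-gyre test flow rather than for the experimental DPIV fields. The flow, grid, and integration horizon are illustrative choices and are not part of this study's pipeline; high forward-FTLE ridges mark repulsive LCS and high backward-FTLE ridges mark attractive LCS, as in Fig. 3.

```python
import numpy as np

def double_gyre_velocity(x, y, t, A=0.1, eps=0.25, om=2.0 * np.pi / 10.0):
    """Standard unsteady double-gyre test flow on [0,2] x [0,1]."""
    a = eps * np.sin(om * t)
    b = 1.0 - 2.0 * eps * np.sin(om * t)
    f = a * x**2 + b * x
    dfdx = 2.0 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def flow_map(x0, y0, t0, T, n_steps=200):
    """Integrate tracers with RK4 from t0 to t0+T (negative T gives the backward map)."""
    x, y = x0.copy(), y0.copy()
    dt = T / n_steps
    for i in range(n_steps):
        t = t0 + i * dt
        k1 = double_gyre_velocity(x, y, t)
        k2 = double_gyre_velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], t + 0.5 * dt)
        k3 = double_gyre_velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], t + 0.5 * dt)
        k4 = double_gyre_velocity(x + dt * k3[0], y + dt * k3[1], t + dt)
        x = x + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y = y + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

def ftle(t0=0.0, T=10.0, nx=201, ny=101):
    """FTLE = ln(largest singular value of the flow-map gradient) / |T|, cf. Eq. (4)."""
    x0, y0 = np.meshgrid(np.linspace(0.0, 2.0, nx), np.linspace(0.0, 1.0, ny))
    xf, yf = flow_map(x0, y0, t0, T)
    dxf_dx, dxf_dy = np.gradient(xf, x0[0, :], y0[:, 0], axis=(1, 0))
    dyf_dx, dyf_dy = np.gradient(yf, x0[0, :], y0[:, 0], axis=(1, 0))
    # Cauchy-Green tensor C = F^T F; its largest eigenvalue gives the squared stretch
    c11 = dxf_dx**2 + dyf_dx**2
    c12 = dxf_dx * dxf_dy + dyf_dx * dyf_dy
    c22 = dxf_dy**2 + dyf_dy**2
    tr, det = c11 + c22, c11 * c22 - c12**2
    lam_max = 0.5 * tr + np.sqrt(np.maximum(0.25 * tr**2 - det, 0.0))
    return np.log(np.sqrt(lam_max)) / abs(T)

sigma_fwd = ftle(T=+10.0)    # repulsive LCS appear as high forward-FTLE ridges
sigma_bwd = ftle(T=-10.0)    # attractive LCS appear as high backward-FTLE ridges
print("forward FTLE range:", float(sigma_fwd.min()), float(sigma_fwd.max()))
```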
2303.08329
Cross-speaker Emotion Transfer by Manipulating Speech Style Latents
In recent years, emotional text-to-speech has shown considerable progress. However, it requires a large amount of labeled data, which is not easily accessible. Even if it is possible to acquire an emotional speech dataset, there is still a limitation in controlling emotion intensity. In this work, we propose a novel method for cross-speaker emotion transfer and manipulation using vector arithmetic in latent style space. By leveraging only a few labeled samples, we generate emotional speech from reading-style speech without losing the speaker identity. Furthermore, emotion strength is readily controllable using a scalar value, providing an intuitive way for users to manipulate speech. Experimental results show the proposed method affords superior performance in terms of expressiveness, naturalness, and controllability, preserving speaker identity.
Suhee Jo, Younggun Lee, Yookyung Shin, Yeongtae Hwang, Taesu Kim
2023-03-15T02:34:03Z
http://arxiv.org/abs/2303.08329v1
# Cross-Speaker Emotion Transfer by Manipulating Speech Style Latents ###### Abstract In recent years, emotional text-to-speech has shown considerable progress. However, it requires a large amount of labeled data, which is not easily accessible. Even if it is possible to acquire an emotional speech dataset, there is still a limitation in controlling emotion intensity. In this work, we propose a novel method for cross-speaker emotion transfer and manipulation using vector arithmetic in latent style space. By leveraging only a few labeled samples, we generate emotional speech from reading-style speech without losing the speaker identity. Furthermore, emotion strength is readily controllable using a scalar value, providing an intuitive way for users to manipulate speech. Experimental results show the proposed method affords superior performance in terms of expressiveness, naturalness, and controllability, preserving speaker identity. Suhee Jo, Younggun Lee, Yookyung Shin, Yeongtae Hwang, Taesu Kim Neosapience, Inc. Speech Synthesis, Emotion Transfer, Emotional Speech Synthesis, Latent Space Manipulation ## 1 Introduction As there are growing expectations for human-like TTS, subtle changes in prosody or emotion should be reflected in the output of a TTS model. Humans can speak with different emotions and this leads to rich and diverse conversations. However, emotional speech data are not easy to acquire. It is extremely hard to record multiple sentences for a long time while consistently preserving emotion. Moreover, due to the ambiguity of emotion labels, samples with inconsistent emotion labels are easily observed in open source emotional speech datasets [1]. Even if it is possible to find a correct emotion label, expressiveness of emotion is limited without controllability of emotion intensity. Most of the previous methods for emotional speech synthesis use additional emotion encoding or a reference audio [2, 3, 4]. These models require an emotion label for each sample, consuming a fair amount of emotion-tagged speech data [2, 4]. Furthermore, inconsistency in emotion labels leads to degraded performance of emotional speech synthesis. Also, cross-speaker emotion transfer often does not work when an unseen emotion is transferred to a speaker. In regard to emotion strength control, there have been several attempts [5, 6] to generate emotion strength scores using an emotion classifier or a ranking function. These approaches are still exposed to mislabeling problems, however, as they use emotion labeled data to extract emotion intensity. To address the issues noted above, we propose transferable and controllable emotion speech synthesis by leveraging rich latent representation. Domain adversarial training and cycle-consistency loss disentangle the speaker from style, making the latent style space rich and transferable. During training, the entire model is trained without any emotion label. To transfer emotion, we utilize a SVM hyperplane during inference to manipulate the style latent towards a desirable emotion. Our method successfully transfers emotion to an emotion-neutral reading-style speaker with only a handful of emotion labeled samples. Furthermore, without external labels, emotion intensity can be controlled by a scalar value, which is easy and intuitive. 
The generated audio samples are available at [http://emo-transfer.github.io](http://emo-transfer.github.io) ## 2 Related Works ### Emotional Speech Synthesis For emotional speech synthesis, it is common to use an emotion label as a global condition [2, 3]. Otherwise, emotion information is extracted from a reference audio [7, 4] or text [8]. Most of these methods not only require a large quantity of emotion labeling, but also often fail to achieve good quality in cross-speaker emotion transfer. Some approaches [9, 10] use Speech Emotion Recognition (SER) to obtain emotion labels. However, SER is another challenging task and it still requires emotion tagged data to train itself. To reflect not only a type of emotion but also its intensity, methods for controlling emotion strength have been suggested [5, 6, 11]. [12] uses an external SER model to extract emotion intensity scores, whereas [6] uses a ranking function to predict emotion intensity. However, for these models all data should still be labeled for training. ### Latent Space Manipulation In the image synthesis domain, attribute editing using StyleGAN [13] has been widely studied [14, 15, 16]. Such methods utilize latent space of StyleGAN to manipulate attributes. Among these methods, [14] provides a simple yet effective approach for editing. Using a hyperplane in latent space that discriminates attribute-positive and attribute-negative samples, facial attributes such as age, gender or pose can be edited. The hyperplane is acquired from SVM training. In practice, any binary attributes can be manipulated when latent vectors from both positive and negative sides are given. In this paper, we adopt the method of latent space manipulation suggested in [14] to synthesize emotional speech from emotion-neutral speakers. Details of training the SVM and manipulating latents will be described in Section 3.2 ## 3 Method To facilitate cross-speaker emotion transfer, we propose domain adversarial training and cycle-consistency loss for the acoustic model to learn disentangled style and speaker representation. Along with this model, we suggest a method for controlling emotion and its intensity by utilizing a hyperplane obtained from training a SVM. ### Disentangled Latent Style Vectors We focus on extracting rich yet disentangled style representation from speech. If style space is disentangled from speaker identity, cross-speaker emotion transfer becomes easier. Otherwise, latent vector manipulation using a SVM will not work well, as a direction vector acquired from the SVM will also transform speaker information. For example, converting an emotion-neutral sample to an angry one can result in a change of the speaker identity. Therefore, we try to disentangle the speaker identity to generate better style latent space for emotion transfer. #### 3.1.1 Tacotron2-based Acoustic Model Our proposed method is based on Tacotron2 with the following modifications. The main difference is that we adopt a style encoder and a speaker encoder. The style encoder has a target mel spectrogram as an input and generates a style vector. The architecture of the style encoder is based on [17]. The style encoder generates the final output, called a style vector, which is added to the output of a text encoder. The speaker encoder also receives a mel spectrogram as an input and generates a speaker vector. The speaker encoder is composed of a LSTM and a projection layer. The speaker vector is concatenated to an input of a decoder. 
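The wiring described above (a style vector added to the text-encoder output, a speaker vector concatenated at the decoder input, and a speaker encoder built from an LSTM plus a projection) can be sketched as a minimal PyTorch data-flow illustration. The tensor sizes are assumptions for illustration (80-bin mel inputs; the 512-dimensional vectors and 258-unit LSTM follow the experimental setup described later), the text encoder, attention, and decoder are left as stubs, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    """Mel spectrogram -> fixed-size speaker vector (LSTM + projection), per Sec. 3.1.1."""
    def __init__(self, n_mels=80, hidden=258, out_dim=512):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, out_dim)

    def forward(self, mel):                  # mel: (batch, frames, n_mels)
        _, (h, _) = self.lstm(mel)           # final hidden state summarises the utterance
        return self.proj(h[-1])              # (batch, out_dim)

class StyleConditionedTTS(nn.Module):
    """Data-flow sketch: style added to the text encoding, speaker concatenated for the decoder."""
    def __init__(self, text_encoder, style_encoder, decoder, dim=512):
        super().__init__()
        self.text_encoder = text_encoder     # Tacotron 2 text encoder (stub)
        self.style_encoder = style_encoder   # reference-mel style encoder (stub)
        self.decoder = decoder               # attention + decoder (stub)
        self.speaker_encoder = SpeakerEncoder(out_dim=dim)

    def forward(self, text, ref_mel):
        h_text = self.text_encoder(text)                 # (batch, T_text, dim)
        w = self.style_encoder(ref_mel)                  # style vector, (batch, dim)
        s = self.speaker_encoder(ref_mel)                # speaker vector, (batch, dim)
        h = h_text + w.unsqueeze(1)                      # style vector added to text encoding
        s_rep = s.unsqueeze(1).expand(-1, h.size(1), -1) # speaker vector broadcast over time
        return self.decoder(torch.cat([h, s_rep], dim=-1)), w, s
```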
#### 3.1.2 Domain Adversarial Training By adversarially classifying a style vector into a speaker class, the style encoder learns speaker-independent style. In the speech domain, there have been many applications of domain adversarial training, such as [11, 18]. This has been shown to be largely effective in disentangling information. Our adversarial speaker classifier consists of linear layers. Prior to the layers, a gradient reversal layer is attached. The gradient reversal layer reverses gradients from the classifier so that the style vector cannot be used to discriminate speakers. On the speaker encoder side, a speaker classifier follows the speaker encoder to maintain speaker information. The speaker classifier shares the same structure with the adversarial speaker classifier. The only difference is that it does not have a gradient reversal layer. #### 3.1.3 Cycle-consistency Loss For both the style encoder and speaker encoder, we adopt cycle-consistency loss to preserve information. If we denote the speaker encoder by \(E_{s}\), the style encoder by \(E_{w}\), and the target mel spectrogram by \(x\), a speaker vector \(s\) and a style vector \(w\) are the outputs of \(E_{s}(x)\) and \(E_{w}(x)\), respectively. The generated output of the entire TTS model is \(f(t,w,s)\), where \(t\), \(w\), \(s\) are the input text, style vector, and speaker vector, respectively. After generating a predicted output \(\hat{x}\), we use speaker vectors from randomly sampled speakers to produce \(s^{\prime}\). Then, \(x^{\prime}\) is generated by \(f(t,w,s^{\prime})\). \(x^{\prime}\) is used as an input of both the speaker encoder and the style encoder, which leads to \(E_{s}(x^{\prime})\) and \(E_{w}(x^{\prime})\). The cycle-consistency losses for style and speaker can be written as below, where \(N\) is the batch size during training and Mean Squared Error (MSE) is used for each loss term. \[L_{style}=\frac{1}{N}\sum_{i=1}^{N}(w_{i}-E_{w}(x_{i}^{\prime}))^{2} \tag{1}\] \[L_{speaker}=\frac{1}{N}\sum_{i=1}^{N}(s_{i}^{\prime}-E_{s}(x_{i}^{\prime}))^{2} \tag{2}\] ### Controllable Cross-speaker Emotion Transfer Before inference, we extract speaker vectors from all utterances of a target speaker. We then obtain the centroid of those vectors to use it as the speaker vector for inference. Likewise, the style vector that is fed to the model during inference is the centroid of all style vectors from the target speaker. A style vector without any manipulation represents emotion-neutral style, as the speakers we experimented on are emotion-neutral speakers. To transfer emotion, we edit the style vector using a SVM. We train a SVM with a positive group of style vectors from a certain emotion and a negative group from emotion-neutral samples. As [19] showed gender and accent transformation through vector operations, it can be assumed that there exists a hyperplane for each attribute that separates attribute-positive and attribute-negative latent vectors. To find such a hyperplane, we use a linear SVM. After training the SVM, we define a unit normal vector \(n\) that is perpendicular to the hyperplane. By using \(n\), we can determine on which side of the hyperplane a style vector \(w\) lies, using the metric \(d(n,w)=n^{T}w\). If \(d(n,w)\) is negative, \(w\) is on the attribute-negative side; if \(d(n,w)\) is positive, \(w\) is attribute-positive. In our case, \(w\) is the centroid of all style vectors from a target speaker, as described above.
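The hyperplane fit and the signed-distance test described above can be sketched with scikit-learn and numpy as follows. This is a sketch under our own naming; the paper does not prescribe a specific library or hyperparameters.

```python
import numpy as np
from sklearn.svm import LinearSVC

def emotion_direction(neutral_vecs: np.ndarray, emotion_vecs: np.ndarray) -> np.ndarray:
    """Fit a linear SVM separating emotional (label 1) from neutral (label 0)
    style vectors and return the unit normal n of the decision hyperplane."""
    X = np.concatenate([neutral_vecs, emotion_vecs], axis=0)
    y = np.concatenate([np.zeros(len(neutral_vecs)), np.ones(len(emotion_vecs))])
    svm = LinearSVC(C=1.0, max_iter=10_000).fit(X, y)
    normal = svm.coef_[0]                 # normal of the separating hyperplane
    return normal / np.linalg.norm(normal)

def side(n: np.ndarray, w: np.ndarray) -> float:
    """d(n, w) = n^T w: positive -> attribute-positive side, negative -> negative side."""
    return float(n @ w)
```

In the setting described in the text, the inputs would be on the order of one hundred 512-dimensional style vectors per class.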
By adding \(n\) multiplied by a scaling factor \(\alpha\) to \(w\), as in (3), emotion can be adjusted. \[w_{edit}=w+\alpha n \tag{3}\] If \(\alpha>0\), the emotion of the edited vector \(w_{edit}\) is led towards the direction of the intended emotion, since \(d(n,w_{edit})\) becomes \(d(n,w)+\alpha\). If \(\alpha<0\), \(w_{edit}\) moves towards the opposite direction. By increasing \(\alpha\) over the range [0, 2], a gradual change in emotion intensity is obtained. One more interesting point is that even if \(\alpha\) is set to a negative value, it gives a meaningful result. For example, if there exists an editing vector that manipulates a style vector towards sad emotion, we can adjust \(\alpha\) to a negative value to make it sound happy. To manipulate emotion-related style alone without affecting other speaking styles of a speaker, we use conditional manipulation. Given \(n_{1}\) and \(n_{2}\) from two hyperplanes, we obtain a projected direction \(n_{1}-(n_{1}^{T}n_{2})n_{2}\) that makes \(n_{1}\) independent of the direction \(n_{2}\). As we are trying to manipulate style, \(n_{1}\) is the unit normal vector of a hyperplane learned from emotion classification and \(n_{2}\) is that of a hyperplane learned from speaker classification. Figure 1: From the first stage (a), speaker-independent style vectors are extracted. In the second stage (b), we use a unit vector \(n\) to transfer emotion. The unit vector is perpendicular to a SVM hyperplane that separates emotional style vectors from emotion-neutral ones. ## 4 Experiments and Results ### Dataset In this paper, we conducted experiments in both English and Korean. Our English dataset consists of 25 female speakers and 42 male speakers, with 127 hours in total, whereas our Korean dataset consists of 58 female speakers and 42 male speakers, with 270 hours in total. While an open-source dataset was also used, most of the samples were collected internally, recorded by professional voice actors. For the open-source datasets, we used DailyTalk [20] and the Korean Single Speaker (KSS) Speech Dataset [21]. Table 1 shows the training data and its speech style used for the acoustic model. ### Experimental Setup First, for training the acoustic model, we follow the details of the style encoder of [17]. The speaker encoder consists of one LSTM layer with a hidden size of 258 and one projection layer. Both the style vector and speaker vector are 512 dimensional. Both the adversarial speaker classifier and speaker classifier consist of three fully-connected layers, each followed by ReLU activation and a dropout rate of 0.1. All the classification losses are multiplied by 0.02 and added to the loss terms of the acoustic model. In the training process, we train the model with a batch size of 32 using the Adam optimizer. Learning rate scheduling follows the Noam scheduler, with an initial learning rate of \(10^{-3}\) and a warm-up threshold of 4000 steps. We trained the model for each gender and language, resulting in four models. All the models were trained for 60 epochs. Until the attention loss reaches 0.7, the attention module is trained alone to stabilize the training process. To train the SVM, style latent vectors for a negative set and a positive set were collected. The negative set consists of emotion-neutral samples, spoken in reading-style. The positive set consists of emotional samples of the desired emotion, chosen among angry, happy, and sad. Normally, 100 samples were randomly selected for each negative and positive set from the training samples for which emotion labels were available.
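Returning to the manipulation step of Section 3.2, the editing rule of Eq. (3) and the conditional manipulation reduce to a few lines of vector arithmetic. The sketch below uses our own hypothetical variable names (`w`, `n_emotion`, `n_speaker`), assumed to come from the centroid and SVM fits described above.

```python
import numpy as np

def edit_style(w: np.ndarray, n: np.ndarray, alpha: float) -> np.ndarray:
    """Eq. (3): move the (neutral) style centroid along the emotion normal."""
    return w + alpha * n

def conditional_direction(n1: np.ndarray, n2: np.ndarray) -> np.ndarray:
    """Project the emotion normal n1 onto the subspace orthogonal to the
    speaker normal n2, so that editing along it leaves speaker cues untouched."""
    n1_proj = n1 - (n1 @ n2) * n2
    return n1_proj / np.linalg.norm(n1_proj)

# Example intensity sweep (hypothetical inputs):
# n_cond = conditional_direction(n_emotion, n_speaker)
# edited = [edit_style(w, n_cond, a) for a in np.linspace(0.0, 2.0, 5)]
```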
Although using 100 samples showed stable performance, results with a single sample were good as well. When training a SVM under a one-shot setting, we selected paired samples, which share the same script and speaker, to eliminate confounding variables other than emotion. In the classification task, our SVM models achieved over 90% accuracy on a validation set. Additionally, a speaker classification task was conducted for conditional manipulation. We used all samples from the positive speaker and randomly selected the same number of negative samples. The speaker classification task achieved over 98% accuracy on a validation set. ### Evaluation We conducted a subjective evaluation using mean opinion score (MOS) to evaluate naturalness, speaker similarity, and emotion similarity. In the test, 15 subjects were asked to rate 180 sentences on a scale from 1 to 5. The generated samples were randomly selected from each emotion category. The number of samples per category was balanced. For speaker similarity, a ground truth sample was given for each entry as a reference. The participants compared a given sample with the ground truth on the basis of speaker identity. For emotion similarity, an emotion tag, for example, "happy", was given and the subjects were asked to judge whether a given speech sample expresses the emotion. To ease their decision, we provided a corresponding emotion-neutral sample that was generated from the same model with the same text but without emotion transfer. For the baseline, we used Tacotron2. The baseline model uses speaker embedding to encode speaker information while maintaining a style encoder with the same structure as in the proposed model. The baseline generates output with an averaged style vector, which was derived from 100 randomly selected emotion-labeled style vectors for each emotion. We used HiFi-GAN [22] as a vocoder to generate waveforms. \begin{table} \begin{tabular}{c|c||c|c} \hline \hline **Style** & **Ratio (\%)** & **Style** & **Ratio (\%)** \\ \hline \hline Reading & 60.4 & Fairglue & 1.3 \\ \hline Conversational & 1.2 & Angry & 2.1 \\ \hline Animation Dubbing & 12.1 & Sad & 2.0 \\ \hline Whisper & 1.0 & Happy & 2.3 \\ \hline Children & 7.6 & Not defined & 10.0 \\ \hline \hline \end{tabular} \end{table} Table 1: Speech Style of the Training Data \begin{table} \begin{tabular}{c|c|c|c} \hline \hline **Setting** & **Naturalness** & **Emo. Similarity** & **Spk. Similarity** \\ \hline Ground Truth & \(4.55\pm 0.05\) & - & - \\ Baseline & \(3.54\pm 0.08\) & \(2.48\pm 0.08\) & \(1.90\pm 0.07\) \\ Conventional [5] & \(3.86\pm 0.04\) & \(2.91\pm 0.04\) & \(2.46\pm 0.05\) \\ \hline **Proposed** & \(\mathbf{4.70\pm 0.04}\) & \(\mathbf{4.24\pm 0.06}\) & \(\mathbf{3.98\pm 0.07}\) \\ \hline \hline \end{tabular} \end{table} Table 2: MOS on naturalness, speaker similarity and emotion similarity Figure 2: VAD (Valence-Arousal-Dominance) plot according to emotion and its intensity. Table 2 shows the results of the evaluation of the proposed model along with ground truth, the baseline and the conventional model [5]. With respect to the MOS score, the proposed model outperforms the baseline model and the conventional model in all evaluation categories. In terms of naturalness, the proposed model was rated higher than the ground truth. As we collected speech data with various styles regardless of the source, a few ground truth samples contain unnatural prosody or noise. However, during training, this phenomenon appears to be diluted.
We also observed that the baseline model and the conventional model fall behind the proposed model due to their vulnerability to a wide range of variation in style. Naturally, emotion similarity and speaker similarity are at variance with each other, because speech features such as pitch or timbre vary greatly from the original speech samples as emotion becomes intense. Even with this contradiction, the proposed model shows high scores in both emotion similarity and speaker similarity. Overall, the proposed model maintains high fidelity in speaker identity and naturalness, expressing proper emotion at the same time. #### 4.3.1 Emotion Intensity Control To test emotion intensity control, we used a pretrained speech emotion recognition (SER) model [23]. This SER model was trained on MSP-Podcast [24]. We used the pretrained model as it was, without any fine-tuning. MSP-Podcast is labeled with arousal, valence, and dominance ranging from 1 to 7, and [23] normalized the values into an interval of 0 to 1. Happiness and anger are known to include high degrees of arousal and dominance whereas sadness has low degrees of arousal and dominance. Happiness also has high degrees of valence while anger and sadness are low in valence. In Figure 2, the x axis represents \(\alpha\), the scaling factor for emotion intensity, and the y axis represents the predictions for arousal, valence, and dominance ranging from 0 to 1. Fifty randomly selected samples were used to extract a mean value for each metric. As shown in Figure 2, for both angry and happy, arousal increases as we raise the intensity of emotion by controlling \(\alpha\), whereas arousal decreases for sad. Valence increases for happy whereas it slightly decreases for both angry and sad. The overall tendency shows that emotion intensity is well adjusted towards the intended emotion. #### 4.3.2 Few-shot Emotion Transfer To demonstrate few-shot emotion transfer, we compare one-shot emotion transfer to a 100-shot setting in an A/B preference test. In the preference test, participants were asked to select which of the two samples is more similar to a given speaker or emotion. As shown in Figure 3, the one-shot setting shows results comparable to the 100-shot setting; neither was clearly preferred. #### 4.3.3 Ablation Study To show the effectiveness of each module of the proposed model, we conducted an ablation study. Table 3 reports MOS scores for each case in terms of naturalness, speaker similarity and emotion similarity. Removing the adversarial speaker classifier or the cycle-consistency loss resulted in degradation in emotion similarity. In particular, the score for emotion similarity drops drastically without cycle-consistency loss. It can be inferred that cycle-consistency loss plays an important role in generating a disentangled yet rich latent style space. Although the model without cycle-consistency loss was rated highly on speaker similarity, this is because emotion was not transferred. In comparison, the proposed model shows superior performance in terms of emotion expression while maintaining sufficiently good scores in speaker similarity. This indicates that both components are effective in disentangling and preserving style and speaker information. Figure 4 also supports this argument, showing that the speaker vectors learned by the model successfully preserve speaker identity. ## 5 Conclusion This paper suggests a novel method for cross-speaker emotion transfer and manipulation by applying vector arithmetic on a disentangled latent style space.
To extract latent style without interference from speaker information, we propose domain adversarial training and cycle-consistency loss. In addition, we provide an intuitive way to transfer and to manipulate the style latent vector by using a SVM hyperplane. Experimental results show that our method greatly improves speaker similarity and emotion similarity while keeping naturalness, without leveraging a large amount of emotion labeled data. In future work, we will conduct experiments on other semantic attributes of speech data, such as age or gender. Figure 4: Visualization of speaker vectors learned by a speaker encoder. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline **Setting** & **Naturalness** & **Emo. Similarity** & **Spk. Similarity** \\ \hline **Proposed** & **4.77 \(\pm\) 0.04** & **4.32 \(\pm\) 0.07** & **4.31 \(\pm\) 0.09** \\ \hline w/o Adv. Speaker Classifier & \(4.72\pm 0.04\) & \(4.13\pm 0.07\) & \(4.34\pm 0.06\) \\ w/o Cycle-consistency loss & \(4.77\pm 0.04\) & \(3.55\pm 0.08\) & \(4.71\pm 0.05\) \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation Study Figure 3: AB preference test on emotion similarity and speaker similarity
2308.12003
Purification Dynamics in a Continuous-time Hybrid Quantum Circuit Model
We introduce a continuous time model of many-body quantum dynamics based on infinitesimal random unitary operations, combined with projective measurements. We consider purification dynamics in this model, where the system is initialized in a mixed state, which then purifies over time as a result of the measurements. By mapping our model to a family of effective 1D quantum Hamiltonians, we are able to derive analytic expressions that capture how the entropy of the system decays in time. Our results confirm the existence of two distinct dynamical phases, where purification occurs over a timescale that is exponential vs. constant in system size. We compare our analytic expressions for this microscopic model to results derived from field theories that are expected to capture such measurement-induced phase transitions, and find quantitative agreement between the two.
Sebastian Leontica, Max McGinley
2023-08-23T08:41:46Z
http://arxiv.org/abs/2308.12003v1
# Purification Dynamics in a Continuous-time Hybrid Quantum Circuit Model ###### Abstract We introduce a continuous time model of many-body quantum dynamics based on infinitesimal random unitary operations, combined with projective measurements. We consider purification dynamics in this model, where the system is initialized in a mixed state, which then purifies over time as a result of the measurements. By mapping our model to a family of effective 1D quantum Hamiltonians, we are able to derive analytic expressions that capture how the entropy of the system decays in time. Our results confirm the existence of two distinct dynamical phases, where purification occurs over a timescale that is exponential vs. constant in system size. We compare our analytic expressions for this microscopic model to results derived from field theories that are expected to capture such measurement-induced phase transitions, and find quantitative agreement between the two. ## I Introduction The continuing development of programmable quantum devices with increasing numbers of degrees of freedom has led to a great deal of interest in addressing fundamental questions regarding the dynamics of information in many-body quantum systems [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12]. In recent years, there has been a particular focus on the competition between unitary operations, which generate entanglement, and local projective measurements, which are non-unitary processes that break entanglement. Models of dynamics that feature both of these ingredients are often referred to as hybrid quantum circuits, the study of which has led to the discovery of a sharp entanglement phase transition driven by the rate of measurements, separating regimes where many-body entanglement is either stable or fragile against these measurements [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. Typically, the studied geometry is that of a 1D chain of qudits, but similar transitions have also been found in more complex geometries such as random tensor networks [25; 26; 27; 28]. The existence of this transition was first understood in terms of the entanglement structure of an ensemble of pure many-body states at equilibrium. Subsequent studies also revealed the existence of a simultaneous dynamical phase transition, which can be understood as the ability of the measurement protocol to learn an initially mixed state [29; 30; 31; 32]. The latter suggests a connection between the dynamics of hybrid quantum circuits and quantum error correcting codes [33], which by construction protect information against deleterious non-unitary processes. The transition was also shown to play an important role in the context of simulating the behaviour of open quantum systems [34; 35; 36; 37]. These considerations have led to the notion of purification dynamics, where one studies how the entropy of an initially mixed state decreases over time as a result of the measurements. Away from the critical measurement rate we find two phases where the state purifies over a timescale that increases exponentially with system size ('mixed phase') or is independent of the system size ('purifying phase'). To understand the phenomenology of these phases in a fully quantitative way, arguments based on capillary wave theory have been put forward [38]. Using an effective field theory which is expected to capture the universal features of the transition, one can obtain concrete predictions of how the purity of the system will depend on time in each phase. 
However, direct verification of these predictions by means of a direct calculation from a microscopic model are as of yet lacking. In this paper, we introduce and study a hybrid quantum circuit model of dynamics that is defined in continuous time, the properties of which we are able to calculate analytically. In particular, by means of a mapping onto an effective Hamiltonian, we are able to compute the time dependence of a particular family of operator-space entanglement measurements, which can be related to the purity of the system at a time \(t\), starting from a maximally mixed initial state. We look in detail at both the mixed and purifying phases, as well as at the transition between them. Our results agree with those of capillary wave theory in both phases: the entropy decays exponentially with time in the purifying phase, and decreases as \(-\log t\) in the mixed phase over an exponentially long time window [Eqs. (64,71)]. In our calculations, we consider both periodic and open boundary conditions, and show that the two choices give rise to quantitatively different behaviour when in the mixed phase: in particular, a \((1/2)\log N\) contribution to the entropy appears when we impose periodic boundary conditions, but this is absent for open boundary conditions. We also look at the dynamics at criticality, where there exists a regime during which the entropy decays algebraically, Eq. (77). The structure of our paper is as follows. In Section II we introduce a continuous time model of dynamics based on infinitesimal random unitary operations, and describe how one can calculate various measures of entanglement and information spreading in this model. We supplement the unitary dynamics with projective measurements in Section III, and in Section IV, we explain how the re sultant unitary-projective dynamics can be mapped onto imaginary-time evolution under an effective 1D Hamiltonian. We then present our main quantitative results in Section V, giving analytic expressions that quantify how the purity of the system increases as a function of time in the purifying/mixed phase and at criticality. Finally, we discuss our results and conclude in Section VI. ## II Continuous-time random circuit model In this section, we introduce a random unitary circuit (RUC) model of unitary dynamics, and describe how its entanglement properties can be analysed. We will later incorporate measurements into this model, which will allow us to study the dynamics of purification. We consider a one-dimensional array of \(N\) qudits, each with a local Hilbert space of dimension \(d\). The evolution is driven by a spatially local unitary circuit with a brick-work structure, illustrated in Fig. 1. In a given timestep \(\tau=1,2,\ldots\), two-site unitaries are applied to each pair of qudits on the odd bonds \((2j-1,2j)\), followed by another layer of unitaries on the even bonds \((2j,2j+1)\). These elementary two-site unitaries each have the same structure, also depicted in Fig. 1. First, single-site gates \(U\otimes V\) are applied, followed by evolution under some two-qudit Hamiltonian \(H\) for a time \(\Delta t\), and finally the change of basis is undone by applying the inverse single-site rotations \(U^{\dagger}\otimes V^{\dagger}\). We denote the unitary operator describing the evolution from time \(0\) to \(\tau\) as \(W(\tau)\). Throughout this work, \(H\) will be treated as a free parameter of the model and it is kept fixed across both time and space. 
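The elementary two-site block described above is easy to sample explicitly. The following is a minimal numpy/scipy sketch of one such gate; the toy Hamiltonian at the bottom is our own choice for illustration, standing in for the fixed two-qudit \(H\) of the model.

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import unitary_group

def two_site_gate(H: np.ndarray, dt: float, d: int) -> np.ndarray:
    """One elementary block of the brick-work circuit: Haar-random single-qudit
    rotations U (x) V, evolution under the fixed two-qudit Hamiltonian H for a
    time dt, then the inverse rotations (U (x) V)^dagger."""
    U = unitary_group.rvs(d)
    V = unitary_group.rvs(d)
    R = np.kron(U, V)                               # d^2 x d^2 change of basis
    return R.conj().T @ expm(-1j * dt * H) @ R

# Toy example with qubits (d = 2) and a random real symmetric two-qudit H:
d, dt = 2, 0.05
H = np.random.default_rng(0).normal(size=(d * d, d * d))
H = (H + H.T) / 2
gate = two_site_gate(H, dt, d)
assert np.allclose(gate @ gate.conj().T, np.eye(d * d), atol=1e-10)
```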
To simplify calculations, we will assume it is real, hermitian and symmetric under swapping the two qudits it acts on. The single-qudit unitaries will be sampled randomly and independently from the Haar ensemble for each unit cell. We will generally be interested in the limit where \(\Delta t\to 0\), which we refer to as the continuous-time limit. Note that the state only evolves by an infinitesimal amount in each timestep, in contrast to discrete-time RUC models of quantum dynamics (e.g. Refs. [6; 39]). A model of continuous-time dynamics was studied numerically in [40]. Our method for constructing the unit cell is more general and more easily amenable to analytical treatment. Our focus will be on the dynamics of entanglement and quantum information in these continuous-time models. For this purpose, it is useful to consider the Choi-Jamiolkowski (CJ) state \(\ket{W(\tau)}\) corresponding to the unitary \(W(\tau)\). This state is defined on two copies of the system, which we can associate with the inputs and outputs of the time evolution operator. Formally, we have \(\ket{W(\tau)}=\left[\mathbb{I}\otimes W(\tau)\right]\ket{\Phi^{+}}\), where \(\ket{\Phi^{+}}=\bigotimes_{j=1}^{N}(d^{-1/2}\sum_{a=1}^{d}\ket{a}\otimes\ket{ a})\) consists of maximally entangled states between each input qudit and its corresponding output [41]. Many important quantities that are used to diagnose the spreading of quantum information can be expressed as simple functions of this operator-state \(\ket{W(\tau)}\)[5], and in particular we will find this representation useful when it comes to studying purification dynamics. As is now common in studies of RUC dynamics, we use the Renyi entropies to quantify the entanglement properties of the state \(\ket{W(\tau)}\) \[S^{(n)}(\rho_{A})=\frac{1}{1-n}\log\tr\left(\rho_{A}^{n}\right), \tag{1}\] where \(n\) is some positive parameter. Here, \(\rho_{A}\) is the reduced density matrix of \(\ket{W(\tau)}\) corresponding to some subset \(A\) of inputs and outputs. Compared to the usual von Neumann entropy \(S_{\text{VN}}\), the Renyi entropies for \(n=2,3,\ldots\) are more amenable to analytic studies, since they only involve integer moments of the density matrix and hence can be computed using a replica method. The von Neumann entanglement entropy \(S_{\text{VN}}\) can be recovered by constructing an analytical continuation of the function and taking the limit \(n\to 1\) (see, e.g. Ref. [42]). For the largest part of this work, we will only be concerned with the second Renyi entropy \(S^{(2)}(\rho_{A})\), which is the simplest to evaluate. This is a lower bound on the von Neumann entropy \(S_{\text{VN}}\geq S^{(2)}\), which in certain cases is known to be asymptotically tight [43]. Since the purity \(\Tr[\left(\rho^{A}\right)^{2}]\equiv\exp[-S^{(2)}(\rho^{A})]\) is a quadratic function of \(\ket{W(\tau)}\bra{W(\tau)}\), it can be expressed using a fourfold copy of the evolution operator, which we denote \[\mathbf{W}^{(2)}(\tau)\coloneqq(W(\tau)\otimes W^{*}(\tau))^{\otimes 2}. \tag{2}\] Note here that the operator replicated in the expression differs from \(\ket{W(\tau)}\bra{W(\tau)}\) by a reshuffling of the indices. Henceforth, we will use this convention, but retain the essence of the CJ isomorphism by noting that we treat inputs and outputs on par when discussing Renyi entropies. 
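To make the quantities just defined concrete, here is a small self-contained numpy sketch that builds the CJ state of an evolution operator \(W\) (a Haar-random unitary is used as a stand-in for the circuit) and evaluates the second Renyi entropy of an arbitrary subset of input/output legs. The function and index conventions are ours.

```python
import numpy as np
from scipy.stats import unitary_group

def renyi2_of_CJ_subset(W: np.ndarray, d: int, N: int, A: list) -> float:
    """Second Renyi entropy S^(2) of the reduced state of the CJ state |W>
    on a subset A of legs.  Legs 0..N-1 are the input qudits, legs N..2N-1
    are the output qudits."""
    # Amplitudes: psi(a_in, b_out) = W[b, a] / d^(N/2)
    psi = (W.T / d ** (N / 2)).reshape([d] * (2 * N))
    B = [k for k in range(2 * N) if k not in A]
    psi = np.transpose(psi, A + B).reshape(d ** len(A), d ** len(B))
    rho_A = psi @ psi.conj().T
    purity = np.trace(rho_A @ rho_A).real
    return -np.log(purity)

# Example: N = 3 qubits (d = 2), Haar-random W, A = {input 0, output 2}.
d, N = 2, 3
W = unitary_group.rvs(d ** N, random_state=0)
print(renyi2_of_CJ_subset(W, d, N, A=[0, N + 2]))
```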
Define (unnormalized) states \[\ket{\mathbf{I}}_{j} =\sum_{a,b=1}^{d}\Big{(}\ket{a}\otimes\ket{a}\otimes\ket{b} \otimes\ket{b}\Big{)}_{j}, \tag{3}\] \[\ket{\mathbf{S}}_{j} =\sum_{a,b=1}^{d}\Big{(}\ket{a}\otimes\ket{b}\otimes\ket{b} \otimes\ket{a}\Big{)}_{j}, \tag{4}\] which live in the fourfold-replicated Hilbert space of each physical site \(j\). In terms of these, we have \[e^{-S^{(2)}}=\Tr[\left(\rho^{A}\right)^{2}]=\bra{\Psi_{A_{\text{out}}}}\mathbf{ W}^{(2)}(\tau)\ket{\Psi_{A_{\text{in}}}}, \tag{5}\] where we denote the set of input (output) sites included in the region \(A\) as \(A_{\text{in}}\) (\(A_{\text{out}}\)), and the states \[\ket{\Psi_{A_{\text{in}}}}=\left(\bigotimes_{j\in A_{\text{in}}}\ket{\mathbf{ S}}_{j}\right)\otimes\left(\bigotimes_{j\notin A_{\text{in}}}\ket{\mathbf{ I}}_{j}\right), \tag{6}\] and similar for \(\ket{\Psi_{A_{\text{out}}}}\). To make progress, we look at the average of the Renyi entropy (5) over the random ensemble of unitary circuits. More precisely, we will evaluate the average purity as opposed to the average entropy, which is equivalent to performing averages inside the logarithm of Eq. (1). This simplification is common in analyses of RUCs [17], and still recovers the correctly-averaged von Neumann entropy if one takes the replica limit \(n\to 1\). Accordingly, averaging the purity amounts to replacing \(\mathbf{W}^{(2)}(\tau)\) with its ensemble average \(\overline{\mathbf{W}^{(2)}(\tau)}\). As shown in App. A, \(\overline{\mathbf{W}^{(2)}(\tau)}\) maps states spanned by tensor products of \(\ket{\mathbf{I}}_{j}\), \(\ket{\mathbf{S}}_{j}\) to other such states, meaning we can focus on the restriction of this averaged operator to the subspace \(V(S_{2})^{\otimes N}\), where \(V(S_{2})=\mathrm{span}(\ket{\mathbf{I}},\ket{\mathbf{S}})\). Because the single-site Haar-random unitaries appearing in each of the two-site elementary blocks of the circuit [Fig. 1(b)] are sampled independently, we can consider the ensemble average of the evolution under a single one of these blocks, which we denote \(\mathcal{T}:V(S_{2})^{\otimes 2}\to V(S_{2})^{\otimes 2}\). Using the Weingarten diagrammatic calculus as seen in App. A, we find that, for small \(\Delta t\), this map can be expressed as \[\mathcal{T}_{ij}=1-\Delta t^{2}\Omega(H)\mathcal{W}g^{(i)}\mathcal{W}g^{(j)} \left(1-\sigma_{z}^{(i)}\sigma_{z}^{(j)}\right)+\mathcal{O}(\Delta t^{4}), \tag{7}\] where \(i,j\) label the sites on which the unit cell acts, \(\Omega(H)\) is a measure of the entangling power of the Hamiltonian \[\Omega(H)=d^{2}\operatorname{tr}(H^{2})-2d\operatorname{tr}(\operatorname{tr }_{1}(H)^{2})+\operatorname{tr}(H)^{2}, \tag{8}\] and \(\mathcal{W}g\) is the Weingarten matrix corresponding to the symmetric group \(S_{2}\) \[\mathcal{W}g=\frac{1}{d(d^{2}-1)}\begin{bmatrix}d&-1\\ -1&d\end{bmatrix}=\frac{1}{d^{2}-1}\left(1-\frac{\sigma_{x}}{d}\right). \tag{9}\] The induced evolution can be equivalently described using the effective imaginary-time Hamiltonian \[\mathcal{H}_{ij}=\Omega(H)\mathcal{W}g^{(i)}\mathcal{W}g^{(j)}\left(1-\sigma_ {z}^{(i)}\sigma_{z}^{(j)}\right), \tag{10}\] in terms of which the unit cell map is \[\mathcal{T}_{ij}=e^{-\Delta t^{2}\mathcal{H}_{ij}}+\mathcal{O}(\Delta t^{4}). \tag{11}\] It is interesting to note that there are no contributions from odd powers of \(\Delta t\) in the expansion of Eq. 7. 
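The objects appearing in Eqs. (8)-(10) are small enough to write down explicitly. Below is a minimal numpy sketch working in the two-dimensional reduced space per site with basis \((\ket{\mathbf{I}},\ket{\mathbf{S}})\); the toy Hamiltonian at the end is our own choice, and per Eqs. (7) and (11) the averaged unit cell is then \(1-\Delta t^{2}\mathcal{H}_{ij}\) up to \(\mathcal{O}(\Delta t^{4})\).

```python
import numpy as np

# Two-dimensional reduced space per site, basis (|I>, |S>).
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def weingarten(d: int) -> np.ndarray:
    """Eq. (9): Wg = (1 - sigma_x / d) / (d^2 - 1)."""
    return (I2 - sx / d) / (d ** 2 - 1)

def entangling_power(H: np.ndarray, d: int) -> float:
    """Eq. (8): Omega(H) = d^2 tr(H^2) - 2 d tr(tr_1(H)^2) + tr(H)^2."""
    H4 = H.reshape(d, d, d, d)            # row index (i, j), column index (k, l)
    tr1 = np.einsum('ijik->jk', H4)       # partial trace over the first qudit
    return d ** 2 * np.trace(H @ H) - 2 * d * np.trace(tr1 @ tr1) + np.trace(H) ** 2

def effective_two_site_H(H: np.ndarray, d: int) -> np.ndarray:
    """Eq. (10): Omega(H) Wg^(i) Wg^(j) (1 - sz^(i) sz^(j)), a 4 x 4 matrix."""
    Wg = weingarten(d)
    return entangling_power(H, d) * np.kron(Wg, Wg) @ (np.eye(4) - np.kron(sz, sz))

# Example with qutrits (d = 3) and a toy real symmetric H:
d = 3
H = np.random.default_rng(1).normal(size=(d * d, d * d))
H = (H + H.T) / 2
H_eff = effective_two_site_H(H, d)
```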
Looking at the form of the Hamiltonian (10), we see that in the effective Hilbert space spanned by the states (3, 4), the only mobile degrees of freedom are domain walls separating regions of \(\ket{\mathbf{I}}\) from \(\ket{\mathbf{S}}\), consistent with discrete-time RUCs discussed previously [39; 6]. The initial Hamiltonian \(H\) only enters the expression through its entangling rate \(\Omega(H)\). This sets the overall timescale of quantum information transfer through the system. In App. B, we compute the transfer matrix for a higher number of replicas and show that this statement holds more generally. This result suggests that the qualitative behavior of entanglement dynamics derived from our model should be insensitive to most of the microscopic details, and hence applicable to a wide range of physical processes. Figure 1: Schematic representation of the random circuit geometry for open boundary conditions. The construction of the unit cells is illustrated on the right. The Hamiltonian \(H\) and evolution time \(\Delta t\) are kept fixed, but the random unitaries \(U,V\) are sampled independently at each spacetime location in the circuit. The propagator for the whole circuit \(\mathcal{T}\) can be constructed by concatenating the two-site maps (11) according to the brickwork circuit structure illustrated in Fig. 1. We have \[\mathcal{T}(\tau)=\left(\prod_{\begin{subarray}{c}i=2\\ i\,\mathrm{even}\end{subarray}}^{N-2}e^{-\Delta t^{2}\mathcal{H}_{i,i+1}}\prod_{\begin{subarray}{c}i=1\\ i\,\mathrm{odd}\end{subarray}}^{N-1}e^{-\Delta t^{2}\mathcal{H}_{i,i+1}}\right)^{\tau}+\mathcal{O}(\tau\Delta t^{4}). \tag{12}\] We now define the effective time as \(t=\tau\Delta t^{2}\) and take the limit \(\tau\to\infty\), \(\Delta t\to 0\) such that \(t\) is kept constant. Using the Suzuki-Trotter formula, we find the limit of the previous equation \[\mathcal{T}(t)=\exp\left(-t\sum_{i=1}^{N-1}\mathcal{H}_{i,i+1}\right). \tag{13}\] We reiterate here that this operator acts as the restriction of \(\overline{\mathbf{W}^{(2)}(\tau)}\) to its invariant subspace \(V(S_{2})^{\otimes N}\), and therefore may replace it in average entropy calculations (e.g. averaging Eq. 5). In its current form, the effective Hamiltonian \(\sum_{i=1}^{N-1}\mathcal{H}_{i,i+1}\) is not Hermitian, but can be made so through a local similarity transformation, a technique commonly encountered in the study of non-equilibrium dynamics [44]. If we define a new evolution operator by \(\hat{\mathcal{T}}=(\mathcal{W}g^{-\frac{1}{2}})^{\otimes N}\mathcal{T}(\mathcal{W}g^{\frac{1}{2}})^{\otimes N}\), each 2-local term in the effective Hamiltonian transforms as \(\hat{\mathcal{H}}_{ij}=(\mathcal{W}g^{-\frac{1}{2}})^{\otimes 2}\mathcal{H}_{ij}(\mathcal{W}g^{\frac{1}{2}})^{\otimes 2}\), which gives us the Hermitian interaction \[\begin{split}\hat{\mathcal{H}}_{ij}&=\frac{\gamma}{2}\left[1-\frac{d^{2}-1}{d^{2}}\sigma_{z}^{(i)}\sigma_{z}^{(j)}\right.\\ &\left.+\frac{1}{d^{2}}\sigma_{x}^{(i)}\sigma_{x}^{(j)}-\frac{1}{d}(\sigma_{x}^{(i)}+\sigma_{x}^{(j)})\right],\end{split} \tag{14}\] where the overall strength is given by \(\gamma=2\Omega(H)/(d^{2}-1)^{2}\). This type of local interaction is found in the literature both as the quantum equivalent of the classical two-dimensional axial next-nearest neighbor Ising model (ANNNI) [45; 46] and, more recently, as the Jordan-Wigner transform of the balanced interacting Kitaev chain [47; 48].
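Continuing the previous sketch (and reusing `sx`, `sz`, `I2`, `weingarten`, `entangling_power` and `effective_two_site_H` defined there), one can check numerically that the similarity transformation indeed reproduces the closed form (14); the check below uses our own toy Hamiltonian.

```python
from scipy.linalg import inv, sqrtm

def hermitized_two_site_H(H: np.ndarray, d: int) -> np.ndarray:
    """Similarity transform (Wg^{-1/2} (x) Wg^{-1/2}) H_ij (Wg^{1/2} (x) Wg^{1/2})."""
    S = np.kron(sqrtm(weingarten(d)), sqrtm(weingarten(d)))
    return inv(S) @ effective_two_site_H(H, d) @ S

def closed_form_eq14(H: np.ndarray, d: int) -> np.ndarray:
    """Eq. (14), with gamma = 2 Omega(H) / (d^2 - 1)^2."""
    gamma = 2 * entangling_power(H, d) / (d ** 2 - 1) ** 2
    return (gamma / 2) * (np.eye(4)
                          - (d ** 2 - 1) / d ** 2 * np.kron(sz, sz)
                          + np.kron(sx, sx) / d ** 2
                          - (np.kron(sx, I2) + np.kron(I2, sx)) / d)

d = 4
H = np.random.default_rng(2).normal(size=(d * d, d * d))
H = (H + H.T) / 2
assert np.allclose(hermitized_two_site_H(H, d), closed_form_eq14(H, d))
```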
In the limit of \(d\to\infty\) we are left with a simple ferromagnetic nearest-neighbor Hamiltonian, with each domain wall incurring an energy penalty of \(\gamma\). The Hamiltonian is symmetric under the global spin-flip operator \(\mathcal{C}=\prod_{i}\sigma_{x}^{(i)}\), as can be seen through the commutation relation \([\mathcal{C},\mathcal{H}_{ij}]=0\). ## III Including measurements In this section, we will introduce the formalism that can be used to incorporate measurements into the random circuit evolution. For the purpose of this work, we will consider projective measurements in the computational basis of each qudit that occur stochastically. The same framework can accommodate weak-measurement schemes as seen in Ref. [17]. Due to the continuous nature of our circuits, the effective model will be identical in the two cases. A projective measurement is a non-unitary stochastic process, where the wavefunction of the system \(\ket{\psi}\) collapses to a post-measurement state \(\ket{m}\) with probability \(p_{m}=|\bra{m}\ket{\psi}|^{2}\). Here, the set of wavefunctions \(\{\ket{m}\}\) is the computational basis in which the measurement is performed and \(m=1,\,2,\ldots d\). For any fixed realisation of the random unitary circuit and positioning of the measurements, the final state of the system will depend on all the measurement outcomes \(\mathbf{m}=(m_{1},\,m_{2},\ldots)\). Thus, we can write the ensemble of final states as \(\{(p_{\mathbf{m}},\ket{W_{\mathbf{m}}})\}\), where \(p_{\mathbf{m}}\) is the joint probability of the measurement results, and \(\ket{W_{\mathbf{m}}}\) is the (normalized) conditional state. As before, we will imagine the Choi-Jamiolkowski state, so \(\ket{W_{\mathbf{m}}}\) is a state on a twofold copy of the system, and is constructed by preparing a maximally entangled state between the two copies in the computational basis, and evolving one of the copies under the evolution in question. As is typical in the study of hybrid quantum circuits, our interest is in the statistics of the entanglement properties of individual conditional wavefunctions \(\ket{W_{\mathbf{m}}}\); see, e.g. Refs. [14; 29]. The natural quantity to consider for this purpose is the von Neumann entropy, \(S_{\mathrm{vN}}(\rho_{\mathbf{m}}^{A})\), where \(\rho_{\mathbf{m}}^{A}\) is the reduced density matrix of \(\ket{W_{\mathbf{m}}}\) over a subset of inputs and outputs \(A\). Specifically, we would want to compute the average of this quantity over all realizations of the random circuit, measurement locations and measurement results, which we denote \(\overline{S_{\mathrm{vN}}(\rho_{\mathbf{m}}^{A})}\). However, this quantity is very difficult to compute directly in random circuit models. We follow Ref. [17] and introduce the series of related quantities \[\tilde{S}_{A}^{(n)}=\frac{1}{1-n}\log\Bigg{|}\frac{\sum_{\{M\}}p_{M}d^{|M|(n-1)}\overline{\sum_{\mathbf{m}}p_{\mathbf{m}}^{n}\operatorname{tr}[(\rho_{\mathbf{m}}^{A})^{n}]}}{\sum_{\{M\}}p_{M}d^{|M|(n-1)}\overline{\sum_{\mathbf{m}}p_{\mathbf{m}}^{n}}}\Bigg{|}, \tag{15}\] where \(M\) labels a particular configuration of measurement locations in spacetime, which occurs with probability \(p_{M}\), and \(\mathbf{m}\) runs over all measurement results for the given configuration \(M\). These quantities are related to measurement-averaged Renyi entropies, with the main difference that each outcome is weighted by \(p_{\mathbf{m}}^{n}\).
The additional factor of \(d^{|M|(n-1)}\) ensures that the correct order of magnitude, in powers of \(d\), of the correct weight is preserved, and only deviations from it are amplified by the number of replicas. The renormalization is also performed on average, i.e. we compute the average of the numerator and the denominator independently. Knowledge of this quantity for all integers \(n\geq 2\) can be used to recover the average entanglement entropy \(\overline{\tilde{S}_{A}}\) of subsystem \(A\) using the replica limit \[\overline{S_{\mathrm{vN}}(\rho_{\mathbf{i}}^{A})}=\lim_{n\to 1}\tilde{S}_{A}^{(n)}. \tag{16}\] Each term in the sums over \(M\) appearing in the numerator and the denominator in Eq. (15) is a scalar that depends linearly on the tensor \[\overline{\langle\mathbf{W}^{(n)}\rangle}=d^{|M|(n-1)}\overline{\sum_{ \mathbf{m}}p_{\mathbf{m}}^{n}\left(W_{m}\otimes W_{m}^{*}\right)^{\otimes n}}, \tag{17}\] which is analogous to the duplicated state in Eq. (2) defined earlier. The angled brackets are a short-hand notation for the weighted sum on the RHS. For simplicity, we once again revert to the normal operator indices, but keep in mind that the probabilities are obtained from expectation values of the appropriate projectors in the CJ state \(\ket{W_{m}}\). To see how this tensor evolves as the circuit progresses, let us consider how \(\overline{\langle\mathbf{W}^{(n)}\rangle}\) is updated when a new measurement is performed on site \(i\), the outcome of which we denote \(m\). Each of the \(n\) replicas transforms via the action of a projector \(P_{m}^{(i)}\), which corresponds to the \(m\)th computational basis state for qudit \(i\). Since all measurement outcomes \(m\) are summed over in Eq. (17), we find \[\overline{\langle\mathbf{W}^{(n)}\rangle}\to d^{n-1}\mathcal{M}_{i}( \overline{\langle\mathbf{W}^{(n)}\rangle})=d^{n-1}\sum_{m}(P_{m}^{(i)})^{ \otimes 2n}\overline{\langle\mathbf{W}^{(n)}\rangle}. \tag{18}\] The effect of post-selection is included by assuming a perfect correlation of the measurement results in all \(n\) replicas. Since adding an infinitesimal time evolution to the averaged tensor only leads to linear transformations by left multiplication due to both the chaotic dynamics and the measurements, we can proceed again by mapping the evolution of \(\overline{\langle\mathbf{W}^{(n)}\rangle}\) to a reduced system with an effective Hamiltonian. If we focus again on the twofold replica \(n=2\), we see that the action of the measurement opera tor on the reduced Hilbert space at each site is \[\mathcal{M}\left|\mathbf{I}\right\rangle=d\sum_{m}P_{m}^{\otimes 4} \left|\mathbf{I}\right\rangle=d\sum_{m}\left|m\right\rangle^{\otimes 4}:=\left| \mathbf{O}\right\rangle,\] \[\mathcal{M}\left|\mathbf{S}\right\rangle=d\sum_{m}P_{m}^{\otimes 4} \left|\mathbf{S}\right\rangle=\left|\mathbf{O}\right\rangle, \tag{19}\] \[\mathcal{M}\left|\mathbf{O}\right\rangle=d\sum_{m}P_{m}^{\otimes 4} \sum_{n}\left|n\right\rangle^{\otimes 4}=\left|\mathbf{O}\right\rangle.\] Therefore, we find that the new vector space \(V_{\mathcal{M}}(S_{2})=\mathrm{span}(\left|\mathbf{I}\right\rangle,\left| \mathbf{S}\right\rangle,\left|\mathbf{O}\right\rangle)\) is closed under measurements. If we promote this to the reduced Hilbert space of the entire chain \(V_{\mathcal{M}}^{\otimes N}(S_{2})\), we find that this is also closed under the action of the Haar averaged unit cell between any pair of sites. 
To show this, we can consider the properties of the following linear combination \[\left|\mathbf{X}\right\rangle:=\left|\mathbf{O}\right\rangle-\frac{d}{d+1}( \left|\mathbf{I}\right\rangle+\left|\mathbf{S}\right\rangle)\in V_{\mathcal{M} }(S_{2}). \tag{20}\] It is straightforward to show that this becomes null under any contraction between a normal and a complex conjugate leg. Due to the rules of the Weingarten calculus, this means that such local states are preserved by averaged unit cells. This is summarised in the following equation \[\mathcal{T}_{ij}^{\mathcal{M}}\left|\mathbf{X}\right\rangle\otimes V_{ \mathcal{M}}(S_{2})\in\left|\mathbf{X}\right\rangle\otimes V_{\mathcal{M}}(S_ {2}). \tag{21}\] In Appendix C we give an explicit representation of the new operator \(\mathcal{T}_{ij}^{\mathcal{M}}\), acting on \(V_{\mathcal{M}}(S_{2})^{\otimes 2}\). We find that evolution in subspaces that contain \(\left|\mathbf{X}\right\rangle\) states happen at a different rate \(\Gamma\), independent of the rate of information propagation \(\gamma\). This is defined by \[\Gamma=\frac{2d}{(d^{2}-1)^{2}}\operatorname{tr}(\operatorname{tr}_{1}(H)^{2}), \tag{22}\] and can be qualitatively understood as an energy cost associated with \(\left|\mathbf{X}\right\rangle\) states. In App. D we derive a more explicit relation between the rates \(\Gamma,\,\gamma\) and the microscopic Hamiltonian \(H\). The new state \(\left|\mathbf{X}\right\rangle\), which appears after a measurement, ensures that we obtain the correct correlations between measurements performed consecutively at short time intervals on the same qudit. The timescale \(1/\Gamma\) represents the time it takes a qudit to relax before we can obtain new information by measuring again in the same basis. For the rest of this work, we set \(\Gamma\to\infty\), such that no measurement inertia can be observed. In App. C, we show that doing so is effectively equivalent to projecting out the \(\left|\mathbf{X}\right\rangle\) state and working in the previous 2-dimensional reduced Hilbert space \(V(S_{2})\). The action of the measurements is also projected onto this subspace and can be expressed as \[\mathcal{M}=\frac{d}{d+1}(1+\sigma_{x}). \tag{23}\] It can be shown that this same operator is obtained in the reduced subspace if we consider instead measurements in random bases. In the following, the measurements are distributed through the circuit according to an independent Poisson process for each site, at some uniform rate \(f\) (in the natural time units of the continuous model). The transfer matrix at time \(t\) under both random dynamics and measurements is then given by an effective imaginary-time evolution \(\mathcal{T}_{\mathrm{eff}}(t)=\exp(-t\mathcal{H}_{\mathrm{eff}})\), with \(\mathcal{H}_{\mathrm{eff}}\) given by \[\mathcal{H}_{\mathrm{eff}}=\sum_{i=1}^{N-1}\mathcal{H}_{i,i+1}-f\sum_{i=1}^{N }\left(\mathcal{M}_{i}-1\right). \tag{24}\] From Eq. 5 and Eq. 15 we see that we can express the second moment of the entanglement entropy of some subregion \(A\) at time \(t\) using matrix elements of the transfer matrix \[\tilde{S}_{A}^{(2)}=-\log\Bigg{|}\frac{\left\langle\Psi_{A_{\mathrm{sut}}} \right|\mathcal{T}_{\mathrm{eff}}(t)\left|\Psi_{A_{in}}\right\rangle}{\left< \mathbf{I}\right|^{\otimes N}\mathcal{T}_{\mathrm{eff}}(t)\left|\Psi_{A_{in}} \right\rangle}\Bigg{|}. \tag{25}\] The denominator acts as a normalization factor, so using the expression above allows us to safely drop constant terms in the effective Hamiltonian. 
We can perform a similar analysis for the case of multiple replicas. Using the results in App. B and the limits \(d,\Gamma\to\infty\) we show that the effective Hamiltonian of the \(n\)'th replica theory is given by \[\mathcal{H}_{\mathrm{eff}}^{(n)}=\frac{\gamma}{2}\sum_{i=1}^{N-1}D_{ij}-f\sum_ {i=1}^{N}\mathcal{M}^{(n)}, \tag{26}\] where \(\mathcal{M}^{(n)}\) is the generalization of the operator in Eq. (23) that acts as \[\mathcal{M}^{(n)}\left|\tau\right\rangle=\sum_{\sigma\in S_{n}}\left|\sigma \right\rangle, \tag{27}\] and \(D\) is a diagonal two-site operator with entries given by \[D_{\kappa\epsilon,\sigma\tau}=\delta_{\kappa\sigma}\delta_{\epsilon\tau}D( \sigma,\tau), \tag{28}\] with \(D(\sigma,\tau)\) the bi-invariant metric on \(S_{n}\) given by the Hamming distance between \(\sigma\) and \(\tau\), i.e. the number of elements that are not mapped onto themselves under \(\tau^{-1}\sigma\). This form is manifestly consistent with the expected symmetry group \(S_{n}\times S_{n}\). It is interesting to note that the \(d\to\infty\) limit does not result in the fine tuned \(S_{n!}\)-symmetric Potts model observed in circuits with fully Haar random unit cells [17]. ## IV Fermionic mapping If we take the limit of large local dimension \(d\) and keep only the leading contributions, we obtain dynamics driven by \[\mathcal{H}_{\mathrm{eff}}=-\frac{\gamma}{2}\left(\sum_{i=1}^{N-1}\sigma_{z}^ {(i)}\sigma_{z}^{(i+1)}+g\sum_{i=1}^{N}\sigma_{x}^{(i)}\right), \tag{29}\] where \(g=2f/\gamma\). This is easily recognized as the transverse field Ising model (TFIM) in 1D, subject to open boundary conditions. It is well-known that this can be mapped to a system of non-interacting fermions using the Jordan-Wigner transformation [49]. In this section, we will introduce the general formalism used to compute quantities of the form shown in Eq. 25. We start by constructing a set of non-local Majorana operators as \[\gamma_{i}^{(1)} =\sigma_{z}^{(i)}\prod_{j>i}\sigma_{x}^{(j)}, \tag{30}\] \[\gamma_{i}^{(2)} =\sigma_{y}^{(i)}\prod_{j>i}\sigma_{x}^{(j)}=-i\sigma_{z}^{(i)} \prod_{j\geq i}\sigma_{x}^{(j)}, \tag{31}\] defined on all sites \(i=1,2\ldots N\)[50]. These operators are Hermitian \((\gamma^{\mu})^{\dagger}=\gamma^{\mu}\) and obey the standard anti-commutation relations \[\{\gamma^{\mu},\gamma^{\nu}\}=2\delta^{\mu\nu}, \tag{32}\] where the indices \(\mu\), \(\nu\) are understood to run over all \(2N\) previously defined operators. From the definition, we get the additional relation \[\gamma_{i}^{(1)}\gamma_{i}^{(2)}=-i\sigma_{x}^{(i)}, \tag{33}\] such that the product of all Clifford operators is \[\prod_{i=1}^{N}\gamma_{i}^{(1)}\gamma_{i}^{(2)}=(-i)^{N}\prod_{i=1}^{N}\sigma _{x}^{(i)}\coloneqq(-i)^{N}\mathcal{C}. \tag{34}\] This operator anti-commutes with all the Majorana fermions \(\{\mathcal{C},\gamma^{\mu}\}=0\) and it is a conserved quantity, since it commutes with the full Hamiltonian \([\mathcal{C},\mathcal{H}]=0\). We can couple Majorana fermions living on adjacent sites into domain wall creation and annihilation operators \[a_{i}^{\dagger} =\frac{1}{2}\left(\gamma_{i}^{(1)}-i\gamma_{i+1}^{(2)}\right), \tag{35}\] \[a_{i} =\frac{1}{2}\left(\gamma_{i}^{(1)}+i\gamma_{i+1}^{(2)}\right), \tag{36}\] for \(i=0,1\ldots N-1\), where we assume periodic boundary condition \(N=0\). These obey the typical anti-commutation relations \[\{a_{i},a_{j}\}=0,\quad\left\{a_{i}^{\dagger},a_{j}^{\dagger}\right\}=0,\quad \left\{a_{i},a_{j}^{\dagger}\right\}=\delta_{ij}. 
\tag{37}\] A simple calculation shows that \[a_{i}^{\dagger}a_{i}=\frac{1}{2}\left(1-\sigma_{z}^{(i)}\sigma_{z}^{(i+1)} \right), \tag{38}\] such that the number operator of the fermionic mode at some site \(i\neq 0\) is a projector onto configurations that have a domain wall between sites \(i\) and \(i+1\). With this convention, the Hamiltonian becomes a quadratic form \[\mathcal{H}_{\text{eff}}=\frac{\gamma}{2}\left[\sum_{i=1}^{N-1}a_{i}^{ \dagger}a_{i}-g\sum_{i=1}^{N}\left(a_{i}^{\dagger}+a_{i}\right)\left(a_{i-1}- a_{i-1}^{\dagger}\right)\right]. \tag{39}\] This can be more succinctly expressed using the Bogoliubov-de Gennes notation \[\mathcal{H}_{\text{eff}}=\frac{1}{2}\mathbf{a}^{\dagger}\mathcal{D}\mathbf{a}, \tag{40}\] where \(\mathbf{a}=(a_{0},\,a_{1},\,\ldots\,,\,a_{N-1},\,a_{0}^{\dagger},\,a_{1}^{ \dagger},\,\ldots\,,\,a_{N-1}^{\dagger})^{T}\). The matrix \(\mathcal{D}\) is called the grand-dynamical matrix and it obeys the particle-hole symmetry equation \[\eta\mathcal{D}^{T}\eta=-\mathcal{D},\,\text{where}\,\,\eta=\begin{bmatrix}0 &I\\ I&0\end{bmatrix}. \tag{41}\] The exponentials of such Hamiltonians are most easily treated using the algebra of fermionic Gaussian states, as worked out in Ref. [51]. We will briefly outline some of the results relevant to our calculation. It is convenient to define a Gaussian state through its generating quadratic form as \[\rho[W]=\frac{1}{Z(W)}\exp\left(\frac{1}{2}\mathbf{a}^{\dagger}W\mathbf{a} \right), \tag{42}\] with a normalisation constant \(Z(W)\) chosen such that \(\operatorname{Tr}\rho[W]=1\). By Wick's theorem, such many-body states are fully characterized by their two-body correlation matrix, defined as \[\Gamma_{\mu\nu}=2\operatorname{Tr}\left(\rho[W]\mathbf{a}_{\mu}^{\dagger} \mathbf{a}_{\nu}\right)-\delta_{\mu\nu}. \tag{43}\] The correlation matrix is related to the generator of the quadratic form through the useful relations \[\Gamma=\tanh\left(\frac{W}{2}\right),\,e^{W}=\frac{1+\Gamma}{1-\Gamma}, \tag{44}\] where it is assumed that \(1-\Gamma\) is invertible. Using a special case of the Baker-Campbell-Hausdorff formula, it is shown that fermionic Gaussian states are closed under multiplication and we have \[\rho[\Omega]=\frac{Z(W)Z(W^{\prime})}{Z(\Omega)}\rho[W]\rho[W^{\prime}], \tag{45}\] with the new generating matrix \(\Omega\) given by \[\Omega=\log(\exp(W)\exp(W^{\prime})). \tag{46}\] If we denote the correlation matrix of \(\Omega\) by \(\Gamma\times\Gamma^{\prime}\), with \(\Gamma\) and \(\Gamma^{\prime}\) the correlation matrices of \(W\) and \(W^{\prime}\) respectively, the following formula is proven in Ref. [51] \[\Gamma\times\Gamma^{\prime}=1-\left(1-\Gamma^{\prime}\right)\frac{1}{1+\Gamma \Gamma^{\prime}}(1-\Gamma). \tag{47}\] Inner products can be easily computed using the following trace formula \[\{\Gamma,\Gamma^{\prime}\}=\operatorname{Tr}\left(\rho[W]\rho[W^{\prime}] \right)=\pm\sqrt{\left|\det\frac{1+\Gamma\Gamma^{\prime}}{2}\right|}, \tag{48}\] where the ambiguity of the sign is in general a complex issue, but this will not be a problem for our purposes. Dynamics of purification In the preceding sections, we developed a formalism that allows us to study entanglement dynamics in our continuous-time random quantum circuit models. Here, we focus specifically on purification dynamics in these models. Namely, starting from an initial mixed state, we are interested in how fast the state of the system is purified by measurements. 
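The Gaussian-state composition rule (47) and the trace formula (48) introduced above, which are used repeatedly in what follows, translate directly into numpy. This is a minimal sketch; the sign ambiguity of Eq. (48) is dropped, as discussed above.

```python
import numpy as np

def compose(Gamma: np.ndarray, Gamma_p: np.ndarray) -> np.ndarray:
    """Eq. (47): correlation matrix of the product rho[W] rho[W']."""
    I = np.eye(Gamma.shape[0])
    return I - (I - Gamma_p) @ np.linalg.inv(I + Gamma @ Gamma_p) @ (I - Gamma)

def overlap(Gamma: np.ndarray, Gamma_p: np.ndarray) -> float:
    """Eq. (48): |Tr(rho[W] rho[W'])| = sqrt(|det((1 + Gamma Gamma') / 2)|)."""
    I = np.eye(Gamma.shape[0])
    return float(np.sqrt(abs(np.linalg.det((I + Gamma @ Gamma_p) / 2))))
```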
We will be particularly interested in the purification transition that occurs as a function of measurement frequency \(f\)[29], which is thought to be concomitant with the measurement-induced entanglement transition separating area- and volume-law phases [13; 14; 15; 16; 18]. Thanks to the exact solvability of our model in the \(d\rightarrow\infty\) limit, we are able to compute analytical expressions for the order parameters of this dynamical phase transition, and infer the key critical exponents. ### Setup and phase diagram The setup we study is as in Ref. [29]: The system is initialized in a maximally mixed state, which is represented in the above formalism by the input state \(\ket{\mathbf{I}}^{\otimes N}\). After some evolution time \(t\), the purity of the state of the system will have increased from its initial value due to the measurements. As explained previously, we will use the quantity (15) as a measure of the the typical entropy of the ensemble of states. Since we are looking at the purity of the entire state after a time \(t\), the set \(A\) that appears in Eq. (25) will contain all of the output qubits. Accordingly, we can express the quantity in question in terms of the transfer matrix \(\mathcal{T}_{\mathrm{eff}}(t)\) \[\tilde{S}^{(2)}(t)=-\log\Bigg{|}\frac{\bra{\mathbf{S}}^{\otimes N}\mathcal{T} _{\mathrm{eff}}(t)\ket{\mathbf{I}}^{\otimes N}}{\langle\mathbf{I}|^{\otimes N }\mathcal{T}_{\mathrm{eff}}(t)\ket{\mathbf{I}}^{\otimes N}}\Bigg{|}. \tag{49}\] The purification transition that occurs in our model is associated with a quantum phase transition in the effective Hamiltonian \(\mathcal{H}_{\mathrm{eff}}\), which generates the time evolution operator \(\mathcal{T}_{\mathrm{eff}}(t)\). Based on the phase diagram of the TFIM, we can deduce that such a transition must occur at the critical measurement rate \(g_{c}=1\), i.e. \(f_{c}=\gamma/2\). In the spin basis (29), the two phases correspond to the \(\mathbb{Z}_{2}\) symmetric phase under the symmetry \(\mathcal{C}=\prod_{i=1}^{N}\sigma_{x}^{(i)}\) for \(g>1\), and a spontaneous symmetry-broken phase for \(g<1\). For the problem in hand, the relevant order parameter that we use to distinguish these phases is not a correlation function, as is usually the case, but rather the many-body overlap appearing inside the logarithm in Eq. (49). To provide intuition into how this quantity behaves either side of the transition, we can reformulate our expression for \(\tilde{S}^{(2)}(t)\) as follows. Since \(\ket{\mathbf{S}}^{\otimes N}=\mathcal{C}\ket{\mathbf{I}}^{\otimes N}\), the above fraction becomes equal to the expectation value of \(\mathcal{C}\) in the state \(\ket{\mathbf{\Psi}(t)}=\mathcal{T}_{\mathrm{eff}}^{\frac{1}{2}}\ket{\mathbf{I }}^{\otimes N}\), namely \[\tilde{S}^{(2)}(t)=-\log\bigg{|}\langle\mathcal{C}\rangle_{\tilde{\mathcal{ R}}(t)}\Big{|}, \tag{50}\] where \(\ket{\tilde{\Psi}(t)}\coloneqq\ket{\mathbf{\Psi}(t)}/\sqrt{\bra{\mathbf{\Psi}(t)} \ket{\mathbf{\Psi}(t)}}\) is the wavefunction after imaginary time evolution under \(\mathcal{H}_{\mathrm{eff}}/2\), appropriately normalized. If the measurement rate is sufficiently high such that the Hamiltonian (29) is in a symmetry-unbroken phase, then the ground state is non-degenerate and thus \(\ket{\tilde{\Psi}(t)}\) inherits the symmetry of the Hamiltonian. 
Since the Hamiltonian is also gapped, we see that the (accordingly normalized) state \(\ket{\mathbf{\Psi}(t)}=\exp(-t\mathcal{H}_{\mathrm{eff}}/2)\ket{\mathbf{I}}^{ \otimes N}\) converges to the ground state exponentially quickly. The ground state must be an eigenstate of \(\mathcal{C}\), whose eigenvalues are \(\pm 1\), so we can then conclude that \(\big{|}\langle\mathcal{C}\rangle_{\mathbf{\Psi}(t)}\big{|}\to 1\) exponentially quickly as \(t\rightarrow\infty\), and hence \(\tilde{S}^{(2)}\to 0\) at a rate independent of the system size, as expected in this regime. When \(g<1\) the symmetry is spontaneously broken. In this case, the ground eigenspace is doubly degenerate in the thermodynamic limit \(N\rightarrow\infty\), and the effect of the transfer matrix at long times is to project onto this subspace. The projected state may no longer be an eigenstate of \(\mathcal{C}\), so we can have a non-zero residual entropy. As we will see, this residual entropy is extensive, with a \(\log(N)\) correction [Eq. (69)]. While this picture allows us to understand the transition at a qualitative level, to obtain an analytic expression for the residual entropy we will instead use the fermionic mapping detailed in the previous section. We will find it convenient to work with states of definite fermion parity, and hence we define the density matrices \(\rho_{\pm}=|\pm\rangle\,\langle\pm|\), where \(|\pm\rangle\coloneqq(\ket{\mathbf{I}}^{\otimes N}\pm\ket{\mathbf{S}}^{ \otimes N})/\sqrt{2}\), which are eigenstates of \(\mathcal{C}\). We can then write \[\tilde{S}^{(2)}=\log\bigg{|}\frac{1+\Theta}{1-\Theta}\bigg{|}, \tag{51}\] where the parameter \(\Theta\) is defined by \[\Theta=\frac{\mathrm{Tr}\left(e^{-t\mathcal{H}_{\mathrm{eff}}}\rho_{+}\right) }{\mathrm{Tr}\left(e^{-t\mathcal{H}_{\mathrm{eff}}}\rho_{-}\right)}, \tag{52}\] We note that Eqs. (51, 52) are quite general, and could be applied even if we didn't take the \(d\rightarrow\infty\) limit. Because the Hamiltonian (39) is a fermion bilinear, the exponential \(e^{-t\mathcal{H}_{\mathrm{eff}}}\) can be written in the form of Eq. (42), with the grand dynamical matrix \(\mathcal{D}\) in place of \(W\). Hence we can define correlation matrices \(\Gamma[-t\mathcal{D}]\) that correspond to this fermionic state, according to Eq. (43). The states \(\rho_{\pm}\) are also Gaussian fermionic states, and hence can be characterized through their correlation matrices. These have the simple diagonal form \(\Gamma_{GS}=\mathrm{diag}(-1,\,-1,\,\ldots,\,-1,\,1,\,1,\,\ldots,\,1)\) and \(\Gamma_{E}=\mathrm{diag}(1,\,-1,\,\ldots,\,-1,\,-1,\,1,\,\ldots,\,1)\), with \(\pm 1\) each appearing \(N\) times. Then, using Eq. (48) we obtain \[\Theta=\frac{\{\Gamma[-t\mathcal{D}],\Gamma_{+}\}}{\{\Gamma[-t\mathcal{D}], \Gamma_{-}\}}. \tag{53}\] This expression for \(\Theta\), which determines the Renyi entropy via Eq. (51), will help us study the purification transition at a quantitative level. The above considerations help us anticipate the existence of two distinct dynamical phases, consistent with previous work on purification dynamics in discrete time random circuit models, which we refer to as'mixed' (\(g<1\)) and 'purifying' phases (\(g>1\)), following Ref. [29]. In the following, we derive analytical expressions for the time dependence of \(\Theta\), which in turn determines the Renyi entropy \(\tilde{S}^{(2)}(t)\) via Eq. (51). We will use these expressions later to understand the nature of the two phases and the transition between them at a quantitative level. 
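As an independent cross-check of Eqs. (29) and (49)-(51) before the free-fermion treatment, the entropy can be evaluated by brute force for small \(N\) by exponentiating the effective Hamiltonian directly. The sketch below uses illustrative parameters and is not the method employed in the remainder of this section.

```python
import numpy as np
from scipy.linalg import expm

def S_tilde_2(N: int, g: float, t: float, gamma: float = 1.0) -> float:
    """Exact small-N evaluation of Eq. (49) with the d -> infinity effective
    Hamiltonian of Eq. (29), open boundary conditions."""
    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])

    def op(single, site):                      # embed a single-site operator at `site`
        mats = [np.eye(2)] * N
        mats[site] = single
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    H = np.zeros((2 ** N, 2 ** N))
    for i in range(N - 1):
        H -= gamma / 2 * op(sz, i) @ op(sz, i + 1)
    for i in range(N):
        H -= gamma / 2 * g * op(sx, i)

    T = expm(-t * H)
    all_up = np.zeros(2 ** N); all_up[0] = 1.0          # |I>^N  -> |up ... up>
    all_dn = np.zeros(2 ** N); all_dn[-1] = 1.0         # |S>^N  -> |down ... down>
    return -np.log(abs(all_dn @ T @ all_up) / (all_up @ T @ all_up))

# Purifying (g > 1) vs mixed (g < 1) behaviour for a short chain:
for g in (0.5, 2.0):
    print(g, [round(S_tilde_2(8, g, t), 3) for t in (1, 5, 10, 20)])
```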
### Expressions for \(\Theta(t)\) While we have so far left the boundary conditions unspecified, in computing \(\Theta(t)\) we will consider open and periodic boundary conditions separately in our calculations. We set \(\gamma=1\) throughout this section, which fixes the overall timescale. #### iv.2.1 Periodic boundary conditions We start by considering periodic boundary conditions, which can be realised by introducing additional random unitary gates that act between sites \(1\) and \(N\) in the original circuit model. In this case, a standard calculation shows that the Jordan-Wigner-transformed Hamiltonian (39) acquires an additional term which imposes either periodic or antiperiodic boundary conditions depending on the fermion parity sector one works in (see, e.g. Ref. [49]). Taking \(N\) to be even from here on for simplicity, the even (odd) parity sector features antiperiodic (periodic) boundary conditions. These are sometimes referred to as Neveu-Schwarz and Ramond sectors, respectively. Thanks to the translation invariance of the system, the single particle Hamiltonian \(\mathcal{D}\) can be block diagonalized using momentum eigenstates, whose wavevector is quantized to \(k_{l}=l\pi/N\), with \(l\in\{1,\ldots,N-1\}\). In the even parity sector \(l\) must be odd to be compatible with the boundary conditions, and _vice-versa_ in the odd sector. Recognizing that the states \(\ket{\pm}\) are the ground states of the Hamiltonian in the \(g\to 0\) limit in each parity sector, we can express \(\Theta\) as a ratio of products over \(k_{l}\) modes, with even \(l\) in the numerator and odd \(l\) in the denominator. In Appendix E, we show that \[\Theta=e^{t}\frac{\prod_{n=1}^{N/2-1}\theta(k_{2n},t)}{\prod_{n=1}^{N/2}\theta( k_{2n-1},t)}, \tag{54}\] where \[\theta(k,t)=\cosh(\lambda_{k}t)\left[1+\tanh(\lambda_{k}t)\frac{1-g\cos k}{ \lambda_{k}}\right], \tag{55}\] and the energy eigenvalues are given by \[\lambda_{k}=\sqrt{1+g^{2}-2g\cos k}. \tag{56}\] The factor \(e^{t}\) in Eq. (54) accounts for the modes \(k=0\), \(k=\pi\), which are only present in the odd parity sector. #### iv.2.2 Open boundary conditions We can also consider open boundary conditions (OBCs), where the Hamiltonian \(\mathcal{H}_{\mathrm{eff}}\) no longer features the term connecting sites \(1\) and \(N\). While the change of boundary condition makes little difference in the purifying phase, we will find that in the mixed phase, quantitative differences between OBCs and PBCs can be seen in the behaviour of the Renyi entropy. As such, we will focus mainly on the mixed phase \(g<1\), although most of the following holds true throughout the phase diagram. With OBCs, the JW-transformed Hamiltonian does not contain a term that manifestly depends on the fermion parity sector. This leaves us with the problem of diagonalizing the single-particle matrix \(\mathcal{D}\), which is now the same in both parity sectors. Since the system is no longer translation invariant, we cannot treat momentum eigenmodes separately as we did before. Instead, we must explicitly compute the correlation matrices \(\Gamma[-t\mathcal{D}]\) and \(\Gamma_{\pm}\), and use the more general expression (53). To do so, we calculate the single-particle eigenstates, which form the columns of a real orthogonal eigenvector matrix \(\mathcal{O}\). In terms of these, we obtain a spectral decomposition of the grand dynamical matrix \(\mathcal{D}=\mathcal{O}\Lambda\mathcal{O}^{-1}\). 
The eigenvalues \(\Lambda\) come in pairs due to the particle-hole symmetry (41), which we arrange as \(\Lambda=\mathrm{diag}(\lambda_{0},\,\lambda_{1},\,\lambda_{2},\,\ldots\, \lambda_{N-1},-\lambda_{0},\,-\lambda_{1},\,-\lambda_{2},\,\ldots\,-\lambda_{ N-1})\) with \(\lambda_{i}>0\) and in non-decreasing order. The corresponding pairs of eigenvectors are also related through \(\ket{-\lambda_{i}}=\eta\ket{\lambda_{i}}\), with \(\eta\) defined in Eq. (41). In terms of these eigenvalues, the correlation matrix \(\Gamma[-t\mathcal{D}]\) becomes \[\Gamma[-t\mathcal{D}]=\mathcal{O}\tanh\!\left(\frac{-t\Lambda}{2}\right) \!\mathcal{O}^{-1}, \tag{57}\] which can be substituted directly into Eq. (53). In Appendix F, we show that the single particle eigenstates take the form of sinusoids, whose wavevectors \(k_{l}\) can be found as the solutions to the equation \[\tan Nk_{l}=\frac{g\sin k_{l}}{1-g\cos k_{l}}, \tag{58}\] lying in the interval \([0,\pi)\) and labeled in increasing order. In terms of these wavevectors, the eigenvalues \(\lambda_{l}\) themselves again follow the well-known dispersion for the TFIM, Eq. (56). In the mixed phase, there is also a single imaginary solution to the above, which we label \(k_{0}=\mathrm{i}K\), corresponding to a Majorana edge mode localized at the two boundaries of the chain. The energy of this edge mode \(\lambda_{0}\) can be shown to be exponentially small in the system size \(N\), and this energy is associated with a long timescale \(\lambda_{0}^{-1}\) which we will show is responsible for the slow decay of purity in this phase. ### Behaviour of the Renyi entropy With the above expressions in place, we are now ready to study the behaviour of the Renyi entropy in the two phases, as well as the critical point which separates them. #### iv.3.1 Mixed phase--periodic boundary conditions In the mixed phase, the Renyi entropy shows quantitative differences depending on the boundary conditions, and thus we will consider both PBCs and OBCs in the regime \(g<1\), starting with the former. Beginning with Eq. (54), we can use complex integration methods to transform the alternating product over even and odd \(k\) modes into an infinite product \[\Theta=\prod_{q=0}^{\infty}\tanh\frac{Nx_{q}}{2}, \tag{59}\] with \(x_{q}\)'s found as the solutions of the equation \[t\sqrt{2g\cosh x-1-g^{2}}+\phi(x)=\pi(q+\frac{1}{2}), \tag{60}\] \[\tan\phi(x)=\frac{g\cosh x-1}{\sqrt{2g\cosh x-1-g^{2}}}, \tag{61}\] in the interval \((K,\infty)\), where \(K=-\log g\) is the point in the complex plane where the dispersion function \(\lambda(\mathrm{i}K)\) has a zero. This form makes it manifestly clear that \(\Theta<1\). The details of this calculation are given in App. E. Except at criticality, for sufficiently large \(N\) we can have \(NK\gg 1\), which in turn implies \(Nx_{q}\gg 1\) for all \(q\). In this case, by virtue of the approximation \(\log\tanh(y)\approx-2e^{-2y}\) for \(y\gg 1\), we can approximate \(\log\Theta\) by an integral \[\log\Theta\approx-2\int_{K}^{\infty}\frac{dq}{dx}e^{-Nx}, \tag{62}\] where \(dq/dx\) is the density of solutions and can be found by differentiating Eq. (60). The result to highest order in powers of \(N\) is \[\log\Theta(t)=-\sqrt{\frac{1-g^{2}}{\pi N}}e^{-NK}\left(t+\frac{2}{1-g^{2}} \right). \tag{63}\] To relate this expression to the Renyi entropy (15), we focus on the regime where \(t\) scales no faster than polynomial in \(N\), such that the small factor \(e^{-NK}\) dominates, making \(-\log\Theta(t)\) itself small. 
Then, if we make the approximations \(1-\Theta\approx-\log\Theta\) and \(1+\Theta\approx 2\), we can deduce the following form for the Renyi entropy, valid for a broad window of times \(e^{NK}\gg t\gg 2/(1-g^{2})\) (restoring the original units of time) \[\tilde{S}^{(2)}(t)=N|\log g|-\log\frac{t}{\sqrt{N}}+\frac{1}{2}\log\frac{4\pi} {1-g^{2}}+o(1), \tag{64}\] where the term \(o(1)\) represents terms that tend to zero in the limit of large \(t\) or \(N\). We see that the entropy decreases very slowly in time in this window, which is a defining feature of the mixed phase. Finally, for times \(t\) that scale exponentially with system size, \(\gamma t\gtrsim e^{NK}\), such that \(|\log\Theta(t)|\gg 1\), we find that the entropy decays as \[\tilde{S}^{(2)}\approx 2\Theta(t)\approx 2\exp\left(-\gamma t\sqrt{\frac{1-g^{2} }{\pi N}}e^{-NK}\right). \tag{65}\] #### iv.3.2 Mixed phase--open boundary conditions We now wish to compute the same quantity with open boundary conditions in the mixed phase \(g<1\). In particular, we are interested in the regime during which the entropy decays very slowly. As such, we can separate out the bulk single-particle eigenstates, whose energies lie above the bulk gap \(\Delta=(1-g)\), from the Majorana edge mode, whose energy is exponentially small in \(N\). In particular, as long as one is not too close to criticality \((1-g)\gg 1/N\), the Majorana eigenvalue can be approximated as \[\lambda_{0}\approx(1-g^{2})e^{-NK}. \tag{66}\] This indicates that there is a regime of times \(\Delta^{-1}\ll t\ll\lambda_{0}^{-1}\) during which the transient bulk modes have decayed away \(\exp(-t\lambda_{i})\approx 0\), while the Majorana mode has not decayed. The approximate correlation matrix in this regime will then take the form \[\Gamma[-t\mathcal{D}]=\mathcal{O}\tanh\!\left(-\frac{t\Lambda}{2}\right)\!\mathcal{O}^{-1}\] \[\approx\mathcal{O}\begin{pmatrix}-\tanh\frac{t\lambda_{0}}{2}&0&0& 0\\ 0&-I_{N-1}&0&0\\ 0&0&\tanh\frac{t\lambda_{0}}{2}&0\\ 0&0&0&I_{N-1}\end{pmatrix}\mathcal{O}^{-1}. \tag{67}\] In App. F we show that the form of the eigenvectors \(\mathcal{O}\) can be found explicitly and the parameter \(\Theta\) can be expressed using Vandermonde determinants. The factorization of the latter is known, leading to the exact final expression \[\Theta(t)=e^{-t\lambda_{0}}\frac{\tanh\frac{NK}{2}\prod_{l\text{ odd}}(\cosh K-\cos k_{l})}{\sinh K\prod_{l\text{ even}}(\cosh K-\cos k_{l})}, \tag{68}\] where \(k_{l}\) for \(1\leq l\leq N-1\) are the wavevectors of the bulk modes, defined in Eq. (58), and \(K\) is the spatial decay rate of the edge mode. As with the analogous expression for periodic boundary conditions (54), this product can be evaluated with the help of complex integration techniques, which we describe in Appendix G. We find that the large \(N\) asymptotic expression of \(\Theta\) takes the form \[\log\Theta(t)=2e^{-NK}\left(\sqrt{\frac{N}{\pi}}\sqrt{1-g^{2}}-1-\frac{1-g^{2} }{2}t\right). \tag{69}\] Again, we focus on the regime where \(t\) scales polynomially with \(N\), in which case the right hand side of the above is small. Moreover, noting that \(\Theta\) should be no greater than unity, we find that the above expression should only be trusted in the regime \[t\gtrsim t_{c}=2\sqrt{\frac{N}{(1-g^{2})\pi}}. \tag{70}\] We view the above constraint as a condition for validity of the approximation (67) made earlier. Then, in this regime we can make the same series of approximations as before to relate \(\Theta(t)\) to the Renyi entropy. 
Thus, for \(t_{c}\lesssim t\ll e^{NK}\), we find \[\tilde{S}^{(2)}(t)=N|\log g|-\log t+\log\frac{2}{1-g^{2}}+o(1). \tag{71}\] At very long times, when \(t\) scales exponentially with \(N\) such that \(|\log\Theta(t)|\gg 1\), we find that the entropy decays as \[\tilde{S}^{(2)}\approx 2\exp\left(-(1-g^{2})te^{-NK}\right). \tag{72}\] Together, Eqs. (71, 72) characterize the salient features of purification dynamics in our model in the mixed phase sufficiently far from criticality (\(e^{-NK}\ll 1\)). #### iv.3.3 Purifying phase The purifying case corresponds to the regime \(g>1\), where measurements occur so often that they overcome the scrambling and an initially mixed state quickly becomes pure. Looking at the spectrum of the single-particle matrix \(\mathcal{D}\), one finds that all eigenvalues are at least as large as the bulk gap \(\Delta=(g-1)\), which sets a timescale \(t\gtrsim(g-1)^{-1}\) after which the correlation matrix \(\Gamma[-t\mathcal{D}]\) will have converged close to its \(t\to\infty\) limit. Using Eq. (59) we see that there is exactly one root \(x_{0}\) in the interval \(0<x_{0}<\log g\). For large enough \(N\) we have that \(N\log g\gg 1\), so for the other roots \(\tanh Nx_{q}/2\approx 1\). This means that for \(t\sim\text{poly}(N)\) we can make the approximation \[\tilde{S}^{(2)}(t)\approx Nx_{0}, \tag{73}\] so \(x_{0}\) is the entropy density in the chain at time \(t\). The equation determining \(x_{0}\) can be written as \[\tanh\left(t\sqrt{1+g^{2}-2g\cosh x_{0}}\right)=\frac{\sqrt{1+g^{2}-2g\cosh x _{0}}}{g\cosh x_{0}-1}. \tag{74}\] Taking \(t\gg 1\) we see that the solution is approximately \[\tilde{S}^{(2)}(t)/N\approx x_{0}\approx\frac{2(g-1)}{g}e^{-t(g-1)}, \tag{75}\] and follows the expected decay rate set by the spectral gap \(\Delta\). Since the system is disordered in this regime, the result is expected to hold in the thermodynamic limit, irrespective of the boundary conditions. #### iv.3.4 Critical point We now address dynamics at the critical point \(g=1\). While we are no longer able to reliably make an approximation of the kind (67) in the open boundary condition case, we find that the purification dynamics for periodic boundary conditions is amenable to analytical treatment in this regime. In particular, Eq. (59) continues to hold at \(g=1\), with the roots \(x_{q}\) now given as solutions to the equation \[2t\sinh\frac{x}{2}+\arctan\sinh\frac{x}{2}=\pi(q+\frac{1}{2}). \tag{76}\] This equation is still transcendental, making it difficult to find a universal expression for its solutions. However, we can study the three regimes \(t\ll 1\), \(1\ll t\ll N\) and \(t\gg N\) separately. _Early times \(t\ll 1\).--_In the initial time frame \(t\ll 1\) it can be shown that the entropy is approximately given by \[\tilde{S}^{(2)}\approx-N\log\frac{t}{2}. \tag{77}\] The logarithmic divergence at the origin has a simple intuitive explanation: since the averaging is performed over the purities rather than the entropies and we work in a \(d\to\infty\) system, the only scenarios that contribute to the average purity at early times are those where the entire chain is measured. If this happens, the system is immediately purified, since we can neglect the unitary evolution at \(t\ll 1\). The entropy is then simply the (negative) logarithm of the probability that all qudits are measured within the time \(t\), which is \(p\approx(ft)^{N}=(gt/2)^{N}\).

Figure 2: Qualitative plot showing the behaviour of the system entropy on different timescales in the mixed (red) and purifying (black) phases. In the mixed phase, the entropy undergoes a period of very slow logarithmic decay up to exponentially long times in the system size.
This intuitive picture matches the exact answer we found above at criticality, and is expected to hold for all values of \(g\) in both the periodic and open boundary conditions. _Intermediate times \(1\ll t\ll N\).--_In this regime we can assume \(Nx_{q}\gg 1\) but \(x_{q}\ll 1\) for all solutions \(x_{q}\) that contribute meaningfully to the value of \(\Theta\). Using these approximations we find that the \(x_{q}\) are equally spaced and the formula for \(\log\Theta\) is calculated as a geometric sum. The final expression for the entropy is \[\tilde{S}^{(2)}\approx\frac{N\pi}{2t+1}. \tag{78}\] The algebraic relationship \(\tilde{S}^{(2)}\propto t^{-1}\) is an important feature and only occurs exactly at criticality. _Late times \(t\gg N\).--_Finally, in the long time limit, we see that we can approximate \(\Theta\) by \[\Theta\approx\frac{(e^{-\frac{N\pi}{2t}};e^{-\frac{N\pi}{t}})_{\infty}}{(-e^{ -\frac{N\pi}{2t}};e^{-\frac{N\pi}{t}})_{\infty}}\approx\sqrt{2}e^{-\frac{\pi t }{4N}}, \tag{79}\] where \((a;q)_{\infty}\) is the q-Pochhammer symbol. This leads to an exponential decay of the entropy \[\tilde{S}^{(2)}\approx 2\sqrt{2}e^{-\frac{\pi t}{4N}}. \tag{80}\] ### Comparison to result from field theory Having derived expressions for the time-dependence of the Renyi entropy of our model of hybrid quantum dynamics, it is instructive to compare our findings to the approach introduced in Ref. [38]. There, the authors invoke an effective field theory known as capillary-wave theory, which was first developed to model the dynamics of domain walls in the low-temperature phase of the Ising model. The correspondence between the two is rooted in the mapping between discrete-time hybrid quantum circuits and two-dimensional ferromagnets, see e.g. Refs. [14; 27]. The parameters of the theory are a phenomenological surface tension \(\sigma\) and inverse temperature \(\beta\), and once these are fixed, it is possible to find approximations for the time-dependence of the Renyi entropy starting from a mixed initial state in the associated discrete-time monitored quantum circuit model. Upon comparing their expression to our results, we find that the same universal features hold. In particular, for both cases, there is a marked regime of times \(t\sim\text{poly}(N)\) in the mixed phase during which the entropy decays as an extensive constant with a \(-\log t\) contribution. The sensitivity to boundary conditions we see [Eq. (64) vs. (71)] can also be understood in the capillary-wave picture as a consequence of the difference in configurational entropies of the endpoints of a domain wall for periodic versus open boundary conditions. Moreover, by looking at the prefactor of the term proportional to \(N\), we can relate the microscopic parameters of our model to the phenomenological parameters of the field theory; in particular, we can fix the combination \(\beta\sigma=K=-\log g\), which vanishes non-analytically at the transition \(g=1\). ## VI Discussion Our work introduces a class of random unitary circuits following a brickwork geometry, where each unit cell performs an infinitesimally small unitary transformation. We show that the limiting case of the construction above leads to a continuous stochastic process through the many-body Hilbert space. 
We show that the non-equilibrium behaviour of statistical averages of a large class of operator-space entanglement measures (the Renyi entropies) of this dynamical process can be obtained as equilibrium partition functions in an effective quantum spin system, governed by a universal, time-independent Hamiltonian. The construction relies on an initial microscopic Hamiltonian describing local interactions, but we prove that this only enters the effective quantum information dynamics by setting the overall timescale. We only perform a thorough investigation of the second Renyi entropy, where the effective theory is the spin-1/2 ferromagnetic TFIM, with an integrability breaking term that becomes quadratically small in the local dimension \(d\). The ground state becomes degenerate in the thermodynamic limit and it is ferromagnetically ordered. Taking a phenomenological perspective, the two types of stable ordering roughly correspond to the measuring agent having full knowledge or no knowledge about the state of the system. The lowest energy excitations are topological domain walls, and roughly represent the geometric boundaries of our knowledge. We show that local measurements can also be studied within the same framework by adding an extra state to the spins of the effective system. When the local tumbling rate of the microscopic Hamiltonian is sufficiently strong, this extra state is adiabatically eliminated, and the effect of measurements is to introduce a transverse magnetic field whose strength is proportional to the measurement frequency. When this exceeds a critical threshold, the system undergoes an Ising-type phase transition into a disordered phase. This is recognized as the purification transition observed in numerical studies of similar models [29; 31]. The signature of the transition is a logarithmically decaying residual uncertainty in the state of the system after a purification procedure using uncorrelated local measurements, which is present only if the system is in the ordered phase. We identify the order parameter corresponding to the residual second Renyi entropy in the effective model and prove exact product expansion formulae that can be used to calculate it in both open and closed boundary conditions. Complex integration techniques are used to find thermodynamic limit approximations on various timescales, and we see that their scaling agrees with field theoretic arguments. The method is not restricted to the residual entropy of the whole chain, and could be adapted to calculations of other second Renyi entropies. Universal characteristics of the transition such as the critical exponents must be the same as for the effective 1D quantum Ising theory. The transition in the von Neumann entropy requires higher replica analysis and may be of a different universality class, but we expect a qualitatively similar behaviour away from criticality. We begin the investigation of higher replica calculations by proving a formula for the matrix elements of the effective Hamiltonian in App. B. In contrast with similar models studied in literature, our circuits are not expected to lead to a percolation transition of the von Neumann entropy, even in the \(d\to\infty\) limit. This is because the small gate action limit \(\Delta t\to 0\) is taken first, making the Hartley entropy \(S_{0}\) undefined for any \(d\). To simplify our calculations, we have set the local tumbling rate \(\Gamma\) to infinity, but it may be interesting to investigate how it affects the transition. 
This introduces measurement inertia, wherein less information is gained by consecutively measuring the same qudit at intervals less than \(\sim 1/\Gamma\). If the measurement frequency grows beyond this, the qudits become effectively Zeno-locked. To the best of our knowledge, the growth of entanglement in this regime has not been previously investigated. ###### Acknowledgements. This work was supported in part by EPSRC grant EP/S020527/1. S.L. acknowledges support from UCL's Graduate Research Scholarship and Overseas Research Scholarship. M. M. acknowledges support from Trinity College, Cambridge. ## Appendix A Diagrammatic calculations of matrix elements In this appendix, we show how diagrammatic manipulations can be used to find the effective transfer matrix in Eq. 7. Our goal is to find how the unit cell defined in Sec. II can be understood as a linear map on \(V(S_{2})\). If we denote by \(V^{\star}(S_{k})\) the dual space of \(V(S_{k})\) then we can define the dual element \(\sigma^{\star}\) of \(\sigma\) by the following expression \[\bra{\sigma^{\star}}\ket{\tau}=\sigma^{\star}(\tau)=\delta_{\sigma,\tau}. \tag{10}\] It is then easy to check that the following expansion holds for all \(\sigma\) \[\ket{\sigma}=\sum_{\tau\in S_{k}}\bra{\tau^{\star}}\ket{\sigma}\ket{\tau}, \tag{11}\] so we have a resolution of identity \[1=\sum_{\sigma\in S_{k}}\ket{\sigma}\bra{\sigma^{\star}}. \tag{12}\] The dual element can be expanded on a basis formed from Hermitian conjugates of the original \(V(S_{k})\) basis, with coefficients given by the Weingarten function \[\bra{\sigma^{\star}}=\sum_{\tau\in S_{k}}\mathcal{W}g(\sigma\tau^{-1})\bra{ \tau}. \tag{13}\] We can compute a matrix associated to the unit cell by considering contractions of its inputs and outputs with elements of \(S_{k}\). We denote the result of this diagrammatic contraction by \[\mathcal{T}^{\star}_{\kappa\epsilon,\sigma\tau}=\bra{\kappa}\otimes\bra{ \epsilon}\hat{\mathcal{T}}\ket{\sigma}\otimes\ket{\tau}, \tag{14}\] where \(\kappa\), \(\epsilon\), \(\sigma\), \(\tau\in S_{k}\) and \(\hat{\mathcal{T}}\) is the diagram corresponding to the unit cell. It is now important to notice that, due to the direct contractions on the legs of the unit cell, all random unitary matrices are contracted with a corresponding conjugate and vanish, so that the matrix element indicated above will take the same value independently of which unitary gates are picked from the Haar ensemble. This means we can ignore Haar averaging in the computation and only focus on the contractions of the core Hamiltonian evolution operators. We note that the only interesting diagrams are those that carry some \(t\) dependence, so we need not compute diagrams where the blocks cancel each other. This means we should restrict ourselves to the case of \(\kappa\neq\epsilon\) and \(\sigma\neq\tau\). For two replicas this condition only leaves 2 nontrivial diagrams corresponding to \(\bra{\mathbf{SI}}\hat{\mathcal{T}}\ket{\mathbf{SI}}\) and \(\bra{\mathbf{SI}}\hat{\mathcal{T}}\ket{\mathbf{IS}}\). These are illustrated as tensor contraction diagrams in Fig. 3a and Fig. 3b respectively. Since we will eventually take \(t\) to be small we can Taylor expand the exponential and compute the diagram perturbatively. It can be easily seen that the linear order correction to both diagrams is just 0, as the contribution from the top blocks exactly cancels that of the bottom blocks. The first non-zero correction to the diagrams appears at order \(t^{2}\). 
The contributions here come from single boxes in the second order of expansion and from pairs of boxes in the first order of expansion. After summing up all the terms we get the following results: \[\bra{\mathbf{SI}}\hat{\mathcal{T}}\ket{\mathbf{SI}} =d^{4}-2t^{2}\Omega(H)+\mathcal{O}(t^{4}), \tag{15}\] \[\bra{\mathbf{SI}}\hat{\mathcal{T}}\ket{\mathbf{IS}} =d^{2}+\mathcal{O}(t^{4}),\] where the only dependence on H appears through the function \(\Omega(H)\) given in Eq. 8. At this level, it appears to be a coincidence that there is no correction at \(t^{2}\) to the second term. In Appendix B, we show that similar behavior is found in higher replica calculations. Knowing these matrix elements, we see that it is possible to recover the propagation tensor using the resolution of identity \[\hat{\mathcal{T}} =\sum_{\kappa,\epsilon,\sigma,\tau\in S_{2}}\mathcal{T}^{\star}_{\kappa\epsilon,\sigma\tau}\ket{\kappa^{\star}}\ket{\epsilon^{\star}}\bra{\sigma^{\star}}\bra{\tau^{\star}} \tag{10}\] \[=\sum_{\kappa,\epsilon,\sigma,\tau,\mu,\nu\in S_{2}}\mathcal{W}g( \kappa\mu^{-1})\mathcal{W}g(\epsilon\nu^{-1})\mathcal{T}^{\star}_{\kappa \epsilon,\sigma\tau}\ket{\mu}\ket{\nu}\bra{\sigma^{\star}}\bra{\tau^{\star}}\] \[=\sum_{\sigma,\tau,\mu,\nu\in S_{2}}\mathcal{T}_{\mu\nu,\sigma\tau }\ket{\mu}\ket{\nu}\bra{\sigma^{\star}}\bra{\tau^{\star}},\] where the matrix \(\mathcal{T}_{\mu\nu,\sigma\tau}\) defined in the last line is the matrix we are looking for, which tells us how vectors in \(V(S_{2})\otimes V(S_{2})\) evolve under the action of a unit cell. If we let \(\mathcal{T}\) and \(\mathcal{T}^{\star}\) denote the \(4\) by \(4\) matrices with entries defined above, and let \(\mathcal{W}g\) be the \(2\) by \(2\) matrix with elements \(\mathcal{W}g_{\sigma\tau}=\mathcal{W}g(\sigma\tau^{-1})\), then we can compute the transfer matrix knowing the results of all contraction diagrams from the equation \[\mathcal{T}=\mathcal{W}g\otimes\mathcal{W}g\cdot\mathcal{T}^{\star}. \tag{11}\] Since the unit cell is just the identity for \(t=0\), the matrix \(\mathcal{T}\) is simply the identity at zeroth order when expanding in powers of \(t\). The correction to \(\mathcal{T}^{\star}\) at order \(t^{2}\) was computed above and can be written as \[\Delta^{(2)}\mathcal{T}^{\star} =-2t^{2}\Omega(H)\begin{bmatrix}0&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&0\end{bmatrix} \tag{12}\] \[=-2t^{2}\Omega(H)\cdot\frac{1}{2}\left(1-\sigma_{z}\otimes\sigma_{ z}\right).\] If we plug this correction into Eq. 11 we get the result presented in Eq. 7 of the main text. ## Appendix B Effective dynamics for multiple replicas Here we give an overview of the method used to compute the transfer matrix using a number of replicas \(k>2\). This generalizes the calculation discussed in App. A and it can be reduced to a simple exercise in combinatorics. Suppose we have \(\hat{\mathcal{T}}\) the unit cell operator and denote \(\bra{\kappa\epsilon}\hat{\mathcal{T}}\ket{\sigma\tau}\) the result of contracting its legs according to permutation operators \(\kappa\), \(\epsilon\), \(\sigma\), \(\tau\in S_{k}\). This will lead to diagrams similar to those seen in Fig. 3, but with \(k\) exponential operators of each type. To compute this, we must once again look at the Taylor expansions of the operators. Contributions at order \(t^{2}\) come from the second order term in the expansion of each box and from all pairings of operators expanded to first order. 
All of these will reduce to one of three possible diagrams, with coefficients of \(d\) raised to some power depending on the number of loops that appear. These three are \(\text{tr}\big{(}H^{2}\big{)}\), \(\text{tr}\big{(}\text{tr}_{1}(H)^{2}\big{)}\) and \(\text{tr}\big{(}H\big{)}^{2}\), the same terms that appear in the definition of \(\Omega(H)\). To find the result of the calculation, consider the following auxiliary construction. Let \(i=1,\dots,k\) and \(\overline{i}=\overline{1},\dots,\overline{k}\) label a series of boxes and let \(\sigma,\,\kappa\in S_{k}\). Denote the first set of boxes by \(\mathcal{B}\) and the second by \(\overline{\mathcal{B}}\). Now imagine that we connect the bottom of the \(i\) boxes to the top of the \(\overline{i}\) boxes according to the map \(i\to\overline{\sigma(i)}\), and similarly the top of the \(i\) boxes to the bottom of the \(\overline{i}\) boxes according to \(i\to\overline{\kappa(i)}\). The result is that the boxes are now tied together in chains of varying lengths, each chain containing an equal number of \(i\) and \(\overline{i}\) boxes.

Figure 3: Tensor contraction diagrams corresponding to the \(2\) non-trivial matrix elements found for the case of \(k=2\) replicas. Note that the complex conjugate operators were transposed to better fit in the plane, so that the top operators become the inverses of the bottom ones. The convention of having the bottom legs as inputs and top legs as outputs is also reversed for these blocks.

If we call \(l\) the number of chains, then we let \(\mu=1,\ldots,l\) index the chains and form sets \(\mathcal{S}_{\mu}\) as \[\mathcal{S}_{\mu}=\{b\in\mathcal{B}\cup\overline{\mathcal{B}}\mid\text{ $b$ is in chain $\mu$}\}. \tag{10}\] These sets exactly partition the full set of boxes with no overlaps \[\bigcup_{\mu}\mathcal{S}_{\mu}=\mathcal{B}\cup\overline{\mathcal{B}}. \tag{11}\] We now return to our initial problem and consider the sets \(\mathcal{S}_{\mu}^{L}\) and \(\mathcal{S}_{\nu}^{R}\) formed by the left (\(\sigma\), \(\kappa\)) and right (\(\tau\), \(\epsilon\)) pairs of permutations. The indices are allowed to go from \(1\) to the number of chains formed by the respective pair. Denote by \(\mathcal{M}_{\mu\nu}\) the number of elements from \(\mathcal{B}\) that are found in both \(\mathcal{S}_{\mu}^{L}\) and \(\mathcal{S}_{\nu}^{R}\), and by \(\overline{\mathcal{M}}_{\mu\nu}\) the number of elements from \(\overline{\mathcal{B}}\) found in the same \(2\) partitions. Remarkably, counting the number of diagrams of the three types mentioned above that show up in the transfer matrix reveals that the correction to second order in \(t\) is given by the simple expression \[\Delta^{(2)}\mathcal{T}_{\kappa\epsilon,\sigma\tau}^{*}=-\frac{1}{2d^{4}}t^{2 }\Omega(H)\langle\kappa|\sigma\rangle\langle\epsilon|\tau\rangle\big{\|}\mathcal{M}-\overline {\mathcal{M}}\big{\|}_{2}^{2}. \tag{12}\] This result shows that even for a higher number of replicas, the characteristics of the evolution are still independent of the microscopic Hamiltonian \(H\), which only sets the overall timescale through the same function \(\Omega(H)\) as in the \(k=2\) case. The interactions produced by this transfer matrix may have an interesting interpretation in terms of domain walls, but a discussion of this is beyond the scope of the current work. ## Appendix C Evolution of the \(\mathbf{X}\) state Following a similar calculation of the matrix elements in the transfer matrix as shown in App. 
A we can also derive the unitary evolution of the \(\mathbf{X}\) states that appear after a measurement. If we introduce a projection operator onto this state \(P_{\mathbf{X}}\), such that \(P_{\mathbf{X}}\ket{\mathbf{X}}=\ket{\mathbf{X}}\) and \(P_{\mathbf{X}}V(S_{2})=0\) and denote by \(Q_{\mathbf{X}}\) its complement so that \(P_{\mathbf{X}}+Q_{\mathbf{X}}=I\), the total effective Hamiltonian is given by \[\begin{split}\mathcal{H}_{ij}^{\mathcal{M}}=&\mathcal{ H}_{ij}Q_{\mathbf{X}}^{(i)}Q_{\mathbf{X}}^{(j)}+\mathcal{H}_{\mathbf{X}}^{(j)}P_{ \mathbf{X}}^{(i)}Q_{\mathbf{X}}^{(j)}\\ &+\mathcal{H}_{\mathbf{X}}^{(i)}Q_{\mathbf{X}}^{(i)}P_{\mathbf{X }}^{(j)}+E_{\mathbf{X}\mathbf{X}}P_{\mathbf{X}}^{(i)}P_{\mathbf{X}}^{(j)}, \end{split} \tag{13}\] where \(\mathcal{H}_{ij}\) is the restriction to \(V(S_{2})\) of the effective Hamiltonian in Eq. 10 of the main text, the operator \(\mathcal{H}_{\mathbf{X}}^{(i)}\) is given by \[\mathcal{H}_{\mathbf{X}}^{(i)}=\Gamma\frac{(d+1)(d^{2}-1)}{d^{3}}+\gamma(1- \frac{d+1}{d^{3}})+\frac{\gamma}{d^{2}}\sigma_{x}, \tag{14}\] and the \(\mathbf{X}\mathbf{X}\) interaction energy is \[E_{\mathbf{X}\mathbf{X}}=\frac{\gamma}{d^{2}}\left(1-\frac{2}{d(d^{2}-1)} \right)+2\frac{\Gamma}{d^{2}}\left(1+\frac{1+d^{2}}{d(d^{2}-1)}\right). \tag{15}\] We see that the only effect of \(\Gamma\) is to raise the energy of configurations that include \(\mathbf{X}\) states. It should then be intuitively clear that taking a very large \(\Gamma\) will result in the dynamics being projected into the \(\mathbf{X}\)-free subspace of the Hilbert space. The rest of the appendix constitutes a formal proof of this fact. Denote by \(\mathcal{P}\) the projector onto the \(\mathbf{X}\)-free subspace of the Hilbert space and \(\mathcal{Q}=I-\mathcal{P}\) its complement. To simplify notation let \(\mathcal{H}\) denote the full effective Hamiltonian over the course of this proof. This includes interactions of the type \(\mathcal{H}_{ij}^{\mathcal{M}}\) above between all nearest-neighbors in the chain and selective measurements \(\mathcal{M}\) at some frequency \(f\) on all sites (defined in Eq. 19). Then we can separate \(\mathcal{H}\) as \[\begin{split}\mathcal{H}&=(\mathcal{P}\mathcal{H} \mathcal{P}+\mathcal{Q}\mathcal{H}\mathcal{Q})+(\mathcal{P}\mathcal{H} \mathcal{Q}+\mathcal{Q}\mathcal{H}\mathcal{P})\\ &=\mathcal{H}_{0}+\Delta\mathcal{H},\end{split} \tag{16}\] where the diagonal parts become \(\mathcal{H}_{0}\) and the coupling between sectors is \(\Delta\mathcal{H}\). If we let \(\mathcal{U}=\exp(-t\mathcal{H})\) be the imaginary time propagator then its evolution equation is \[\frac{d}{dt}\mathcal{U}=-\mathcal{H}\mathcal{U}. \tag{17}\] We move into the interaction picture of \(\mathcal{H}_{0}\) by letting \(\mathcal{U}(t)=\exp(-t\mathcal{H}_{0})\mathcal{U}_{I}(t)\). The evolution equation of \(\mathcal{U}_{I}(t)\) is \[\frac{d}{dt}\mathcal{U}_{I}(t)=-\Delta\mathcal{H}_{I}(t)\mathcal{U}_{I}(t), \tag{18}\] where \[\Delta\mathcal{H}_{I}(t)=e^{t\mathcal{H}_{0}}\Delta\mathcal{H}e^{-t\mathcal{H} _{0}}. \tag{19}\] The boundary conditions on all quantities of interest lie in the \(\mathbf{X}\)-free subspace of the Hilbert space and configurations that include \(\mathbf{X}\) states have \(0\) overlap with this sector. Therefore, we are only interested in the reduced propagator \(\mathcal{P}\mathcal{U}_{I}(t)\mathcal{P}\). 
This evolves according to \[\frac{d}{dt}\mathcal{P}\mathcal{U}_{I}(t)\mathcal{P}=-\mathcal{P}e^{t\mathcal{H}_{0}}\mathcal{P} \Delta\mathcal{H}\mathcal{Q}e^{-t\mathcal{H}_{0}}\mathcal{Q}\mathcal{U}_{I}(t) \mathcal{P}. \tag{20}\] We see that the evolution equation requires knowledge about \(\mathcal{Q}\mathcal{U}_{I}(t)\mathcal{P}\). This evolves according to \[\frac{d}{dt}\mathcal{Q}\mathcal{U}_{I}(t)\mathcal{P}=-\mathcal{Q}e^{t \mathcal{H}_{0}}\mathcal{Q}\Delta\mathcal{H}\mathcal{P}e^{-t\mathcal{H}_{0}}\mathcal{P} \mathcal{U}_{I}(t)\mathcal{P}. \tag{21}\] We integrate the two equations from \(0\) to \(t\) and substitute the form of \(\mathcal{Q}\mathcal{U}_{I}(t)\mathcal{P}\) into the \(\mathcal{P}\mathcal{U}_{I}(t)\mathcal{P}\) equation to arrive at the integral form \[\begin{split}\mathcal{P}\mathcal{U}_{I}(t)\mathcal{P}& =\mathcal{P}-\int_{0}^{t}\int_{0}^{t^{\prime}}dt^{\prime}dt^{\prime \prime}\mathcal{P}e^{t^{\prime}\mathcal{H}_{0}}\mathcal{P}\Delta\mathcal{H} \\ &\times\mathcal{Q}e^{-(t^{\prime}-t^{\prime\prime})\mathcal{H}_{0}} \mathcal{Q}\Delta\mathcal{H}\mathcal{P}\mathcal{U}_{I}(t^{\prime\prime}) \mathcal{P}\\ &=\mathcal{P}-\delta\mathcal{U}(t)\end{split} \tag{22}\] Let us now look at the operator norm of the correction term on the RHS. Since this is both sub-additive and sub-multiplicative we have \[\begin{split}\|\delta\mathcal{U}(t)\|&\leq\int_{0}^{t} \int_{0}^{t^{\prime}}dt^{\prime}dt^{\prime\prime}\Big{\|}\mathcal{P}e^{t^{ \prime}\mathcal{H}_{0}}\mathcal{P}\Delta\mathcal{H}\mathcal{Q}\Big{\|}\\ &\times\Big{\|}\mathcal{Q}e^{-(t^{\prime}-t^{\prime\prime}) \mathcal{H}_{0}}\mathcal{Q}\Big{\|}\times\|\mathcal{Q}\Delta\mathcal{H} \mathcal{P}\|\times\|\mathcal{P}\mathcal{U}_{I}(t^{\prime\prime})\mathcal{P}\|.\end{split} \tag{14}\] Our aim is now to find a bound for each norm in the expression. The coupling between the different sectors \(\Delta\mathcal{H}\) is bounded and independent of \(\Gamma\), as it appears only due to the measurements and is not a result of the random unitaries. The \(\mathcal{P}\mathcal{H}_{0}\mathcal{P}\) part drives evolution in the \(\mathbf{X}\)-free subspace, which was also shown to be independent of \(\Gamma\). Therefore we define \[m(t)=\sup_{t^{\prime}\in(0,t)}\Big{(}\big{\|}\mathcal{Q}\Delta\mathcal{H} \mathcal{P}\big{\|}\Big{\|}\mathcal{P}e^{t^{\prime}\mathcal{H}_{0}}\mathcal{ P}\Delta\mathcal{H}\mathcal{Q}\big{\|}\Big{)}\,, \tag{15}\] a real and continuous function. Note that \(m(t)\) is strictly positive and non-decreasing by construction. Furthermore, it is independent of \(\Gamma\) by the previous arguments. For the \(\mathcal{Q}\mathcal{H}_{0}\mathcal{Q}\) part of the Hamiltonian, the energy of all configurations is raised by some constant proportional to \(\Gamma\), as is clear from Eq. 13 and Eq. 14. Therefore there must exist positive constants \(a,b\) such that the following inequality holds \[\Big{\|}\mathcal{Q}e^{-(t^{\prime}-t^{\prime\prime})\mathcal{H}_{0}}\mathcal{ Q}\Big{\|}\leq e^{-(t^{\prime}-t^{\prime\prime})(a\Gamma-b)}, \tag{16}\] for all \(t^{\prime}>t^{\prime\prime}\). The energy \(E(\Gamma)=a\Gamma-b\) can be interpreted as a lower bound on the ground state energy of the restricted Hamiltonian \(\mathcal{Q}\mathcal{H}_{0}\mathcal{Q}\). We are only interested in the large \(\Gamma\) limit, so we will assume \(E(\Gamma)>0\). For the final part we have \[\|\mathcal{P}\mathcal{U}_{I}(t^{\prime\prime})\mathcal{P}\|\leq\|\mathcal{P}\|+\|\delta \mathcal{U}(t^{\prime\prime})\|=1+\|\delta\mathcal{U}(t^{\prime\prime})\|, \tag{17}\] since the norm of a projector is always \(1\). 
If we introduce the notation \(\psi(t)=\|\delta\mathcal{U}(t)\|/m(t)\) and combine all of the previous inequalities, we have \[\psi(t)\leq\int_{0}^{t}\int_{0}^{t^{\prime}}dt^{\prime}dt^{\prime\prime}e^{-( t^{\prime}-t^{\prime\prime})E(\Gamma)}(1+m(t^{\prime\prime})\psi(t^{\prime \prime})). \tag{18}\] To proceed, we invert the order of the integrals, so that \[\psi(t)\leq\int_{0}^{t}dt^{\prime\prime}\int_{t^{\prime\prime}}^{t}dt^{\prime} e^{-(t^{\prime}-t^{\prime\prime})E(\Gamma)}(1+m(t^{\prime\prime})\psi(t^{\prime \prime})). \tag{19}\] We can now perform the integration over \(t^{\prime}\). A quick calculation shows that \[\int_{t^{\prime\prime}}^{t}dt^{\prime}e^{-(t^{\prime}-t^{\prime\prime})E( \Gamma)}\leq\frac{1}{E(\Gamma)}. \tag{20}\] Adding this to the above and renaming the dummy variable \(t^{\prime\prime}\to s\) we arrive at \[\psi(t)\leq\frac{t}{E(\Gamma)}+\int_{0}^{t}ds\frac{m(s)}{E(\Gamma)}\psi(s). \tag{21}\] We now use the integral form of Gronwall's inequality to get the bound on \(\psi(t)\) \[\psi(t)\leq\frac{t}{E(\Gamma)}\exp\biggl{(}\int_{0}^{t}\frac{m(s)}{E(\Gamma)} ds\biggr{)}, \tag{22}\] and from the definition of \(\psi(t)\) we get \[\|\delta\mathcal{U}(t)\|\leq\frac{tm(t)}{E(\Gamma)}\exp\biggl{(}\int_{0}^{t} \frac{m(s)}{E(\Gamma)}ds\biggr{)}. \tag{23}\] We can now take the limit of \(\Gamma\to\infty\) and use the positivity of the norm to get \[\lim_{\Gamma\to\infty}\|\delta\mathcal{U}(t)\|=0, \tag{24}\] which implies that the operator itself must become null in this limit. From Eq. 13, we then have that the interaction picture propagator acts as identity on the \(\mathbf{X}\)-free subspace in this limit. Returning to the original frame, we see that the propagator must then be \[\mathcal{U}(t)=\exp(-t\mathcal{H}_{0})\mathcal{P}=\exp(-t\mathcal{P}\mathcal{H }\mathcal{P}), \tag{25}\] and the dynamics are driven by the restriction of the Hamiltonian \(\mathcal{P}\mathcal{H}\mathcal{P}\), as claimed in the main text. ## Appendix D Connection between the microscopic and the effective Hamiltonian In Sec. II we saw that \(\gamma\) sets the timescale for information transfer across the network. To deduce the meaning of \(\Gamma\), defined in Eq. (22), we return to the microscopic Hamiltonian \(H\) and expand it in a convenient qudit basis called the generalized Pauli group [52]. This is generated by operators \(X_{d}\), \(Z_{d}\) defined through their action on the basis states \[X_{d}\ket{j} =\ket{j\oplus 1}, \tag{26}\] \[Z_{d}\ket{j} =\omega^{j}\ket{j}, \tag{27}\] where \(\omega\) is a primitive \(d\)'th root of unity and addition is understood to be modulo \(d\). These obey the braiding equation \[\left(X_{d}^{a}Z_{d}^{b}\right)\left(X_{d}^{s}Z_{d}^{t}\right)=\omega^{bs-at} \left(X_{d}^{s}Z_{d}^{t}\right)\left(X_{d}^{a}Z_{d}^{b}\right). \tag{28}\] We can then expand the Hamiltonian on this basis as \[H=\sum_{a,b,s,t=0}^{d-1}h_{ab;st}X_{d}^{a}Z_{d}^{b}\otimes X_{d}^{s}Z_{d}^{t}. \tag{29}\] It is convenient to separate the Hamiltonian into an entangling and a non-entangling sector. We define the non-entangling sector of the basis to be spanned by those operators which act trivially as identity on at least one of the qudits, and the entangling sector is everything else. We can write this split as \[H=H_{loc}+H_{int}, \tag{104}\] and equivalently for the matrices \[h=h_{loc}+h_{int}. 
\tag{105}\] In a microscopic physical picture the \(H_{int}\) stores information about the nearest neighbor interactions in the chain, while \(H_{loc}\) generates local dynamics such as interactions with magnetic fields. For simplicity, we can assume \(\operatorname{tr}(H)=0\) such that \(h_{00;00}=0\). Additionally, since we assumed \(H\) is symmetric under swapping the two qudits, we get \[h_{ab;st}=h_{st;ab}, \tag{106}\] and the hermiticity condition \(H=H^{\dagger}\) of the Hamiltonian implies \[h_{-a-b;-s-t}=h^{*}_{ab;st}\omega^{ab+st}, \tag{107}\] where the inverse \(-a\) is with respect to modulo \(d\) addition \(a\oplus(-a)=0\). We can start by calculating \(\Gamma\) for this Hamiltonian. If we notice that all elements in our basis other than \(I\) are traceless, we have \[\operatorname{tr}_{1}(H) =\sum_{a,b,s,t=0}^{d-1}h_{ab;st}X_{d}^{s}Z_{d}^{t}\operatorname{ tr}\bigl{(}X_{d}^{a}Z_{d}^{b}\bigr{)} \tag{108}\] \[=d\sum_{s,t=0}^{d-1}h_{00;st}X_{d}^{s}Z_{d}^{t}.\] If we now square this result and apply the braiding equation 104 we get \[\operatorname{tr}_{1}(H)^{2} =d^{2}\sum_{a,b,s,t=0}^{d-1}h_{00;ab}h_{00;st}X_{d}^{a}Z_{d}^{b}X_ {d}^{s}Z_{d}^{t} \tag{109}\] \[=d^{2}\sum_{a,b,s,t=0}^{d-1}h_{00;ab}h_{00;st}\omega^{at}X_{d}^{a \oplus s}Z_{d}^{b\oplus t}.\] Finally, we take the trace of this and use Eq. 106 to get \[\Gamma =\frac{2d}{(d^{2}-1)^{2}}\operatorname{tr}\bigl{(}\operatorname{ tr}_{1}(H)^{2}\bigr{)} \tag{110}\] \[=\frac{2d^{3}}{(d^{2}-1)^{2}}\sum_{a,b,s,t=0}^{d-1}h_{00;ab}h_{00; st}\omega^{at}\operatorname{tr}\bigl{(}X_{d}^{a\oplus s}Z_{d}^{b\oplus t}\bigr{)}\] \[=\frac{2d^{4}}{(d^{2}-1)^{2}}\sum_{a,b,s,t=0}^{d-1}h_{00;ab}h_{00; st}\omega^{at}\delta_{a\oplus s,0}\delta_{b\oplus t,0}\] \[=\frac{2d^{4}}{(d^{2}-1)^{2}}\sum_{a,b=0}^{d-1}h_{00;ab}h_{00;-a -b}\omega^{-ab}\] \[=\frac{2d^{4}}{(d^{2}-1)^{2}}\sum_{a,b=0}^{d-1}\left|h_{00;ab} \right|^{2}.\] We can write this more compactly in terms of the Frobenius norm of the local part of the Hamiltonian \[\Gamma=\frac{d^{4}}{(d^{2}-1)^{2}}\|h_{loc}\|_{\text{F}}^{2}, \tag{111}\] with a coefficient that goes to \(1\) in the \(d\to\infty\) limit. It is clear that this part of the Hamiltonian is only relevant in randomizing the state of individual qudits after measurement and does not play any role in the transfer of information across the chain. For the information transfer rate \(\gamma\), we can compute \[\operatorname{tr}\bigl{(}H^{2}\bigr{)} =d^{2}\sum_{a,b,s,t=0}^{d-1}h_{ab;st}h_{-a-b;-s-t}\omega^{-ab-st} \tag{112}\] \[=d^{2}\sum_{a,b,s,t=0}^{d-1}\left|h_{ab;st}\right|^{2},\] so from Eq. 8 we have \[\frac{\gamma}{2} =\frac{d^{2}}{(d^{2}-1)^{2}}\operatorname{tr}\bigl{(}H^{2}\bigr{)} -\frac{2d}{(d^{2}-1)^{2}}\operatorname{tr}\bigl{(}\operatorname{tr}_{1}(H)^{2} \bigr{)} \tag{113}\] \[=\frac{d^{4}}{(d^{2}-1)}\left(\sum_{a,b,s,t=0}^{d-1}\left|h_{ab; st}\right|^{2}-2\sum_{a,b=0}^{d-1}\left|h_{00;ab}\right|^{2}\right)\] \[=\frac{d^{4}}{(d^{2}-1)^{2}}\|h_{int}\|_{\text{F}}^{2},\] showing that only the interaction part of the Hamiltonian leads to transfer of information, as expected. ## Appendix E Evaluation of \(\Theta(t)\) for periodic boundary conditions With periodic boundary conditions, the non-interacting fermionic Hamiltonian introduced in Eq. (39) can be block diagonalized in terms of states with definite quasimomentum. Working in units where \(\gamma=1\), a standard computation gives \[H_{p}=-\frac{1}{2}\sum_{k\in\mathcal{K}_{p}}2(\cos k-g)a_{k}^{\dagger}a_{k}+(e^{ \mathrm{i}k}a_{k}^{\dagger}a_{-k}^{\dagger}+\mathrm{H.c.})+g. 
\tag{108}\] where \(a_{k}=\sum_{j}e^{-\mathrm{i}kj}a_{j}\) are annihilation operators for fermions with wavevector \(k\). The set of \(k\)-space points \(\mathcal{K}_{p}\) in the above sum depends on the fermion parity sector \(p\)[49]. Assuming an even number of sites \(N\), the odd parity sector \(p=1\) has periodic boundaries (PBCs) \(e^{ikN}=+1\) while the even parity sector \(p=0\) has antiperiodic boundaries (ABCs) \(e^{ikN}=-1\). Since the anomalous terms pair \(+k\) and \(-k\) modes, it is helpful to single out the positive \(k\)-modes \[\mathcal{K}_{p=1}^{+} =\{k=2\pi n/L,n=1,\ldots,(L/2)-1\} \tag{109}\] \[\mathcal{K}_{p=0}^{+} =\{k=(2n-1)\pi/L,n=1,\ldots,(L/2)\} \tag{110}\] Note that the \(p=1\) case has one fewer \(k\) value, which is resolved by accounting for modes \(k=0\) and \(k=\pi\), for which the anomalous terms vanish \[H_{k=0,\pi}=-(1-g)a_{0}^{\dagger}a_{0}+(1+g)a_{\pi}^{\dagger}a_{\pi}-g \tag{111}\] Overall, this allows us to write the Hamiltonian in matrix form \[H_{p}=-\sum_{k\in\mathcal{K}_{p}}\left(a_{k}^{\dagger}\;\;a_{-k}\right) \mathcal{H}_{k}\begin{pmatrix}a_{k}\\ a_{-k}^{\dagger}\end{pmatrix} \tag{112}\] We can write \(\mathcal{H}_{k}=\vec{h}(k)\cdot\vec{\sigma}\), where \(\vec{\sigma}\) is a 3-vector of Pauli matrices, and we have \[\vec{h}(k)=\begin{bmatrix}0\\ -\sin k\\ \cos k-g\end{bmatrix} \tag{113}\] When this \(k\)-space Hamiltonian \(\mathcal{H}_{k}\) is diagonalized, we obtain energies \[\epsilon_{k}=\pm\sqrt{1+g^{2}-2g\cos k}, \tag{114}\] consistent with the dispersion relation (56) quoted in the main text. As a reminder, the object we wish to calculate is \[\Theta=\frac{\mathrm{Tr}\!\left[\left|+\right\rangle\left\langle+\right|e^{-tH _{p=0}}\right]}{\mathrm{Tr}\!\left[\left|-\right\rangle\left\langle-\right|e^ {-tH_{p=1}}\right]} \tag{115}\] Now, we note that the state \(\left|+\right\rangle\) (\(\left|-\right\rangle\)) is the ground state of the Hamiltonian \(\mathcal{H}_{\mathrm{eff}}\) in the even (odd) parity sector, for the specific case \(g=0\). Both objects in each of the traces in Eq. (115) are \(k\)-diagonal, and so both the numerator and denominator of the above become a product over \(k\). Each factor of the product is given by an overlap between two states of a two-level system spanned by \(\left|\mathrm{VAC}\right\rangle\) and \(a_{k}^{\dagger}a_{-k}^{\dagger}\left|\mathrm{VAC}\right\rangle\), namely the overlap between the ground state with \(g=0\) and the thermal state with temperature \(t\) for nonzero \(g\). We should be careful to include the modes \(k=0,\pi\), that are present in the denominator, separately. Denoting \(\rho_{+,k}\) as the (pure) \(2\times 2\) density matrix for mode \(k\) in the state \(\left|+\right\rangle\), we have \(\rho_{+,k}=(I_{2}+\vec{n}_{+,k}\cdot\vec{\sigma})/2\), where \[\vec{n}_{+,k}=\begin{bmatrix}0\\ -\sin k\\ \cos k\end{bmatrix} \tag{116}\] Meanwhile, using the diagonal representation of \(\mathcal{H}_{k}\) we get \[e^{-t\mathcal{H}_{k}} =e^{+\epsilon_{k}t}\frac{1}{2}(I_{2}+\vec{n}_{H,k}\cdot\vec{ \sigma})+e^{-\epsilon_{k}t}\frac{1}{2}(I_{2}-\vec{n}_{H,k}\cdot\vec{\sigma})\] \[=2\cosh(\epsilon_{k}t)\left(\frac{I_{2}+\tanh(\epsilon_{k}t)\vec {n}_{H,k}\cdot\vec{\sigma}}{2}\right) \tag{117}\] where \[\vec{n}_{H,k}=\frac{1}{\epsilon_{k}}\vec{h}(k) \tag{118}\] is the 3-vector that specifies the ground state of the 2-level Hamiltonian \(\mathcal{H}_{k}\). 
Now, since \[\mathrm{Tr}\left[\left(\frac{I_{2}+\vec{n}_{1}\cdot\vec{\sigma}}{2}\right) \cdot\left(\frac{I_{2}+\vec{n}_{2}\cdot\vec{\sigma}}{2}\right)\right]=\frac{1+ \vec{n}_{1}\cdot\vec{n}_{2}}{2} \tag{119}\] we can compute the dot product \(\vec{n}_{+,k}\cdot\vec{n}_{H,k}=(\sin^{2}k+\cos k(\cos k-g))/\epsilon_{k}=(1-g \cos k)/\epsilon_{k}\), and the relevant factor for each \(k\) becomes \[\theta(k,t)=\cosh(\epsilon_{k}t)\left[1+\tanh(\epsilon_{k}t)\frac{1-g\cos k}{ \epsilon_{k}}\right] \tag{120}\] Finally, the relevant factor for the \(k=0,\pi\) modes comes from recognizing that in the state \(\left|-\right\rangle\), we have \(a_{0}^{\dagger}a_{0}=1\) and \(a_{\pi}^{\dagger}a_{\pi}=0\). Because \(H_{p=0}\) conserves these occupations, we get a factor of \(e^{t}\) in the numerator. Overall, we obtain the expression \[\Theta=e^{t}\frac{\prod_{k\in\mathcal{K}_{p=0}^{+}}\theta(k,t)}{\prod_{k\in \mathcal{K}_{p=1}^{+}}\theta(k,t)} \tag{121}\] which was quoted in the main text, Eq. (54). Now, to evaluate the products in the above, we define the function \[f(z)=\frac{\log\theta(z,t)}{\sin Nz} \tag{122}\] where \(z\) is now a complex variable which corresponds to the wavevector \(k\) on the real axis. If one considers the integral of \(f(z)\) around a thin contour \(\Gamma\) encircling the real interval \([-\pi,\pi]\), cutting the line at the edges at a straight angle, then using the residue theorem we can show that \[\log\Theta=\frac{N}{4\pi i}\oint_{\Gamma}dzf(z). \tag{108}\] By deforming the integration contour, the above can be re-expressed as \[\oint_{\Gamma}dzf(z)=\int_{-\infty}^{\infty}idxf(ix-\delta)-\int_{-\infty}^{ \infty}idxf(ix+\delta), \tag{109}\] which integrates on both sides of the branch cut chosen along the imaginary line. We do not include it explicitly but we keep in mind that the contour has a small gap such as to not cross the real line. Consider the rotated \(\theta\) function \(\theta_{r}(x)=-i\theta(ix)\) (we leave the \(t\)-dependence implicit), and equivalently for the energy function \(\epsilon_{r}\). We have \[\begin{split}\theta_{r}(x)&=\cos t\epsilon_{r}(x) -\sin t\epsilon_{r}\frac{g\cosh x-1}{\epsilon_{r}(x)}\\ &=\frac{\cos(t\epsilon_{r}(x)+\phi(x))}{\cos\phi(x)},\end{split} \tag{110}\] where we introduced \(\tan\phi(x)=(g\cosh(x)-1)/\epsilon_{r}(x)\). By symmetry we can focus on the integral above the real line and have \[\begin{split}\frac{1}{2}\oint dzf(z)&=i\int_{K}^{ \infty}dx\left[f(ix-\delta)-f(ix+\delta)\right]\\ &=\int_{K}^{\infty}\frac{dx}{\sinh(Nx)}\log\frac{\theta_{r}(x+i \delta)}{\theta_{r}(x-i\delta)},\end{split} \tag{111}\] where \(K=-\log g\) is the point where \(\epsilon_{r}(x)=0\). Consider the Weierstrass factorization of the cosine \[\cos z=\prod_{q\in\mathbb{Z},q\text{ odd}}\left(1-\frac{2z}{q\pi}\right)e^{2 z/q\pi}, \tag{112}\] and apply it to \(\theta_{r}\). The integral will exactly cancel for all terms that are analytic in the top half plane, so we only need to keep the factors that have a \(0\) above \(K\) on the imaginary line. The \(0\) at \(K\) is exactly cancelled by the denominator in the expression of \(\theta_{r}\). 
Then the integral is \[\begin{split}&\frac{1}{2}\int dzf(z)=\int_{K}^{\infty}\frac{dx}{ \sinh Nx}\\ &\times\sum_{q>0,q\text{ odd}}\log\frac{1-\frac{2}{q\pi}(t \epsilon_{r}(x+i\delta)+\phi(x+i\delta))}{1-\frac{2}{q\pi}(t\epsilon_{r}(x-i \delta)+\phi(x-i\delta))}.\end{split} \tag{113}\] Since the function \(t\epsilon_{r}(x)+\phi(x)\) is strictly increasing from \(-\pi/2\) to \(\infty\) as \(x\) goes from \(0\) to \(\infty\), each of the \(q\) terms will have exactly one node on the integration line. The value of the logarithm differs by exactly \(2\pi i\) starting from the node up to \(\infty\). If we denote by \(x_{q}\) the unique solution of the equation \(t\epsilon(x)+\phi(x)=q\pi/2\), the value of the integral is \[\begin{split}\frac{1}{2}\int dzf(z)&=-2\pi i\sum_{q> 0,q\text{ odd}}\int_{x_{q}}^{\infty}\frac{dx}{\sinh Nx}\\ &=\frac{2\pi i}{N}\sum_{q}\log\tanh\frac{x_{q}N}{2}.\end{split} \tag{114}\] Then we see that \[\log\Theta=\sum_{q\text{ odd}}\log\tanh\frac{Nx_{q}}{2} \tag{115}\] as claimed in the main text. ## Appendix F Product representation of \(\Theta\) for open boundary conditions To compute the determinants that appear in Eq. 53, we need an explicit representation for the eigenvector matrix \(\mathcal{O}\) of the grand dynamical matrix \(\mathcal{D}\). Due to the particle-hole symmetry, this can be written in the form \[\mathcal{O}=\begin{bmatrix}u&v\\ v&u\end{bmatrix},\ \mathcal{O}^{-1}=\mathcal{O}^{T}. \tag{116}\] If we let \(i\), \(j=\overline{1,N-1}\) and denote \(m=N/2\) (assuming even \(N\)) \[u=\begin{bmatrix}0&u_{0j}\\ u_{i0}&u_{ij}\end{bmatrix},\ v=\begin{bmatrix}v_{00}&v_{0j}\\ v_{i0}&v_{ij}\end{bmatrix}. \tag{117}\] Let \(y=i-m\) be the coordinate relative to the middle. Then the eigenvectors can be represented as \[u_{ij}=\left\{\begin{array}{ll}a_{j}\frac{\cos k_{j}y}{\cos k_{j}m},&\text{ for $j$ odd}\\ a_{j}\frac{\sin k_{j}y}{\sin k_{j}m}&\text{for $j$ even}\end{array}\right., \tag{118}\] \[u_{0j}=\left\{\begin{array}{ll}2a_{j},&\text{for $j$ odd}\\ 0,&\text{for $j$ even}\end{array}\right., \tag{119}\] \[v_{ij}=\left\{\begin{array}{ll}a_{j}\frac{\sin k_{j}y}{\sin k_{j}m},&\text{ for $j$ odd}\\ a_{j}\frac{\cos k_{j}y}{\cos k_{j}m},&\text{for $j$ even}\end{array}\right., \tag{120}\] \[v_{0j}=\left\{\begin{array}{ll}0,&\text{for $j$ odd}\\ 2a_{j},&\text{for $j$ even}\end{array}\right., \tag{121}\] \[v_{i0}=a_{0}\frac{\cos k_{0}y}{\cos k_{0}m},\ u_{i0}=a_{0}\frac{\sin k_{0}y}{ \sin k_{0}m}, \tag{122}\] \[v_{00}=2a_{0}. \tag{123}\] The normalization constants \(a_{j}\) are chosen such that for all \(j\) (and the additional \(j=0\) mode) \[4a_{j}^{2}+\sum_{i}\left(u_{ij}^{2}+v_{ij}^{2}\right)=1, \tag{111}\] although their explicit values are not important in our calculation. The wavevectors \(k_{j}\) are in \([0,\pi)\) (except for \(k_{0}\), which becomes imaginary at the transition) and are the solutions of the equation \[\tan k_{j}N=\frac{g\sin k_{j}}{1-g\cos k_{j}}. \tag{112}\] The corresponding eigenvalues are given by the well-known dispersion relation for the TFIM \[\lambda^{2}=1+g^{2}-2g\cos k. \tag{113}\] Since the matrices \(u\) and \(v\) are split into even and odd sectors one can show there must be some linear dependence, such that \(\det u=\det v=0\). This problem disappears when looking at the \(i,j\geq 1\) sector, such that the inverses \(u_{ij}^{-1}\) and \(v_{ij}^{-1}\) exist. We can then proceed to calculate the determinants in the expression for \(\Theta\). 
We saw that at long enough times, we can make the approximation \[\Gamma[-t\mathcal{D}]\approx\mathcal{O}\begin{bmatrix}0&0&0&0\\ 0&-I_{N-1}&0&0\\ 0&0&0&0\\ 0&0&0&I_{N-1}\end{bmatrix}\mathcal{O}^{-1}. \tag{114}\] Using the formula for \(\Theta\) in Eq. 53 and the definition of the overlap as a determinant in Eq. 48 we see that \[\Theta=\sqrt{\left|\frac{\det(1+\Gamma[-t\mathcal{D}]\Gamma_{GS})}{\det(1+ \Gamma[-t\mathcal{D}]\Gamma_{E})}\right|}. \tag{115}\] Inserting the expression for \(\Gamma[-t\mathcal{D}]\) at long times from the previous line and performing standard determinant manipulations one arrives at the simpler form \[\Theta=\frac{\left|\begin{matrix}v_{00}&u_{0j}\\ v_{i0}&u_{ij}\end{matrix}\right|}{\left|\begin{matrix}v_{00}&v_{0j}\\ u_{i0}&u_{ij}\end{matrix}\right|}. \tag{116}\] By introducing the expressions we have for the eigenvectors and rearranging the rows and columns of the determinants we arrive at the expression \[\Theta=\frac{2\tan mk_{0}\det\overline{C}\det S}{\det\overline{S}\det C}, \tag{117}\] where \[\overline{C}=\begin{bmatrix}1&1&1&\ldots&1\\ \cos k_{0}&\cos k_{1}&\cos k_{3}&\ldots&\cos k_{N-1}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \cos mk_{0}&\cos mk_{1}&\cos mk_{3}&\ldots&\cos mk_{N-1}\end{bmatrix}, \tag{118}\] \[S=\begin{bmatrix}\sin k_{2}&\sin k_{4}&\ldots&\sin k_{N-2}\\ \sin 2k_{2}&\sin 2k_{4}&\ldots&\sin 2k_{N-2}\\ \vdots&\vdots&\ddots&\vdots\\ \sin(m-1)k_{2}&\sin(m-1)k_{4}&\ldots&\sin(m-1)k_{N-2}\end{bmatrix}, \tag{119}\] \[C=\begin{bmatrix}1&1&\ldots&1\\ \cos k_{1}&\cos k_{3}&\ldots&\cos k_{N-1}\\ \vdots&\vdots&\ddots&\vdots\\ \cos(m-1)k_{1}&\cos(m-1)k_{3}&\ldots&\cos(m-1)k_{N-1}\end{bmatrix}, \tag{120}\] \[\overline{S}=\begin{bmatrix}\sin k_{0}&\sin k_{2}&\sin k_{4}&\ldots&\sin k_{N -2}\\ \sin 2k_{0}&\sin 2k_{2}&\sin 2k_{4}&\ldots&\sin 2k_{N-2}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \sin mk_{0}&\sin mk_{2}&\sin mk_{4}&\ldots&\sin mk_{N-2}\end{bmatrix}. \tag{121}\] If we add the rows together to form Chebyshev polynomials, the above can be transformed into Vandermonde matrices with determinants given by the well-known results: \[\det\overline{C}= 2^{m^{2}}\prod_{\begin{subarray}{c}i<j,\\ 0\text{ or odd}\end{subarray}}\sin\!\left(\frac{k_{i}-k_{j}}{2}\right)\sin\! \left(\frac{k_{i}+k_{j}}{2}\right)\!, \tag{122}\] \[\det C= 2^{(m-1)^{2}}\prod_{\begin{subarray}{c}i<j,\\ 0\text{ or even}\end{subarray}}\sin\!\left(\frac{k_{i}-k_{j}}{2}\right)\sin\! \left(\frac{k_{i}+k_{j}}{2}\right)\!,\] (123) \[\det S= 2^{(m-1)^{2}}\prod_{\begin{subarray}{c}i,\\ \text{even}\end{subarray}}\sin k_{i}\] (124) \[\times\prod_{\begin{subarray}{c}i<j,\\ 0\text{ or even}\end{subarray}}\sin\!\left(\frac{k_{i}-k_{j}}{2}\right)\sin\! \left(\frac{k_{i}+k_{j}}{2}\right)\!.\] We can now cancel out the common factors in the numerator and the denominator to obtain \[\Theta=2\frac{\tan mk_{0}\prod_{l\text{ odd}}\sin\!\left(\frac{k_{0}-k_{l}}{2} \right)\sin\!\left(\frac{k_{0}+k_{l}}{2}\right)}{\sin k_{0}\prod_{l\text{ even}}\sin\!\left(\frac{k_{0}-k_{l}}{2}\right)\sin\!\left(\frac{k_{0}+k_{l}}{2} \right)}, \tag{125}\] which becomes the same as Eq. 68 in the main text with the rewriting of the complex wavenumber \(K=ik_{0}\). ## Appendix G Large-\(N\) asymptotic expression of entropy In this appendix, we will derive the large-\(N\) asymptotic behavior of the exact product expansion of \(\Theta\) in Eq. 68 of the main text. 
We start by introducing the auxiliary complex functions \[\begin{split}& f(z)=\frac{g\sin z}{1-g\cos z},\\ &\phi(z)=\arctan f(z)=\frac{1}{2i}\log\frac{1-ge^{-iz}}{1-ge^{ iz}}.\end{split} \tag{108}\] In terms of these, we can write the quantization equation as \[\begin{split}&\sin(kN-\phi(k))=0,\\ & k_{l}=\frac{\pi l}{N}+\frac{\phi(k_{l})}{N},\end{split} \tag{109}\] so that by symmetry \(k_{0}=0\) and \(k_{-l}=-k_{l}\). Then we consider the complex function \[g(z)=\frac{(N-\phi^{\prime}(z))\log\bigl{(}\sin\bigl{(}\frac{iK-z}{2}\bigr{)} \bigr{)}}{\sin(zN-\phi(z))}, \tag{110}\] where \[\phi^{\prime}(z)=\frac{g\cos z-g^{2}}{1+g^{2}-2g\cos z}. \tag{111}\] This function is not analytic at the simple poles \(z=k_{l}\) and \(z=\pm i\log g\) and at the branch points \(z=\pm iK\). Note that \(K\) becomes exponentially close to \(-\log g\) in the limit of \(KN\to\infty\). If we integrate the function around a contour that encircles (but remains close to) the real axis interval \([-\pi,\pi]\) and apply the residue theorem we obtain \[\begin{split}&\frac{1}{2\pi i}\oint_{C}g(z)dz=\log\biggl{(}\frac{ \sin iK}{2}\biggr{)}+\\ &\sum_{l=1}^{N-1}(-1)^{l}\log\biggl{(}\sin\biggl{(}\frac{iK-k_{l} }{2}\biggr{)}\sin\biggl{(}\frac{iK+k_{l}}{2}\biggr{)}\biggr{)},\end{split} \tag{112}\] where we note that only half the residue at the edge poles is included. Comparing this to the product expansion of \(\Theta\) we find that \[\log\Theta=\log\tanh\frac{NK}{2}-\frac{1}{2\pi i}\oint_{C}g(z)dz. \tag{113}\] We are free to deform the integration contour without changing the result of the integral, so long as we do not cross any singularity. The integrals over the sides of the strip are equal to \[\begin{split}&\int_{-\infty}^{\infty}g(\pi+ix)idx+\int_{\infty}^{ -\infty}g(-\pi+ix)idx=\\ & 2\pi i\int_{-\infty}^{\infty}dx\frac{N+\frac{g\cosh x+g^{2}}{1+g^{2}+2 g\cosh x}}{(e^{xN}\sqrt{\frac{1+ge^{x}}{1+ge^{-x}}}-e^{-xN}\sqrt{\frac{1+ge^{-x}}{1+ ge^{x}}}},\end{split} \tag{114}\] which cancels because the integrand is an odd function of \(x\). We can express the function \(g(z)\) explicitly as \[\begin{split}& g(z)=\\ -& 2ie^{izN}\sqrt{\frac{1-ge^{iz}}{1-ge^{-iz}}}\frac{N- \frac{g\cos z-g^{2}}{(1-ge^{iz})(1-ge^{-iz})}}{1-e^{2izN}\frac{1-ge^{iz}}{1-ge^ {-iz}}}\log\sin\frac{iK-z}{2}\end{split} \tag{115}\] We will focus on the contour integration around the singularity in the top half of the complex plane \(z\sim iK\). The integration on the bottom half follows by analogy, but we find that it has no contribution to the order of approximation we are interested in, so we simply discard it. If we make the change of coordinate \(z\to iz^{\prime}=z-i\log\frac{1}{g}\) and consider points sufficiently far from the origin that \[\begin{split}\frac{|z^{\prime}|}{\log\frac{1}{g}}\gg e^{-2N \log\frac{1}{g}},\end{split} \tag{116}\] along with the additional approximation that \(-N\log g\gg 1\), then \(g^{\prime}(z^{\prime})=g(z)\) is simplified to \[\begin{split}& g^{\prime}(z^{\prime})=\\ -& 2e^{-KN-z^{\prime}N}\sqrt{\frac{1-g^{2}e^{-z^{ \prime}}}{z^{\prime}}}\left(N+\frac{1-g^{2}}{2(1-g^{2}e^{-z^{\prime}})z^{\prime }}\right)\\ &\times\log\sin\frac{z^{\prime}}{2}.\end{split} \tag{117}\] In this coordinate, the integration is performed along a dumbbell contour, going along the branch cut on the positive half of the real axis and encircling the singularity at the origin. The radius \(\epsilon\) is always understood to obey the condition in Eq. 116. 
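As a short sanity check (not part of the derivation), the arctangent and logarithmic forms of \(\phi(z)\) given above agree on the principal branch whenever \(1-g\cos z>0\), for instance for real \(z\) and \(0<g<1\):

```python
# Check that arctan(g sin z / (1 - g cos z)) = (1/2i) log[(1 - g e^{-iz}) / (1 - g e^{iz})]
# on the principal branch (real z, 0 < g < 1).
import numpy as np

g = 0.7
z = np.linspace(-3.0, 3.0, 7)
lhs = np.arctan(g * np.sin(z) / (1.0 - g * np.cos(z)))
rhs = np.log((1 - g * np.exp(-1j * z)) / (1 - g * np.exp(1j * z))) / 2j
print(np.max(np.abs(lhs - rhs.real)), np.max(np.abs(rhs.imag)))   # both at machine precision
```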
Since the function decays very fast along the positive real axis for large \(N\), we can make a further approximation on our contour \[g^{\prime}(z^{\prime})=-2e^{-KN-z^{\prime}N}\sqrt{\frac{1-g^{2}}{z^{\prime}}} \left(N+\frac{1}{2z^{\prime}}\right)\log\sin\frac{z^{\prime}}{2}, \tag{118}\] where the error we incur is of order \(\mathcal{O}(\frac{1}{KN})\) in the integral. We can then express the integral contribution to \(\log\Theta\) as \[\begin{split}&\frac{1}{2\pi}\int_{C}g^{\prime}(z^{\prime})dz^{ \prime}=\\ &\frac{1}{\pi}e^{-NK}\sqrt{1-g^{2}}\int_{C}\frac{dz}{\sqrt{z}}e^ {-Nz}(N+\frac{1}{2z})\log\frac{z}{2}.\end{split} \tag{119}\] The circular region has a contribution of \[\begin{split}\int_{l_{0}}\frac{dz}{\sqrt{z}}e^{-Nz}(N+\frac{1}{2 z})\log\frac{z}{2}&=-\frac{2}{\sqrt{\epsilon}}\log\frac{\epsilon}{2}- \frac{4}{\sqrt{\epsilon}}\\ &=-2\sqrt{\frac{N}{\epsilon^{\prime}}}(\log\frac{\epsilon^{ \prime}}{2N}+2).\end{split} \tag{120}\] The 2 linear portions contribute \[\begin{split}&\int_{l_{+}+l_{-}}\frac{dz}{\sqrt{z}}e^{-Nz}(N+\frac{ 1}{2z})\log\frac{z}{2}=\\ & 2\int_{\epsilon}^{\infty}\frac{dx}{\sqrt{x}}e^{-Nx}(N+\frac{1}{2x}) \log\frac{x}{2}=\\ & 2\sqrt{N}\int_{\epsilon^{\prime}}^{\infty}\frac{dy}{\sqrt{y}}e^{-y}( 1+\frac{1}{2y})\log\frac{y}{2N}.\end{split} \tag{121}\] The integrals appearing in the formula above have the following values \[\int_{0}^{\infty}\frac{dy}{\sqrt{y}}e^{-y} =\sqrt{\pi}, \tag{101}\] \[\int_{0}^{\infty}\frac{dy}{\sqrt{y}}e^{-y}\log y =-\sqrt{\pi}(\gamma+\log 4),\] (102) \[\int_{\epsilon^{\prime}}^{\infty}\frac{dy}{y^{\frac{3}{2}}}e^{-y} =\frac{2}{\sqrt{\epsilon^{\prime}}}-2\sqrt{\pi}+\mathcal{O}(\sqrt{ \epsilon^{\prime}}),\] (103) \[\int_{\epsilon^{\prime}}^{\infty}\frac{dy}{y^{\frac{3}{2}}}(e^{-y }-1)\log y =-2\sqrt{\pi}\psi(-\frac{1}{2})+\mathcal{O}(\sqrt{\epsilon^{\prime}}),\] (104) \[\int_{\epsilon^{\prime}}^{\infty}\frac{dy}{y^{\frac{3}{2}}}\log y =\frac{2(\log\epsilon^{\prime}+2)}{\sqrt{\epsilon^{\prime}}}. \tag{105}\] If we put all pieces together we arrive at \[\int_{C}\frac{dz}{\sqrt{z}}e^{-Nz}(N+\frac{1}{2z})\log\frac{z}{2}=-4\sqrt{\pi N}, \tag{106}\] so we have that \[\frac{1}{2\pi i}\oint_{C}g(z)dz=-4\sqrt{\frac{N}{\pi}}e^{-NK}\sqrt{1-g^{2}}(1+ \mathcal{O}(\frac{1}{NK})). \tag{107}\] If we introduce this into Eq. 102 we get \[\log\Theta=2e^{-NK}(2\sqrt{\frac{N}{\pi}}\sqrt{1-g^{2}}-1), \tag{108}\] which we introduce in the defining equation of the entropy to get the result provided in the main text.
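The first two tabulated integrals can be checked numerically, for instance with scipy (\(\gamma\) is the Euler-Mascheroni constant); this is only a sanity check and plays no role in the derivation.

```python
# Numerical check of the first two standard integrals quoted above.
import numpy as np
from scipy.integrate import quad

I1, _ = quad(lambda y: np.exp(-y) / np.sqrt(y), 0, np.inf)
I2, _ = quad(lambda y: np.exp(-y) * np.log(y) / np.sqrt(y), 0, np.inf)
print(I1, np.sqrt(np.pi))                                    # sqrt(pi)
print(I2, -np.sqrt(np.pi) * (np.euler_gamma + np.log(4)))    # -sqrt(pi) (gamma + log 4)
```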
2309.02139
Self-Supervised Pre-Training Boosts Semantic Scene Segmentation on LiDAR Data
Airborne LiDAR systems have the capability to capture the Earth's surface by generating extensive point cloud data comprised of points mainly defined by 3D coordinates. However, labeling such points for supervised learning tasks is time-consuming. As a result, there is a need to investigate techniques that can learn from unlabeled data to significantly reduce the number of annotated samples. In this work, we propose to train a self-supervised encoder with Barlow Twins and use it as a pre-trained network in the task of semantic scene segmentation. The experimental results demonstrate that our unsupervised pre-training boosts performance once fine-tuned on the supervised task, especially for under-represented categories.
Mariona Carós, Ariadna Just, Santi Seguí, Jordi Vitrià
2023-09-05T11:29:30Z
http://arxiv.org/abs/2309.02139v2
# Self-Supervised Pre-Training Boosts ###### Abstract Airborne LiDAR systems have the capability to capture the Earth's surface by generating extensive point cloud data comprised of points mainly defined by 3D coordinates. However, labeling such points for supervised learning tasks is time-consuming. As a result, there is a need to investigate techniques that can learn from unlabeled data to significantly reduce the number of annotated samples. In this work, we propose to train a self-supervised encoder with Barlow Twins and use it as a pre-trained network in the task of semantic scene segmentation. The experimental results demonstrate that our unsupervised pre-training boosts performance once fine-tuned on the supervised task, especially for under-represented categories. ## 1 Introduction Airborne LiDAR (Light Detection And Ranging) is a remote sensing technology that employs near-infrared light to produce highly accurate three-dimensional (3D) representations of the Earth's surface, as exemplified in the header image. The number of points in a scene acquired by a LiDAR sensor is usually immense, typically comprising millions of points. While labeling can be automated for some elements, such as ground and planar surfaces like buildings, some objects require manual annotation due to their varied shapes and relatively low representation, often comprising less than 1% of the points in the data. Consequently, labeling point cloud data is a task that requires a significant amount of time and effort which results in a lack of large annotated 3D datasets. In this work, we aim to use self-supervised learning (SSL) on unlabeled point clouds, inspired by the success of self-supervised methods in natural language processing [1, 2, 3] and computer vision [4, 5, 6, 7], to obtain meaningful representations for a semantic scene segmentation task. Point cloud semantic segmentation, also referred to as point classification, involves the prediction of a categorical label for every point within a given point cloud. This task is especially challenging due to the scattered and irregular nature of aerial LiDAR data, which comprises an extensive number of points. Several architectures [8, 9, 10, 11] have been implemented to process point cloud data, including point-based networks, graph-based networks, voxel-based networks, and multi-view networks. Since LiDAR sensors acquire data in the form of 3D points, our focus is on exploring the efficacy of point-based networks for this task. The pioneering work to directly process point cloud data was PointNet [8]. Qi et al. [12] extended the capabilities of PointNet by incorporating local geometric information through a hierarchical neural network, which resulted in PointNet++. Inspired by the mentioned networks, recent studies [13, 14, 15] focus on redefining sampling and augmenting features using knowledge from other fields to improve its performance. Self-supervised pre-training has emerged as a promising technique for supervised tasks like image segmentation and classification in situations where access to annotations is limited. A successful strategy involves learning embeddings that remain invariant to input data distortions by maximizing the similarity of representations subject to different conditions [16, 17]. In this context, methods differ in the similarity function used, whether the encoders for input samples are the same or different, and the type of transformations utilized. 
Notable examples are contrastive methods, such as SimCLR [16], clustering approaches [5, 18], and Siamese networks [17]. Our approach is based on Barlow Twins [19] which minimizes redundancy via cross-correlation between outputs of two identical networks fed with distorted versions of a sample. While solving this task, representations that capture semantic prop erties of the point cloud are learned. Barlow Twins does not fall under the categories of either contrastive learning or clustering methods. Its design provides several benefits, such as not requiring large batches [16], asymmetric mechanisms [6], or stop-gradients [17]. Recent advances in SSL for 2D data have motivated research in applying similar techniques to 3D processing. For instance, PointContrast [20] leverages multi-view depth scans with point correspondences for high-level scene understanding tasks. However, this method is limited to static scenes that have been registered with two views. Other works [21, 22, 23, 24] directly feed point cloud data into the network for SSL, although most of them focus on single 3D object representation for reconstruction, classification, or part segmentation. Few studies include scene representations [20, 25], and these mainly focus on indoor and driving scenes provided by terrestrial laser scanners. In order to improve performance across real-world tasks through SSL, exploring strategies on single objects may present limited potential. Hence, we propose pre-training the network on complex scenes obtained by LiDAR to better match the target distributions. To the best of our knowledge, this study is novel in utilizing a self-supervised method such as Barlow Twins to pre-train a neural network for the task of scene segmentation using airborne LiDAR data. The code of this study is publicly available at github.com/marionacaros/barlow-twins-for-sem-seg. Our contributions can be summarized as follows: * We propose a methodology for pre-training a 3D semantic scene segmentation model by using SSL. * We show that SSL can be used with sparse outdoor LiDAR data, even if the dataset is highly imbalanced. * We experiment with PointNet and PointNet++ and show a significant performance improvement in semantic scene segmentation over under-represented categories within different datasets. ## 2 Method We introduce our approach for semantic scene segmentation given a small portion of labeled data. The methodology consists in training a self-supervised point-based network on point clouds and using it as initialization for the supervised task. The SSL method is illustrated in Fig 1. Based on Barlow Twins architecture [19], it applies redundancy-reduction by using a joint embedding of distorted point cloud views, which learns powerful representations from LiDAR data. ### Self-Supervised Training Given a dataset of partially labeled point clouds, we wish to effectively utilize all the available data to train a neural network that can accurately perform semantic scene segmentation. We begin by training a supervised network for this task on the labeled data. Specifically, we use the point-based architectures PointNet [8] and PointNet++ [12] due to their simplicity and efficiency. Next, we split the dataset into \(N\) batches \(D=\{X_{i}\}_{i=1}^{N}\). To each batch \(X_{i}\), we apply a data augmentation to obtain two distorted versions. Let \(t_{A},t_{B}\in\mathcal{T}\), be randomly sampled augmentations from a set of transformations. 
Consequently, we can define the two batches of distorted point clouds as \(Y^{A}=t_{A}(X)\) and \(Y^{B}=t_{B}(X)\). \(Y^{A}\) and \(Y^{B}\) are fed to the encoder network, which is used as initialization for Barlow Twins method. Then, a projection head is used after the encoder to map representations to the space where the objective function is applied. Finally, the network output are batches of embeddings \(Z^{A}\) and \(Z^{B}\) which are used by the Barlow Twins objective function to measure its cross-correlation matrix. The objective function is defined as follows: \[Loss_{BT}=\sum_{i}(1-C_{ii})^{2}+\lambda\sum_{i}\sum_{j\neq i}C_{ij}^{2} \tag{1}\] where \(\lambda\) is a positive constant weighing the components of the loss, and where \(C\) is the cross-correlation matrix computed between the embeddings \(Z^{A}\) and \(Z^{B}\). The cost function is composed of two terms. The first term aims to make the embedding invariant to the applied distortion by trying to equate the diagonal elements of the cross-correlation matrix to 1. While the second term tries to equate the off-diagonal elements of the cross-correlation matrix to 0, reducing the redundancy between the output units. Once the encoder is pre-trained on unlabeled training data, we train it for semantic scene segmentation. Figure 1: Barlow Twins method consists of: (i) Producing two distorted views for all point clouds of a batch. (ii) Feeding batches to two identical deep encoder networks producing embeddings. (iii) By using the objective function, the cross-correlation between features is optimized to be the identity. ### Point Cloud Distortions Each input point cloud is transformed twice to produce two distorted views, an example of them is shown in Fig 2. We first apply typical data augmentation strategies for point clouds which are: random downsampling, up-sampling by point duplication, and random rotation in \(xy\) axis. Then, we add a new transformation which involves moving a percentage of points by modifying their coordinates with random values, while preserving other attributes. The goal of this transformation is to introduce noise into the point cloud while maintaining its dimension and preserving the shape of objects within it. To achieve this, points are randomly chosen with probabilities ranging from 2% to 5% in the first version, and a fixed probability of 10% in the second version. We consider that higher percentages may excessively alter the shape of certain objects. ### Architecture Self-supervised learningWe use Barlow Twins which is comprised of two networks: an encoder and a projector. The encoder is implemented as a point-based network, wherein the final layers responsible for classification are omitted. The projector is constructed using two linear layers with 512 and 128 units. The first linear layer is followed by a batch normalization layer and a Rectified Linear Unit (ReLU) activation function. The output of the encoder is used for transfer tasks whose dimension depends on the size of the encoder network. The output of the projector is an embedding of 128 which is fed to the loss function for optimization of Barlow Twins. 3D Semantic segmentationWe experiment with PointNet and PointNet++ as our supervised methods for scene segmentation. PointNet uses multi-layer perceptrons (MLPs) to learn local features corresponding to each point and a symmetric function to obtain a representation of the point cloud. 
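To make the pipeline concrete, a minimal PyTorch sketch of one self-supervised step is given below: two distorted views of a batch of clouds (Section 2.2) are encoded, projected, and scored with the objective of Eq. (1). The rotation range, the jitter magnitude, the value of \(\lambda\) and the per-feature normalization of the embeddings are assumptions (the latter two follow the original Barlow Twins recipe), and the encoder is a small stand-in rather than the actual PointNet/PointNet++ backbone.

```python
# Minimal PyTorch sketch of one Barlow Twins step on point clouds (stand-in encoder,
# assumed jitter magnitude, rotation range and lambda; batch size 150 in the paper).
import torch
import torch.nn as nn

def distort(pts, n_out, move_frac, jitter=0.5):
    """pts: (B, N, D) with xyz in the first three channels."""
    B, N, D = pts.shape
    idx = torch.randint(0, N, (B, n_out)) if n_out > N else \
          torch.stack([torch.randperm(N)[:n_out] for _ in range(B)])      # duplicate or down-sample
    out = torch.gather(pts, 1, idx.unsqueeze(-1).expand(-1, -1, D)).clone()
    theta = torch.rand(B) * 2 * torch.pi                                   # random rotation in the xy plane
    c, s = torch.cos(theta), torch.sin(theta)
    R = torch.stack([torch.stack([c, -s], -1), torch.stack([s, c], -1)], -2)
    out[..., :2] = torch.einsum('bij,bnj->bni', R, out[..., :2])
    moved = torch.rand(B, n_out) < move_frac                               # move a fraction of the points
    out[..., :3] += moved.unsqueeze(-1) * jitter * torch.randn(B, n_out, 3)
    return out

def barlow_twins_loss(z_a, z_b, lambd=5e-3):
    n = z_a.shape[0]
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)                        # normalize features over the batch
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    c = z_a.T @ z_b / n                                                    # cross-correlation matrix C
    on_diag = (1.0 - torch.diagonal(c)).pow(2).sum()                       # invariance term of Eq. (1)
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()            # redundancy-reduction term
    return on_diag + lambd * off_diag

encoder = nn.Sequential(nn.Linear(9, 128), nn.ReLU(), nn.Linear(128, 256))          # stand-in encoder
projector = nn.Sequential(nn.Linear(256, 512), nn.BatchNorm1d(512), nn.ReLU(), nn.Linear(512, 128))

x = torch.rand(16, 6000, 9)                                # a (small) batch of raw clouds
y_a = distort(x, 4096, move_frac=0.03)                     # 2-5% of points moved in this view
y_b = distort(x, 4096, move_frac=0.10)                     # fixed 10% in the other view
z_a = projector(encoder(y_a).max(dim=1).values)            # max-pool points into a cloud embedding
z_b = projector(encoder(y_b).max(dim=1).values)
loss = barlow_twins_loss(z_a, z_b)
```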
PointNet encoder applies input and feature transformations by using a transformation net [26], and then aggregates point features by max pooling producing a global point cloud embedding. The segmentation network concatenates global and local features to produce per-point scores. The details of the architecture can be found in the original work [8]. PointNet++[12] is a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set, which enables capturing local structures in point clouds. Both networks take 4096 points as input. ## 3 Experimental Setup ### Dataset We evaluate the performance of our method on two airborne LiDAR datasets: The DALES benchmark dataset [27], which is a publicly available dataset collected from an aerial laser scanner (ALS) in the city of Dayton; and a private dataset named LiDAR-CAT3, which we describe in greater detail in this section. The LiDAR-CAT3 dataset was collected by a Terrain Mapper 2 system, which combines a LiDAR sensor with two nadir cameras in RGB and NIR (Near InfraRed). The obtained average point density of first returns is 10 ppm. We use 8 of the attributes provided by the system: coordinates \((x,y,z)\), intensity, three color channels (R, G, B), and NIR channel. Additionally, we compute the NDVI (Normalized Difference Vegetation Index) [28]. The study area is composed of hilly and densely forested zones in Spain, where the orthometric heights were normalized to height above ground to account for changes in altitude due to mountainous terrain. Then, ground points (\(z=0\)) and outliers (\(z>100\) m) were filtered. Ground filtering is a common practice in ALS data processing [29], as the number of ground points in a scene often surpasses that of object points. Finally, we divided the area into point clouds of 40 m \(\times\) 40 m \(\times\) 50 m to be used as input samples to our networks. The classified categories with their relative percentage within labeled data are: high vegetation (74.88%), low vegetation (23.06%), roofs (2.01%), pylon (0.02%), wires (0.02%), and other buildings (0.01%). ### Pre-processing In consideration of the highly dense vegetation areas within LiDAR-CAT3 dataset, we employed SemDeDup [30] for reducing redundancy prior to training our SSL network. This approach involves removing redundant samples from the dataset by using pre-trained model embeddings to identify data samples that are semantically similar. Specifically, we utilized the embeddings generated by PointNet, previously trained on labeled data. The similarity threshold was set to 0.996, which reduced our unlabeled training dataset by 60%, increasing the proportion of under-represented categories within the dataset. We found this step essential for effectively learning representations of minority classes. Figure 2: Augmented point clouds. ### Implementation Details We train1 our Barlow Twins network for 300 epochs with a batch size of 150. We use a learning rate of \(10^{-4}\) and a linear warm-up schedule period of 10 epochs. For optimization of the supervised point-based networks, we set the initial learning rate to \(10^{-3}\), with a decay rate of 0.5 every 50 epochs. The optimizer used is Adam. Supervised networks are trained for 100 epochs with an early stopping on the validation loss. As for the batch size, it is set to 32. The loss is the weighted cross-entropy with double weight on low-represented categories (\(<\)1% data points). 
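A sketch of the supervised fine-tuning setup just described (Adam, initial learning rate \(10^{-3}\) halved every 50 epochs, weighted cross-entropy with doubled weight on the three categories holding less than 1% of the points) might look as follows; the per-point classifier, the class-index order and the synthetic batch are placeholders, not the actual pre-trained PointNet/PointNet++ models or data.

```python
# Sketch of the supervised fine-tuning configuration (placeholders for model and data).
import torch
import torch.nn as nn

n_classes = 6
weights = torch.ones(n_classes)
weights[torch.tensor([3, 4, 5])] = 2.0                 # pylon, wires, other buildings (assumed index order)
criterion = nn.CrossEntropyLoss(weight=weights)

# Stand-in per-point classifier; in the paper this is PointNet/PointNet++ initialized from the SSL encoder.
model = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, n_classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

pts = torch.randn(32, 4096, 9)                         # one synthetic batch: 32 clouds x 4096 points x 9 features
labels = torch.randint(0, n_classes, (32, 4096))
for epoch in range(2):                                 # 100 epochs with early stopping in the paper
    logits = model(pts)                                # (32, 4096, n_classes) per-point scores
    loss = criterion(logits.reshape(-1, n_classes), labels.reshape(-1))
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    scheduler.step()
```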
Footnote 1: GPU: NVIDIA RTX A6000 - 48 GB ## 4 Experiments and Results In this section, we perform several evaluations of the effectiveness of our method on DALES and LiDAR-CAT3 datasets for the task of 3D semantic segmentation and show that it learns meaningful representations. The results of the experiments are presented in Fig 3, Fig 4, and Tables 1 and 2. As evaluation metrics we use intersection-over-union per category (IoU), mean IoU (mIoU), and overall accuracy (OA). In Fig 4, we show our experiment varying the fraction of labeled data up to 50%. Remarkably, our pre-trained models outperform PointNet trained from scratch, especially when fine-tuning on fewer training samples. In Table 1, models were trained on 12% of labeled data from our experimental dataset LiDAR-CAT3. We can see that pre-trained PointNet (SSL + PN) improved mIoU by 3.1 absolute points which is an increase of 6% over the baseline. However, the highest mIoU score is achieved by the pre-trained PointNet++ with a mIoU score of 62.1%. Notably, our method performed particularly well in under-represented categories such as wires and roofs, increasing their IoU score by 26% and 9% respectively when using SSL+PN, and 7.6% and 4.1% when using PointNet++(SSL + PN++). To prove that our method generalizes to other datasets, we test our method on DALES. Table2 presents the results for categories with IoU \(>1\%\). We see that SSL pre-training in a low-data regime (10% labeled data) improves performance in all categories.
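For completeness, the reported metrics can be computed from flat arrays of per-point predictions and labels as in the short helper below (a generic implementation, not code from the paper).

```python
# Per-class IoU, mean IoU and overall accuracy from per-point predictions.
import numpy as np

def segmentation_metrics(pred, target, n_classes):
    pred, target = np.asarray(pred).ravel(), np.asarray(target).ravel()
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        ious.append(inter / union if union > 0 else np.nan)
    return np.array(ious), np.nanmean(ious), np.mean(pred == target)

ious, miou, oa = segmentation_metrics(np.random.randint(0, 6, 100000),
                                      np.random.randint(0, 6, 100000), n_classes=6)
print(ious, miou, oa)
```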
2305.15948
Vafa-Witten Theory: Invariants, Floer Homologies, Higgs Bundles, a Geometric Langlands Correspondence, and Categorification (String Math 2022 Proceedings)
This is a concise version of the original article in [arXiv:2203.17115] that will be published in the String Math 2022 Proceedings by the American Mathematical Society.
Meng-Chwan Tan
2023-05-25T11:44:29Z
http://arxiv.org/abs/2305.15948v3
# Vafa-Witten Theory: Invariants, Floer Homologies, ###### Abstract. We revisit Vafa-Witten theory in the more general setting whereby the underlying moduli space is not that of instantons, but of the full Vafa-Witten equations. We physically derive (i) a novel Vafa-Witten four-manifold invariant associated with this moduli space, (ii) their relation to Gromov-Witten invariants, (iii) a novel Vafa-Witten Floer homology assigned to three-manifold boundaries, (iv) a novel Vafa-Witten Atiyah-Floer correspondence, (v) a proof and generalization of a conjecture by Abouzaid-Manolescu about the hypercohomology of a perverse sheaf of vanishing cycles, (vi) a Langlands duality of these invariants, Floer homologies and hypercohomology, and (vii) a quantum geometric Langlands correspondence with purely imaginary parameter that specializes to the classical correspondence in the zero-coupling limit, where Higgs bundles feature in (ii), (iv), (vi) and (vii). We also explain how these invariants and homologies will be categorified in the process, and discuss their higher categorification. We thereby relate differential and enumerative geometry, topology and geometric representation theory in mathematics, via a maximally-supersymmetric topological quantum field theory with electric-magnetic duality in physics. 2010 Mathematics Subject Classification: 57R56. This proceeding is based on joint work with Z.-C. Ong in [**OT22**]. I would like to thank the referee for questions which have led to further refinement of this proceeding. This proceeding is supported in part by the MOE AcRF Tier 1 grant R-144-000-470-114. ###### Abstract We study the asymptotic behavior of the \(\mathcal{Q}\)-exact term in the \(\mathcal{Q}\)-exact form of the Notice that \(\mathcal{Z}_{\mathrm{VW},M_{4}}\) is a topological invariant of \(M_{4}\) which is an algebraic count of VW solutions with corresponding weight given by \(a_{k}q^{m_{k}}\) that we elaborated on above. This defines a novel \(\tau\)-dependent Vafa-Witten invariant of \(M_{4}\).2 Footnote 2: A purely algebro-geometric definition of \(\mathcal{Z}_{\mathrm{VW},M_{4}}\), in particular the \(a_{k}\)’s, was first given by Tanaka-Thomas in [17], albeit for projective algebraic surfaces only. The novelty here is that we provide a purely differentio-geometric definition of the \(a_{k}\)’s for a more general \(M_{4}\). When \(B=0\), \(a_{k}\) will become the Euler characteristic \(\chi(\mathcal{M}_{\mathrm{inst}}^{k})\), while \(m_{k}\) will become the instanton number. Then, \(\mathcal{Z}_{\mathrm{VW},M_{4}}\) will just become the usual partition function for instantons first derived in [14], as expected. ## 2 An \(\mathcal{N}=(4,4)\)\(A\)-model, Higgs Bundles and Gromov-Witten Theory In this section, we will perform dimensional reduction of the 4d VW theory down to 2d. The four-manifold \(M_{4}\) will be taken to be \(M_{4}=\Sigma\times C\), where \(\Sigma\) and \(C\) are both closed Riemann surfaces, and \(C\) is of genus \(g\geq 2\). ### Finiteness Conditions, BPS Equations in 2d and an \(\mathcal{N}=(4,4)\) Sigma-Model We consider a block diagonal metric \(g\) for \(M_{4}=\Sigma\times C\), \[g=\mathrm{diag}\big{(}g_{\Sigma},\epsilon g_{C}\big{)}, \tag{2.1}\] where \(\epsilon\) is a small parameter to deform \(g_{C}\). We shall use capital letters \(A,B=x^{1},x^{2}\) to denote coordinates on \(\Sigma\), and small letters \(a,b=x^{3},x^{4}\) to denote coordinates on \(C\). 
Taking the limit \(\epsilon\to 0\) then gives us a 2d theory on \(\Sigma\) with \(\mathcal{N}=(4,4)\) supersymmetry. The topological term aside, terms in (1.2) with \(\mu,\nu,\rho=A,B\) vanish as \(\epsilon\to 0\), while those with \(\mu,\nu,\rho=A,b\) survive. For \(\mu,\nu,\rho=a,b\), each term must be set to zero individually as they are accompanied by a factor of \(\epsilon^{-1}\). Since the action (1.2) is a sum of squares of such terms, we will need to set them to zero. This constraint will give us the finiteness conditions. Before we proceed further, we note the fact that \(F_{\mu\nu}^{+}=\frac{1}{2}(F_{\mu\nu}+\frac{1}{2}\epsilon_{\mu\nu\rho\lambda}F ^{\rho\lambda})\), and that \(B_{\mu\nu}\) is an anti-symmetric and self-dual 2-form (\(B_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\rho\lambda}B^{\rho\lambda}\)) with 3 independent components which we can take to be \(B_{12}\), \(B_{13}\) and \(B_{14}\). The first finiteness condition we obtain by using the self-duality property of \(B_{\mu\nu}\) is \[D_{3}B_{12}=-D_{4}B_{12}=0. \tag{2.2}\] The field \(B_{12}\) is a 0-form w.r.t rotations on both \(C\) and \(\Sigma\), so (2.2) tells us that the 0-form \(B_{12}\) is covariantly constant on \(C\), which means \(B_{12}\) generates infinitesimal gauge transformations while leaving \(A_{C}\) fixed. We can however set \(B_{12}=0\), since we require gauge connections to be irreducible to avoid complications on \(\mathcal{M}_{H}^{G}(C)\). Next, identifying \(B_{13}\) and \(B_{14}\) as the two components of a 1-form \(\varphi\) on \(C\) and using the self-duality properties of \(F^{+}\) and \(B_{\mu\nu}\), we obtain Hitchin's equations on \(C\)[1] as the second finiteness condition, given by3 Footnote 3: \(D^{*}\varphi=\star D\star\varphi=D_{\mu}\varphi^{\mu}\), where \(\star\) is the Hodge star operator. \[F_{C}-\varphi\wedge\varphi =0, \tag{2.3}\] \[D\varphi =D^{*}\varphi =0,\] where \[\varphi=B_{13}dx^{3}+B_{14}dx^{4}=\varphi_{3}dx^{3}+\varphi_{4}dx^{4}. \tag{2.4}\] The space of solutions of \((A_{C},\varphi)\) to (2.3) modulo gauge transformations then span Hitchin's moduli space \(\mathcal{M}^{G}_{H}(C)\) for a connection \(A_{C}\) on a principal \(G\)-bundle \(P\) over the Riemann surface \(C\), and a section \(\varphi\in\Omega^{1}(C)\). The above equations leave the \((x^{1},x^{2})\) dependence of \(A_{C}\) and \(\varphi\) arbitrary, and thus the fields \((A_{C},\varphi)\) define a map \(\Phi:\Sigma\to\mathcal{M}^{G}_{H}(C)\). The target space \(\mathcal{M}^{G}_{H}(C)\) is a hyper-Kahler manifold, whence the sigma-model on \(\Sigma\) has an \(\mathcal{N}=(4,4)\) supersymmetry [1]. To obtain the corresponding 2d BPS equations of the \(\mathcal{N}=(4,4)\) sigma model on \(\Sigma\), we perform dimensional reduction of (1.3) on \(C\) with \(s=k=0\). Noting the fact that only terms with mixed indices on \(\Sigma\times C\) survive the reduction on \(C\), together with the self-duality properties of \(B_{\mu\nu}\), we obtain, from (1.3) and \(s=k=0\), \[F^{+}_{Aa} =0,\] \[\mathcal{D}_{A}B^{Aa} =0. \tag{2.5}\] Switching to complex coordinates, (2.5) can be written as \(\partial_{\bar{z}}A_{\bar{w}}=\partial_{\bar{z}}\varphi_{w}=0\).4 With \(A_{\bar{w}}\) and \(\varphi_{w}\) corresponding to bosonic scalars \(X^{i}\) and \(Y^{i}\) in the sigma-model, respectively, we get the 2d BPS equations as Footnote 4: In complex coordinates, we have \(z=x^{1}+ix^{2}\) and \(w=x^{3}+ix^{4}\), where \(A_{\bar{w}}=\frac{1}{2}(A_{3}+iA_{4})\) and \(\varphi_{w}=\frac{1}{2}(B_{13}-iB_{14})\). 
\[\partial_{\bar{z}}X^{i} =0,\] \[\partial_{\bar{z}}Y^{i} =0. \tag{2.6}\] After suitable rescalings, we can then rewrite (1.2) (with \(idz\wedge d\bar{z}=|dz^{2}|\)) as \[S_{\rm 2d} =\frac{1}{e^{2}}\int_{\Sigma}|dz^{2}|g_{i\bar{j}}\bigg{(}\partial _{z}X^{\bar{i}}\partial_{\bar{z}}X^{j}+\partial_{z}X^{i}\partial_{\bar{z}}X^{ \bar{j}}+\partial_{z}Y^{\bar{i}}\partial_{\bar{z}}Y^{j}+\partial_{z}Y^{i} \partial_{\bar{z}}Y^{\bar{j}}\bigg{)}\] \[+\text{topological term}. \tag{2.7}\] Hence, the path integral of the 2d, \(\mathcal{N}=(4,4)\) sigma model on \(\Sigma\) with action (2.7), localizes on the moduli space of holomorphic maps \(\Phi(X^{i},Y^{i}):\Sigma\to\mathcal{M}^{G}_{H}(C)\): \[\mathcal{M}_{\rm maps}=\{\Phi(X^{i},Y^{i}):\Sigma\to\mathcal{M}^{G}_{H}(C)\mid \partial_{\bar{z}}X^{i}=\partial_{\bar{z}}Y^{i}=0\}, \tag{2.8}\] where we have a 2d \(\mathcal{N}=(4,4)\)\(A\)-model on \(\Sigma\) with target \(\mathcal{M}^{G}_{H}(C)\). ### An \(A\)-model in Complex Structure \(I\) The space of fields \((A_{C},\,\varphi)\) span an infinite-dimensional affine space \(\mathcal{W}\). The cotangent vectors \(\delta A_{C}\) and \(\delta\varphi\) to \(\mathcal{M}^{G}_{H}(C)\) are solutions to the variations of equations (2.6). We can then introduce a basis \((\delta A_{w},\delta\varphi_{\bar{w}})\) and \((\delta A_{\bar{w}},\delta\varphi_{w})\) in \(\mathcal{W}\). From the BPS equations (2.6), which are \(\partial_{\bar{z}}A_{\bar{w}}=0\) and \(\partial_{\bar{z}}\varphi_{w}=0\), one can see that the complex structure relevant to the \(A\)-model is \(I\), with linear holomorphic functions consisting of \(A_{\bar{w}}\) and \(\varphi_{w}\). In complex structure \(I\), \(\mathcal{M}^{G}_{H}(C)\) can be identified as the moduli space of stable Higgs \(G\)-bundles on \(C\), \(\mathcal{M}^{G}_{\rm Higgs}(C)\). One can write the corresponding symplectic form as \(\omega_{I}=\omega^{\prime}_{I}-\delta\lambda_{I}\), where \[\omega^{\prime}_{I}=-\frac{1}{4\pi}\int_{C}\,{\rm Tr}\,\delta A_{C}\wedge \delta A_{C}\quad\text{and}\quad\lambda_{I}=\frac{1}{4\pi}\int_{C}\,{\rm Tr}\, \varphi\wedge\delta\varphi, \tag{2.9}\] and \(\omega_{I}\) is cohomologous to \(\omega^{\prime}_{I}\). Comparing the 4d topological term in (1.4) to (2.9), we see that the topological term can be written as \[i\tau\int_{\Sigma}\,\Phi^{*}(\omega_{I}). \tag{2.10}\] The 2d action (7), including the topological term, is then \[\begin{split} S_{\text{2d}}&=\frac{1}{e^{2}}\int_{ \Sigma}|dz^{2}|g_{i\bar{j}}\bigg{(}\partial_{z}X^{\bar{i}}\partial_{\bar{z}}X^{j }+\partial_{z}X^{i}\partial_{\bar{z}}X^{\bar{j}}+\partial_{z}Y^{\bar{i}} \partial_{\bar{z}}Y^{j}+\partial_{z}Y^{i}\partial_{\bar{z}}Y^{\bar{j}}\bigg{)} \\ &+i\tau\int_{\Sigma}\Phi^{*}(\omega_{I}).\end{split} \tag{11}\] We thus have a 2d, \(\mathcal{N}=(4,4)\)\(A\)-model on \(\Sigma\) with target \(\mathcal{M}^{G}_{\text{Higgs}}(C)\), where the path integral localizes on \[\mathcal{M}_{\text{maps}}=\{\Phi(X^{i},Y^{i}):\Sigma\to\mathcal{M}^{G}_{ \text{Higgs}}(C)\mid\partial_{\bar{z}}X^{i}=\partial_{\bar{z}}Y^{i}=0\}, \tag{12}\] the moduli space of holomorphic maps \(\Phi:\Sigma\to\mathcal{M}^{G}_{\text{Higgs}}(C)\). ### Vafa-Witten Invariants as Gromov-Witten Invariants of Higgs Bundles The virtual dimension of \(\mathcal{M}_{\text{maps}}\), like that of \(\mathcal{M}_{\text{VW}}\), ought to also be zero. 
This is because the 2d \(A\)-model is obtained via a topological deformation that sets \(C\to 0\) in the original 4d VW theory, whence the relevant index of kinetic operators counting the dimension of moduli space remains the same. Like \(\mathcal{Z}_{\text{VW},M_{4}}\) in 4d, \(\mathcal{Z}^{\text{closed}}_{A,\Sigma}\) can be interpreted as an integral of a virtual zero-form on virtually zero-dimensional \(\mathcal{M}_{\text{maps}}\), whence it can be evaluated as \[\mathcal{Z}^{\text{closed}}_{A,\Sigma}(\tau,\mathcal{M}^{G}_{\text{Higgs}}(C) )=\sum_{l}\tilde{a}_{l}q^{\tilde{m}_{l}}. \tag{13}\] Here, \(l\) denotes the \(l^{\text{th}}\) sector of \(\mathcal{M}_{\text{maps}}\) defined in (12) for _genus one_\(\Sigma\), the rational number \(\tilde{a}_{l}\) is given by \[\boxed{\tilde{a}_{l}=\int_{\mathcal{M}^{l}_{\text{maps}}}e(\mathcal{V})} \tag{14}\] where \(e\) is the signed Euler class of the vector bundle \(\mathcal{V}\) with fiber \(H^{0}(\Sigma,K\otimes\Phi^{*}T^{*}\mathcal{M}^{l}_{\text{maps}})\) and canonical bundle \(K\) on \(\Sigma\), and \(\tilde{m}_{l}\) is the corresponding worldsheet instanton number given by \[\boxed{\tilde{m}_{l}=\frac{1}{2\pi}\int_{\Sigma}\Phi^{*}_{l}(\omega_{I})} \tag{15}\] Notice that \(\mathcal{Z}^{\text{closed}}_{A,\Sigma}\) is an enumerative invariant which is an algebraic count of holomorphic maps with corresponding weight given by \(\tilde{a}_{l}q^{\tilde{m}_{l}}\) that we elaborated on above. This coincides with the definition of the GW invariant, which then means that one can identify \(\mathcal{Z}^{\text{closed}}_{A,\Sigma}\) as \[\mathcal{Z}_{\text{GW},\Sigma}(\tau,\mathcal{M}^{G}_{\text{Higgs}}(C))=\sum_{ l}\tilde{a}_{l}q^{\tilde{m}_{l}} \tag{16}\] where \(\mathcal{Z}_{\text{GW},\Sigma}\) is a \(\tau\)-dependent GW invariant of \(\mathcal{M}^{G}_{\text{Higgs}}(C)\). From the topological invariance of the 4d theory, we have a 4d-2d correspondence of partition functions \[\mathcal{Z}_{\text{VW},M_{4}}(\tau,G)=\mathcal{Z}_{\text{GW},\Sigma}(\tau, \mathcal{M}^{G}_{\text{Higgs}}(C)). \tag{17}\] In other words, we have a correspondence between the VW invariant of \(M_{4}=\Sigma\times C\) and the GW invariant of \(\mathcal{M}^{G}_{\text{Higgs}}(C)\)). In fact, recall that the integers \(\tilde{m}_{l}\) (in (16)) correspond to the integers \(m_{k}\) (in (5)). Hence, (17) means that we have \[\boxed{a_{k}=\tilde{a}_{l}} \tag{18}\] where \(a_{k}\) and \(\tilde{a}_{l}\) are given in (6) and (14), respectively. In other words, one can also determine the \(a_{k}\)'s, the VW invariants of \(T^{2}\times C\), via the signed Euler class of a bundle \(\mathcal{V}\) over \(\mathcal{M}_{\text{maps}}^{l}\).5 Footnote 5: Computing the \(\tilde{a}\)’s and thus \(a_{k}\)’s for \(T^{2}\times C\) explicitly is a purely mathematical endeavour that is beyond the scope of this physical mathematics proceeding which main objective is to furnish their fundamental definitions via the expressions (14) and (6), respectively. The reader who seeks an explicit computation of these invariants may be happy to know that after our work appeared, this was done purely mathematically in [**N23**]. ## 3. A Novel Floer Homology from Boundary Vafa-Witten Theory In this section, we will show how we can physically derive a novel Floer homology by considering boundary VW theory on \(M_{4}=M_{3}\times\mathbb{R}^{+}\) to physically derive a VW Floer homology assigned to \(M_{3}\).6 Footnote 6: To be precise, VW theory is still being defined on an \(M_{4}\) with no boundary. 
However, to make contact with Floer theory, we will need to examine a hyper-slice of \(M_{4}\), which we can topologically regard as \(M_{3}\times\mathbb{R}^{-}\cup_{M_{3}}M_{3}\times\mathbb{R}^{+}\). As there is no time-evolution in our topological theory, it is sufficient to examine only \(M_{3}\times\mathbb{R}^{+}\), where \(M_{3}\) can then be regarded as a boundary. This is consistent with the idea that categorification of topological invariants can be achieved via successive introductions of boundaries to \(M_{4}\), which we will elaborate upon in §7. ### SQM Interpretation of Boundary Vafa-Witten Theory Let the manifold of the 4d theory in (2) be \(M_{4}=M_{3}\times\mathbb{R}^{+}\), where the \(M_{3}\) boundary is a closed three-manifold, and \(\mathbb{R}^{+}\) is the 'time' coordinate. We also let spacetime indices take the values \(\mu=0,1,2,3\), with \(\mu=0\) being the time direction, while \(\mu=i,j,k=1,2,3\) being the spatial directions. Turning to the BPS equations (1) of boundary VW theory, we split the indices into space and time directions. Using \(F_{\mu\nu}^{+}=\frac{1}{2}(F_{\mu\nu}+\frac{1}{2}\epsilon_{\mu\nu\rho\lambda} F^{\rho\lambda})\) and \(B_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\rho\lambda}B^{\rho\lambda}\), we can reexpress the VW equations (1) as \[\begin{split}\dot{A}^{i}+\frac{1}{2}\epsilon^{ijk}\big{(}F_{jk}- [B_{j},B_{k}]\big{)}&=0,\\ \dot{B}^{i}+\epsilon^{ijk}\big{(}\partial_{j}B_{k}+[A_{j},B_{k}] \big{)}&=0,\end{split} \tag{19}\] where the temporal gauge \(A^{0}=0\) is taken, \(B^{i}=B^{0i}\), \(\epsilon^{ijk}=\epsilon^{0ijk}\), and \(A^{i},B^{i}\in\Omega^{1}(M_{3})\).7 Footnote 7: Using self-duality properties, we have \(B^{0i}=B^{i}=\epsilon^{ijk}B_{jk}\). Introducing a complexified connection \(\mathcal{A}=A+iB\in\Omega^{1}(M_{3})\), of a \(G_{\mathbb{C}}\)-bundle on \(M_{3}\). We then find that (19) can be expressed as \[\dot{\mathcal{A}}^{i}+\frac{1}{2}\epsilon^{ijk}\mathcal{F}_{jk}=0, \tag{20}\] where \(\mathcal{F}\in\Omega^{2}(M_{3})\) is the complexified field strength. Note that \(\epsilon^{ijk}\mathcal{F}_{jk}\) is a gradient vector field of a complex Chern-Simons functional \[V(\mathcal{A})=CS(\mathcal{A})=-\frac{1}{4\pi^{2}}\int_{M_{3}}\text{Tr}\bigg{(} \mathcal{A}\wedge d\mathcal{A}+\frac{2}{3}\mathcal{A}\wedge\mathcal{A}\wedge \mathcal{A}\bigg{)}, \tag{21}\] where we have the gradient flow equation \[\frac{d\mathcal{A}^{i}}{dt}+sg^{ij}_{\mathfrak{A}}\frac{\partial V(\mathcal{A})}{ \partial\mathcal{A}^{j}}=0. \tag{12}\] The action for boundary VW theory in (2) can be rewritten as \[\begin{split} S^{\rm bdry}_{\rm VW}&=\frac{1}{e^{2 }}\int dt\int_{M_{3}}\operatorname{Tr}\biggl{(}\dot{\mathcal{A}}^{i}+sg^{ij}_{ \mathfrak{A}}\frac{\partial V(\mathcal{A})}{\partial\mathcal{A}^{j}}\biggr{)} ^{2}\\ &-\frac{i\tau}{4\pi}\int_{M_{3}}\operatorname{Tr}\biggl{(}A\wedge dA +\frac{2}{3}A\wedge A\wedge A+B\wedge\star DB\biggr{)}.\end{split} \tag{13}\] We can interpret (13) as the action of an SQM model with target \(\mathfrak{A}\), the space of complexified connections \(\mathcal{A}\) on \(M_{3}\) with metric \(g^{ij}_{\mathfrak{A}}\), and a single nilpotent topological scalar supercharge \(\mathcal{Q}\), where (12), which describes the VW equations, can be interpreted as a gradient flow equation between fixed (or time-invariant) critical points of \(CS(\mathcal{A})\) on \(\mathfrak{A}\). 
Assuming that the fixed critical points are isolated and nondegenerate in \(\mathfrak{A}\),8 the partition function of boundary VW theory will thus be an algebraic count of fixed critical points of \(CS(\mathcal{A})\), i.e., fixed flat \(G_{\mathbb{C}}\)-connections on \(M_{3}\), where there are VW flow lines between these fixed critical points described by the gradient flow equation. Footnote 8: This is guaranteed (though not necessary) when all critical points are isolated and nondegenerate. This can be the case for an appropriate choice of \(G\) and \(M_{3}\). For example, one could choose (1) \(G\) compact and \(M_{3}\) of nonnegative Ricci curvature such as a three-sphere or its quotient, or (2) an \(M_{3}\) with a finite \(G_{\mathbb{C}}\) representation variety, and introduce physically-trivial \(\mathcal{Q}\)-exact terms to the action to perturb \(V(\mathcal{A})\). We would like to thank A. Haydys for discussions on this. The second term in (13) is a \(\tau\)-dependent topological term that contributes to an overall factor in the path integral. ### A Novel Vafa-Witten Floer Homology For an \(M_{4}\) with boundary \(\partial M_{4}=M_{3}\), one needs to specify boundary conditions on \(M_{3}\) to compute the path integral by first defining a restriction of the fields to \(M_{3}\), which we shall denote as \(\Psi_{M_{3}}\), and then specifying boundary values for these restrictions. This is equivalent to inserting in the path integral an operator functional \(F(\Psi_{M_{3}})\) that is nonvanishing in the \(\mathcal{Q}\)-cohomology. The partition function on \(M_{4}\) can be expressed as \[\mathcal{Z}_{{\rm VW},M_{4}}(\tau,G)=\langle 1\rangle_{F(\Psi_{M_{3}})}=\sum_{ k}\mathcal{F}^{G,\tau}_{{\rm VW}}(\Psi^{k}_{M_{3}}). \tag{14}\] Here, the summation in '\(k\)' is over all sectors of \(\mathcal{M}_{\rm VW}\) labeled by the VW number \(m_{k}\), and \(\mathcal{F}^{G,\tau}_{{\rm VW}}(\Psi^{k}_{M_{3}})\) is the \(k^{\rm th}\) contribution to the partition function that depends on the expression of \(F(\Psi_{M_{3}})\) in the bosonic fields on \(M_{3}\) evaluated over the corresponding solutions of the VW equations restricted to \(M_{3}\). As an SQM model on \(\mathfrak{A}\), the partition function can be expressed as \[\mathcal{Z}_{{\rm VW},M_{4}}(\tau,G)=\sum_{k}\mathcal{F}^{k}_{{\rm VW-Floer}}( M_{3},G,\tau), \tag{15}\] where each \(\mathcal{F}^{k}_{{\rm VW-Floer}}(M_{3},G,\tau)\) can be identified with a class in what we shall henceforth call a Vafa-Witten Floer homology \(\operatorname{HF}^{{\rm VW}}_{d_{k}}(M_{3},G,\tau)\) assigned to \(M_{3}\) of degree \(d_{k}\), where \(d_{k}\) counts the number of outgoing VW flow lines from the corresponding fixed critical point of \(CS(\mathcal{A})\). In summary, from (3.6) and (3.7), we can write \[\mathcal{Z}_{\mathrm{VW},M_{4}}(\tau,G)=\sum_{k}\mathcal{F}_{\mathrm{VW}}^{G,\tau }(\Psi_{M_{3}}^{k})=\sum_{k}\mathrm{HF}_{d_{k}}^{\mathrm{VW}}(M_{3},G,\tau)= \mathcal{Z}_{\mathrm{VW},M_{3}}^{\mathrm{Floer}}(\tau,G) \tag{3.8}\] where '\(k\)' sums from zero to the maximum number of fixed VW solutions on \(M_{3}\times\mathbb{R}^{+}\) that correspond to isolated and non-degenerate fixed critical points of \(CS(\mathcal{A})\).9 Footnote 9: See footnote 8. 
The \(\tau\)-dependence of \(\mathcal{F}_{\mathrm{VW}}^{G,\tau}(\Psi_{M_{3}}^{k})\) and therefore \(\mathrm{HF}_{k}^{\mathrm{VW}}(M_{3},G,\tau)\) arises because in evaluating (3.6), there will be a factor of \(q^{s_{k}}\) for the \(k^{\mathrm{th}}\) term, where from the action \(S_{\mathrm{VW}}^{\mathrm{bdry}}\) in (3.5), the integer \[s_{k}=\frac{1}{8\pi^{2}}\int_{M_{3}}\mathrm{Tr}\bigg{(}A_{(k)}\wedge dA_{(k)} +\frac{2}{3}A_{(k)}\wedge A_{(k)}\wedge A_{(k)}+B_{(k)}\wedge\star DB_{(k)} \bigg{)}. \tag{3.9}\] Here, the subscript '\((k)\)' denotes that they are the \(k^{\mathrm{th}}\) fixed solution to the VW equations on \(M_{3}\times\mathbb{R}^{+}\) restricted to \(M_{3}\). ## 4. A Vafa-Witten Atiyah-Floer Correspondence In this section, we continue with a Heegaard split of \(M_{3}\) into \(M_{3}^{\prime}\) and \(M_{3}^{\prime\prime}\) along a Riemann surface \(C\), as shown in Fig. 1 (left), allowing us to relate Vafa-Witten Floer homology obtained in the previous section to Lagrangian Floer homology, in what is a novel Vafa-Witten version of the Atiyah-Floer correspondence [1] based on instantons. In doing so, we would be able to physically prove and generalize a conjecture by mathematicians Abouzaid-Manolescu about the hypercohomology of a perverse sheaf of vanishing cycles in the moduli space of irreducible flat \(SL(2,\mathbb{C})\)-connections on \(M_{3}\). We can thus write \(M_{4}=\left(\mathbb{R}^{+}\times I^{\prime}\times C\right)\cup_{C}\left( \mathbb{R}^{+}\times I^{\prime\prime}\times C\right)\), where \(M_{3}^{\prime}=I^{\prime}\times C\) and \(M_{3}^{\prime\prime}=I^{\prime\prime}\times C\). This is illustrated in Fig. 1 (right), where taking \(C\to 0\), we indeed have \(\mathbb{R}^{+}\times I^{\prime}\) and \(\mathbb{R}^{+}\times I^{\prime\prime}\). ### A Vafa-Witten Version of the Atiyah-Floer Correspondence If \(C\to 0\), we end up with an open \(A\)-model in complex structure \(I\) on \(\mathbb{R}^{+}\times I^{\prime}\) and \(\mathbb{R}^{+}\times I^{\prime\prime}\), respectively, with target space \(\mathcal{M}_{\mathrm{Higgs}}^{G}(C)\). Because we have an \(A\)-model in complex structure \(I\), the admissible branes are those of type \((A,*,*)\) Specifically, we need an \((A,*,*)\)-brane in \(\mathcal{M}_{\mathrm{Higgs}}^{G}(C)\) that corresponding to a Higgs pair on \(C\) that can be extended to flat complex connections \(\mathcal{A}\) on \(M_{3}^{{}^{\prime\prime}}\). Such an \((A,*,*)\)-brane has indeed been obtained in [10].10 It is an \((A,B,A)\)-brane \(\alpha_{M_{3}^{{}^{\prime\prime}}}\), that is simultaneously an \(A\)-brane in \(\mathcal{M}_{\rm Higgs}^{G}(C)\) and an \(A\)-brane in \(\mathcal{M}_{H}^{G}(C)\) in complex structure \(K\), i.e., \(\mathcal{M}_{\rm flat}^{G_{\mathbb{C}}}(C)\), the moduli space of flat \(G_{\mathbb{C}}\)-connections on \(C\), where it corresponds to flat connections that can be extended to \(M_{3}^{{}^{\prime\prime}}\). It is middle-dimensional, and is therefore a Lagrangian brane. Let us henceforth denote this brane as \(L_{{}^{\prime\prime}}\). Footnote 10: The 4d theory considered in [10] is not the VW but the GL theory of [10], albeit with parameter \(t=0\). However, both these 4d theories descend to the same 2d \(A\)-model with target \(\mathcal{M}_{\rm Higgs}^{G}(C)\) after dimensional reduction on \(C\), and since our \(A\)-branes of interest are \(A\)-model objects within \(\mathcal{M}_{\rm Higgs}^{G}(C)\), the arguments used and examples stated in [10] are applicable here. 
Now, with two split pieces \(M_{4}^{{}^{\prime\prime}}\), when \(C\to 0\), we have two strings, each ending on pairs of Lagrangian branes \((L_{0},L^{\prime})\) and \((L^{\prime\prime},L_{1})\) (see Fig. 2.) We then glue the open worldsheets together along their common boundary \(L^{{}^{\prime\prime}}\), giving us a single \(A\)-model, with a single string extending from \(L_{0}\) to \(L_{1}\), which is equivalent to gluing \(M_{4}^{{}^{\prime\prime}}\) along \(C\times\mathbb{R}^{+}\). (see Fig. 2 again.) As before, one can recast the \(A\)-model here as an SQM model, where \(\mathbb{R}^{+}\) is 'time', and the target space is \(\mathscr{P}(L_{0},L_{1})\), the space of smooth trajectories from \(L_{0}\) to \(L_{1}\) (arising from the interval \(I\) that connects them). The BPS equations for this \(A\)-model are (2.6), i.e., holomorphic maps from the worldsheet to the target space. They can be written as a gradient flow equation on the worldsheet \[\frac{\partial Z^{l}}{\partial t}+i\frac{\partial Z^{l}}{\partial s}=0, \tag{4.1}\] where we have used real coordinates \(t\) and \(s\) (for \(z=t+is\)), and here, \(Z^{l}=X^{l}+Y^{l}\). The fixed critical points of the underlying potential of the SQM model that contribute to the partition function are defined by \(\dot{Z}^{l}=\partial Z^{l}/\partial s=0\). Since '\(s\)' is the spatial coordinate of \(I\), it would mean that the fixed critical points just correspond to fixed stationary trajectories in \(\mathscr{P}(L_{0},L_{1})\), i.e., the intersection points of \(L_{0}\) and \(L_{1}\). Thus, the partition function of the \(A\)-model, which, from the SQM model perspective, is given by an algebraic count of the fixed critical points of its underlying potential, will be an algebraic count of the intersection points of \(L_{0}\) and \(L_{1}\), where there are flow lines between the intersection points that obey (4.1). These flow lines correspond to holomorphic Whitney disks. From this description of the partition function, we have physically realized the Lagrangian Floer homology first defined in [10], where the intersection points of \(L_{0}\) and \(L_{1}\) actually generate the chains of the Lagrangian Floer complex, and the Floer differential, which counts the number of holomorphic Whitney disks, can be Figure 2. Identifying \(L^{\prime}\) and \(L^{\prime\prime}\) and gluing them together to form a single open string. interpreted as the outgoing flow lines at each intersection point of \(L_{0}\) and \(L_{1}\) which number would be the degree of the corresponding chain in the complex. Specifically, let \((L_{0}\cap L_{1})_{i}^{n_{i}}\) denote the \(i^{\text{th}}\) point of the intersection \(L_{0}\cap L_{1}\) where there are \(n_{i}\) outgoing flow lines, whence we can identify \[(L_{0}\cap L_{1})_{i}^{n_{i}}\in\operatorname{HF}_{n_{i}}^{\text{Lagr}}\bigl{(} \mathcal{M}_{\text{Higgs}}^{G}(C),L_{0},L_{1}\bigr{)}, \tag{4.2}\] where \(\operatorname{HF}_{n_{i}}^{\text{Lagr}}\bigl{(}\mathcal{M}_{\text{Higgs}}^{G }(C),L_{0},L_{1}\bigr{)}\) is the Lagrangian Floer homology of \((L_{0},L_{1})\) on \(\mathcal{M}_{\text{Higgs}}^{G}(C)\) of degree \(n_{i}\). Then, the partition function of the \(A\)-model will be given by \[\mathcal{Z}_{A,L}\bigl{(}\tau,\mathcal{M}_{\text{Higgs}}^{G}(C)\bigr{)}=\sum _{i}\operatorname{HF}_{n_{i}}^{\text{Lagr}}\bigl{(}\mathcal{M}_{\text{Higgs}}^ {G}(C),L_{0},L_{1},\tau\bigr{)}, \tag{4.3}\] A \(\tau\)-dependency appears here because of a \(\tau\)-dependent term in the \(A\)-model action. 
Since the underlying boundary VW theory on \(M_{4}=M_{3}\times\mathbb{R}^{+}\) is topological, we will have the following equivalence of partition functions: \[\mathcal{Z}_{\text{VW},M_{4}}(\tau,G)=\mathcal{Z}_{A,L}\bigl{(}\tau,\mathcal{ M}_{\text{Higgs}}^{G}(C)\bigr{)}, \tag{4.4}\] which, from (3.8) and (4.3), means that \[\sum_{k}\operatorname{HF}_{d_{k}}^{\text{VW}}(M_{3},G,\tau)=\sum_{i} \operatorname{HF}_{n_{i}}^{\text{Lagr}}\bigl{(}\mathcal{M}_{\text{Higgs}}^ {G}(C),L_{0},L_{1},\tau\bigr{)}. \tag{4.5}\] The gradings '\(d_{k}\)' and '\(n_{i}\)' in (4.5) match. To understand this, recall that the VW flow lines between fixed critical points in \(\mathfrak{A}\) are non-fixed solutions to the VW equations (1.1) on \(M_{3}\times\mathbb{R}^{+}\). Also, in SS2.1, it was shown that the VW equations descend to the worldsheet instanton equations (2.6) defining holomorphic maps from the worldsheet to \(\mathcal{M}_{\text{Higgs}}^{G}(C)\), the non-fixed solutions to which are the flow lines between fixed critical points in \(\mathscr{P}(L_{0},L_{1})\). Thus, there is a one-to-one correspondence between the flow lines that define \(\operatorname{HF}_{*}^{\text{VW}}\) and underlie the LHS of (4.5), and the flow lines that define \(\operatorname{HF}_{*}^{\text{Lagr}}\) and underlie the RHS of (4.5). Moreover, '\(k\)' and '\(i\)' obviously match, too. We thus have a degree-by-degree isomorphism of the VW Floer homology and the Lagrangian Floer homology, whence we would have a Vafa-Witten Atiyah-Floer correspondence \[\operatorname{HF}_{*}^{\text{VW}}(M_{3},G,\tau)\cong\operatorname{HF}_{*}^{ \text{Lagr}}\bigl{(}\mathcal{M}_{\text{Higgs}}^{G}(C),L_{0},L_{1},\tau\bigr{)}. \tag{4.6}\] A Physical Proof and Generalization of a Conjecture by Abouzaid-Manolescu about the Hypercohomology of a Perverse Sheaf of Vanishing Cycles A hypercohomology \(\operatorname{HP}^{*}(M_{3})\) was constructed by Abouzaid-Manolescu in [1], where it was conjectured to be isomorphic to instanton Floer homology assigned to \(M_{3}\) for the complex gauge group \(SL(2,\mathbb{C})\). Its construction was via a Heegaard split of \(M_{3}=M_{3}^{\prime}\cup_{C}M_{3}^{\prime\prime}\) along \(C\) of genus \(g\), and the intersection of the two associated Lagrangians in the moduli space \(X_{\text{irr}}(C)\) of irreducible flat \(SL(2,\mathbb{C})\)-connections on \(C\) (that represent solutions extendable to \(M_{3}^{\prime}\) and \(M_{3}^{\prime\prime}\), respectively), to which one can associate a perverse sheaf of vanishing cycles. \(\operatorname{HP}^{*}(M_{3})\) is then the hypercohomology of this perverse sheaf of vanishing cycles in \(X_{\text{irr}}(M_{3})\), where it is an invariant of \(M_{3}\) independent of the Heegaard split. Based on the mathematical construction of \(\operatorname{HP}^{*}(M_{3})\) described above, it would mean that a physical realization of (the dual of) \(\operatorname{HP}^{*}(M_{3})\) ought to be via an open \(A\)-model with Lagrangian branes \(L_{0}\) and \(L_{1}\) in the target \(X_{\text{irr}}(C)\), where the observables contributing to the partition function can be interpreted as classes in the Lagrangian Floer homology \(\operatorname{HF}_{*}^{\operatorname{Lagr}}\bigl{(}X_{\operatorname{irr}}(C),L_{0},L_{1},\tau\bigr{)}\). 
First, note that there is an isomorphism between \(\operatorname{HF}_{*}^{\operatorname{Lagr}}\) and the homology of Lagrangian submanifolds in \(X_{\operatorname{irr}}(C)\)[18, Theorem 11], i.e., \[\operatorname{HF}_{*}^{\operatorname{Lagr}}\bigl{(}X_{\operatorname{irr}}(C),L_ {0},L_{1},\tau\bigr{)}\cong\operatorname{H}_{*}(L,\mathbb{Z}_{2})_{\otimes \mathbb{Z}_{2}}\Lambda, \tag{4.7}\] where \(\Lambda\) is a scalar function over \(\mathbb{Z}_{2}\), called the Novikov field, and \(L\) on the RHS can be taken as either \(L_{0}\) or \(L_{1}\). The homology cycles of the Lagrangian (i.e., middle-dimensional) submanifolds of \(X_{\operatorname{irr}}(C)\) have a maximum dimension of \(\frac{1}{2}\mathrm{dim}(X_{\operatorname{irr}}(C))\), where \(\frac{1}{2}\mathrm{dim}(X_{\operatorname{irr}}(C))=2(3g-3)\).11 Including the zero-cycle, the grading of \(\operatorname{H}_{*}(L,\mathbb{Z}_{2})_{\otimes\mathbb{Z}_{2}}\Lambda\) and therefore \(\operatorname{HF}_{*}^{\operatorname{Lagr}}\bigl{(}X_{\operatorname{irr}}(C ),L_{0},L_{1},\tau\bigr{)}\), goes as \(0,1,\dots,2(3g-3)\). Footnote 11: It is a fact that \(\dim(X_{\operatorname{irr}}(C))\) is given by \(4(N^{2}-1)(g-1)\) for \(G_{\mathbb{C}}=SL(N,\mathbb{C})\), where \(g\) is the genus of \(C\). Second, note that in [1, Theorem 1.8], it was computed that \(\operatorname{HP}^{k}\) is nonvanishing only if \(-3g+3\leq k\leq 3g-3\). In other words, the grading of \(\operatorname{HP}^{*}\) goes as \(-(3g-3),\dots,0,\dots,(3g-3)\). These two observations then mean that there is a one-to-one correspondence between the gradings of \(\operatorname{HP}^{*}(M_{3})\) and \(\operatorname{HF}_{*}^{\operatorname{Lagr}}\). Moreover, the generators of \(\operatorname{HP}^{*}\) and \(\operatorname{HF}_{*}^{\operatorname{Lagr}}\) both originate from the intersection points of \(L_{0}\) and \(L_{1}\) in \(X_{\operatorname{irr}}(C)\). Hence, we can identify \(\operatorname{HP}^{*}\) with (the dual of) \(\operatorname{HF}_{*}^{\operatorname{Lagr}}\), i.e., \[\operatorname{HP}^{*}(M_{3})\cong\operatorname{HF}_{*}^{\operatorname{Lagr}} \bigl{(}X_{\operatorname{irr}}(C),L_{0},L_{1},\tau\bigr{)} \tag{4.8}\] This agrees with [12, Remark 6.15]. Notice from the Morse functional (3.3) and the gradient flow equation (3.4) that the definition of \(\operatorname{HF}_{*}^{\operatorname{VW}}\) coincides with the definition of the instanton Floer homology in [18], albeit for a _complex_ gauge group \(G_{\mathbb{C}}\). This means that we can also express the LHS of (4.6) as \(\operatorname{HF}_{*}^{\operatorname{Inst}}\bigl{(}M_{3},G_{\mathbb{C}},\tau \bigr{)}\), the instanton Floer homology of \(G_{\mathbb{C}}\) assigned to \(M_{3}\). Also, recall that the Lagrangian branes \(L_{0}\) and \(L_{1}\) on the RHS of (4.6) are \((A,B,A)\)-branes, i.e., they can also be interpreted as Lagrangian branes in \(\mathcal{M}_{H}^{G}(C)\) in complex structure \(K\), or equivalently, \(\mathcal{M}_{\operatorname{flat}}^{G_{\mathbb{C}}}(C)\), the moduli space of irreducible flat \(G_{\mathbb{C}}\)-connections on \(C\). These two points then mean that we can also write (4.6) as \[\operatorname{HF}_{*}^{\operatorname{inst}}(M_{3},G_{\mathbb{C}},\tau)\cong \operatorname{HF}_{*}^{\operatorname{Lagr}}\bigl{(}\mathcal{M}_{\operatorname {flat}}^{G_{\mathbb{C}}}(C),L_{0},L_{1},\tau\bigr{)} \tag{4.9}\] In other words, the VW Atiyah-Floer correspondence in (4.6) can also be interpreted as an Atiyah-Floer correspondence for \(G_{\mathbb{C}}\)-instantons. 
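As a small arithmetic illustration of the grading match just argued (not part of the original argument), one can check for \(G_{\mathbb{C}}=SL(2,\mathbb{C})\) that the Lagrangian Floer grading range \(0,\dots,2(3g-3)\) and the \(\operatorname{HP}^{*}\) grading range \(-(3g-3),\dots,3g-3\) have the same length, equal to \(\tfrac{1}{2}\dim X_{\operatorname{irr}}(C)+1\):

```python
# For N = 2: dim X_irr(C) = 4(N^2 - 1)(g - 1) = 12(g - 1), half of which is 2(3g - 3);
# both grading ranges then contain 6g - 5 degrees.
for g in range(2, 8):                                        # genus of C, g >= 2
    dim_X = 4 * (2**2 - 1) * (g - 1)
    assert dim_X // 2 == 2 * (3 * g - 3)
    lagr_degrees = range(0, 2 * (3 * g - 3) + 1)             # 0, ..., 2(3g-3)
    hp_degrees = range(-(3 * g - 3), (3 * g - 3) + 1)        # -(3g-3), ..., 3g-3
    assert len(lagr_degrees) == len(hp_degrees) == dim_X // 2 + 1
print("grading ranges agree for genus 2..7")
```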
It is now clear from (4.9) and (4.8), that for \(G_{\mathbb{C}}=SL(2,\mathbb{C})\), we have \[\operatorname{HP}^{*}(M_{3})\cong\operatorname{HF}_{*}^{\operatorname{inst}}(M_{3},SL(2,\mathbb{C}),\tau) \tag{4.10}\] for complex constant \(\tau\). This is exactly the conjecture by Abouzaid-Manolescu about \(\operatorname{HP}^{*}(M_{3})\) in [1]! This agrees with their expectations in [1, sect. 9.2] that \(\operatorname{HP}^{*}(M_{3})\) ought to be part of a 3+1 dimensional TQFT based on the VW equations. It was argued in [1, sect. 9.1] that the construction of \(\operatorname{HP}^{*}(M_{3})\) can be generalized to \(SL(N,\mathbb{C})\). Indeed, notice that (4.9) implies that there ought to be a \(G_{\mathbb{C}}\) generalization of the Abouzaid-Manolescu conjecture in (4.10) to \[\operatorname{HP}^{*}(M_{3},G_{\mathbb{C}})\cong\operatorname{HF}_{*}^{\operatorname{inst}}(M_{3},G_{\mathbb{C}},\tau) \tag{4.11}\] where the hypercohomology \(\operatorname{HP}^{*}(M_{3},G_{\mathbb{C}})\) of the perverse sheaf of vanishing cycles in \(\mathcal{M}^{G_{\mathbb{C}}}_{\operatorname{flat}}(M_{3})\) is such that \[\operatorname{HP}^{*}(M_{3},G_{\mathbb{C}})\cong\operatorname{HF}^{\operatorname{Lagr}}_{*}\bigl{(}\mathcal{M}^{G_{\mathbb{C}}}_{\operatorname{flat}}(C),L_{0},L_{1},\tau\bigr{)} \tag{4.12}\] which again agrees with [1, Remark 6.15].

## 5. Langlands Duality of Vafa-Witten Invariants, Gromov-Witten Invariants, Floer Homologies and the Abouzaid-Manolescu Hypercohomology

It is known that \(\mathcal{N}=4\) supersymmetric Yang-Mills theory has an \(SL(2,\mathbb{Z})\) symmetry, with \(S\)- and \(T\)-duality, as mentioned in §1. In particular, the theory with complex coupling \(\tau\) and gauge group \(G\), is \(S\)-dual to a theory with complex coupling \(-\frac{1}{n_{\mathfrak{g}}\tau}\) and Langlands dual gauge group \({}^{L}G\), i.e., we have, up to a possible phase factor of modular weights that is just a constant, a duality of VW partition functions \[\mathcal{Z}_{\operatorname{VW},M_{4}}(\tau,G)\longleftrightarrow\mathcal{Z}_{\operatorname{VW},M_{4}}\Bigl{(}-\frac{1}{n_{\mathfrak{g}}\tau},\,^{L}G\Bigr{)} \tag{5.1}\] In other words, we have a Langlands duality of VW invariants of \(M_{4}\), given by (5.1).

### Langlands Duality of Gromov-Witten Invariants

Note that if \(M_{4}=\Sigma\times C\), from (5.1) and (2.17), 4d \(S\)-duality would mean that we have the 2d duality \[\mathcal{Z}_{\operatorname{GW},\Sigma}\bigl{(}\tau,\mathcal{M}^{G}_{\operatorname{Higgs}}(C)\bigr{)}\longleftrightarrow\mathcal{Z}_{\operatorname{GW},\Sigma}\Bigl{(}-\frac{1}{n_{\mathfrak{g}}\tau},\,\mathcal{M}^{{}^{L}G}_{\operatorname{Higgs}}(C)\Bigr{)} \tag{5.2}\] where \(\mathcal{M}^{G}_{\operatorname{Higgs}}\) and \(\mathcal{M}^{{}^{L}G}_{\operatorname{Higgs}}\) are mirror manifolds. In other words, we have a Langlands duality of GW invariants that can be interpreted as a mirror symmetry of Higgs bundles, given by (5.2).

### Langlands Duality of Vafa-Witten Floer Homology

If \(M_{4}=M_{3}\times\mathbb{R}^{+}\), from (3.8) and (5.1), we have the duality \[\mathcal{Z}^{\operatorname{Floer}}_{\operatorname{VW},M_{3}}(\tau,G)\longleftrightarrow\mathcal{Z}^{\operatorname{Floer}}_{\operatorname{VW},M_{3}}\Bigl{(}-\frac{1}{n_{\mathfrak{g}}\tau},\,^{L}G\Bigr{)}.
\tag{5.3}\] In turn, from (3.8), this means that we have the duality \[\operatorname{HF}^{\operatorname{VW}}_{*}(M_{3},G,\tau)\longleftrightarrow \operatorname{HF}^{\operatorname{VW}}_{*}(M_{3},\,^{L}G,-1/n_{\mathfrak{g}}\tau) \tag{5.4}\] In other words, we have a Langlands duality of VW Floer homologies assigned to \(M_{3}\), given by (5.4). ### Langlands Duality of Lagrangian Floer Homology From (5.3) and (4.4), we have the duality \[\mathcal{Z}_{A,L}\bigl{(}\tau,\mathcal{M}^{G}_{\operatorname{Higgs}}(C)\bigr{)} \longleftrightarrow\mathcal{Z}_{A,L}\Bigl{(}-\frac{1}{n_{\mathfrak{g}}\tau}, \,^{L}G\Bigr{)}. \tag{5.5}\] Then, from the RHS of the VW Atiyah-Floer correspondence in (4.6), which defines the state spectrum of \(\mathcal{Z}_{A,L}\), we have the duality \[\operatorname{HF}^{\operatorname{Lag}}_{*}\bigl{(}\mathcal{M}^{G}_{ \operatorname{Higgs}}(C),L_{0},L_{1},\tau\bigr{)}\longleftrightarrow \operatorname{HF}^{\operatorname{Lag}}_{*}\bigl{(}\mathcal{M}^{{}^{L}G}_{ \operatorname{Higgs}}(C),L_{0},L_{1},-1/n_{\mathfrak{g}}\tau\bigr{)} \tag{5.6}\] In other words, we have a Langlands duality of Lagrangian Floer homologies of Higgs bundles, given by (5.6). ### Langlands Duality of the Abouzaid-Manolescu Hypercohomology From (4.11), the fact that its RHS can be identified with \(\operatorname{HF}_{*}^{\operatorname{\mathrm{VW}}}(M_{3},G,\tau)\), and the relation (5.4), we have the duality \[\operatorname{HP}^{*}(M_{3},G_{\mathbb{C}},\tau)\longleftrightarrow \operatorname{HP}^{*}(M_{3},{}^{L}G_{\mathbb{C}},-1/n_{\mathfrak{g}}\tau) \tag{5.7}\] In other words, we have a Langlands duality of the Abouzaid-Manolescu hypercohomologies of a perverse sheaf of vanishing cycles in the moduli space of irreducible flat complex connections on \(M_{3}\), given by (5.7). ## 6. A Quantum and Classical Geometric Langlands Correspondence If we let \(M_{4}=I\times\mathbb{R}^{+}\times C\) with \(C\to 0\), \(S\)-duality gives a homological mirror symmetry of the category of \(A\)-branes. This implies a homological mirror symmetry of the \(\tau\)-dependent category of \(A\)-branes: \[\operatorname{Cat}_{A\text{-branes}}\bigl{(}\tau,\mathcal{M}_{\text{Higgs}}^{G} (C)\bigr{)}\longleftrightarrow\operatorname{Cat}_{A\text{-branes}}\Bigl{(}- \frac{1}{n_{\mathfrak{g}}\tau},\,\mathcal{M}_{\text{Higgs}}^{{}^{L}G}(C) \Bigr{)} \tag{6.1}\] where \(\mathcal{M}_{\text{Higgs}}^{G}\) and \(\mathcal{M}_{\text{Higgs}}^{{}^{L}G}\) are mirror manifolds. For \(\theta=0\) (\(\operatorname{Re}(\tau)=0\)), the category of \(\tau\)-dependent \(A\)-branes can be identified with a category of twisted \(D\)-modules on \(\operatorname{Bun}_{G_{\mathbb{C}}}(C)\) with parameter \(q\). Thus, this mirror symmetry would mean that we have \[\mathcal{D}_{-h^{\vee}}^{\mathbf{c}}\text{-mod}\bigl{(}q,\operatorname{Bun}_{ G_{\mathbb{C}}}\bigr{)}\longleftrightarrow\mathcal{D}_{-{}^{L}h^{\vee}}^{ \mathbf{c}}\text{-mod}\Bigl{(}-\frac{1}{n_{\mathfrak{g}}q},\,\operatorname{Bun }_{{}^{L}G_{\mathbb{C}}}\Bigr{)} \tag{6.2}\] This is a quantum geometric Langlands correspondence for \(G_{\mathbb{C}}\) with complex curve \(C\) and purely imaginary parameter \(q\)[5, eqn. (6.4)]. On the other hand, in the zero-coupling, 'classical' limit of the 4d theory in \(G\) where \(\operatorname{Im}(\tau)\to\infty\), we have \(q\to\infty\). 
In this limit, the LHS of (6.2) can be identified with the category \(\operatorname{Cat}_{\text{coh}}\bigl{(}\mathcal{M}_{\text{flat}}^{G_{\mathbb{C }}}(C)\bigr{)}\) of coherent sheaves on \(\mathcal{M}_{\text{flat}}^{G_{\mathbb{C}}}(C)\)[5]. This 'classical' limit corresponds to the 'ultra-quantum' limit of the \(S\)-dual 4d theory in \({}^{L}G\), where \({}^{L}q=-\frac{1}{n_{\mathfrak{g}}q}\to 0\). In this limit, the RHS of (6.2) can be identified with the category \(\mathcal{D}_{-{}^{L}h^{\vee}}^{\mathbf{c}}\text{-mod}\bigl{(}0,\operatorname{ Bun}_{{}^{L}G_{\mathbb{C}}}\bigr{)}\) of critically-twisted \(D\)-modules on \(\operatorname{Bun}_{{}^{L}G_{\mathbb{C}}}(C)\), giving us \[\operatorname{Cat}_{\text{coh}}\bigl{(}\mathcal{M}_{\text{flat}}^{G_{\mathbb{C }}}(C)\bigr{)}\longleftrightarrow\mathcal{D}_{-{}^{L}h^{\vee}}^{\mathbf{c}} \text{-mod}\Bigl{(}0,\,\operatorname{Bun}_{{}^{L}G_{\mathbb{C}}}\Bigr{)} \tag{6.3}\] This is a classical geometric Langlands correspondence for \(G_{\mathbb{C}}\) with complex curve \(C\)[5, eqn. (6.4)]. ## 7. Categorification and a Novel Web of Mathematical Relations The mathematical procedure of categorification is realized in our physical framework, where the VW invariant is a number, the VW Floer homology is a vector (space), and the \(A\)-branes span a category of objects. Categorification can be physically understood as flattening a direction and then ending it on a boundary or boundaries. Explicitly in our case, the first step of categorification involves flattening a direction in \(M_{4}\) and then ending it on an \(M_{3}\) boundary, while the second step involves flattening a direction in \(M_{3}\) and then ending it on two \(C\) boundaries. Therefore, one can also understand the procedure of categorifying as computing relative invariants12 - computing the relative invariant of \(\mathcal{Z}_{\rm VW}\) give us \({\rm HF}^{\rm VW}_{*}\), and further computing the relative invariant of \({\rm HF}^{\rm VW}_{*}\) gives us \({\rm Cat}_{A\text{-branes}}\). Footnote 12: A relative invariant is an invariant of an open manifold which was originally defined for a closed manifold. One could continue to further categorify the \({\rm VW}\) invariant of \(M_{4}\) by flattening a direction along \(C\) and ending it on \(S^{1}\) boundaries, i.e., let \(C=I^{\prime}\times S^{1}\). This should give us a 2-category, 2-Cat, consisting of objects, morphisms between these objects, and 2-morphisms between these morphisms. We thus have13 Footnote 13: This perspective of categorifying topological invariants by successively introducing boundaries to the manifold was first pointed out in [11]. \[\begin{array}{rcl}{\rm VW\ theory\ on\ }M_{4}&\leadsto&{\rm number}& \mathcal{Z}_{\rm VW}\\ {\rm VW\ theory\ on\ }\mathbb{R}^{+}\times M_{3}&\leadsto&{\rm vector}&{\rm HF }^{\rm VW}_{*}\\ {\rm VW\ theory\ on\ }\mathbb{R}^{+}\times I\times C&\leadsto&1\text{- category}&{\rm Cat}_{A\text{-branes}}\\ {\rm VW\ theory\ on\ }\mathbb{R}^{+}\times I\times I^{\prime}\times S^{1}& \leadsto&2\text{-category}&2\text{-Cat}\\ {\rm VW\ theory\ on\ }\mathbb{R}^{+}\times I\times I^{\prime}\times[0,1]& \leadsto&3\text{-category}&3\text{-Cat}.\end{array} \tag{7.1}\] As we go down the list, the categories get assigned to \(M_{3},C,\dots,\) and are determined by the category of boundaries of the effective 1d, 2d,... 
theory on \(\mathbb{R}^{+},\mathbb{R}^{+}\times I,\dots.\) Therefore, the 2-category will be determined by the category of 2d boundaries of the 3d theory on \(\mathbb{R}^{+}\times I\times I^{\prime}\) given by \({\rm VW}\) theory compactified on \(S^{1}\), that is assigned to \(S^{1}\). These are surface defects that can be interpreted as objects; loop defects on the surface running around \(I\times I^{\prime}\) can be interpreted as morphisms between these objects; while opposing pairs of point defects on the loops can be interpreted as 2-morphisms between these morphisms. Also, note that the 3d TQFT in question is a 3d gauged \(A\)-model described in [10, sect. 7],14 and for abelian \(G\) and \({\rm Re}(\tau)=0\), the 2-category of surface defects has been explicitly determined in _loc. cit._ to be the 2-category \(2\text{-Cat}_{\text{mod-cat}}\big{(}{\rm FF}\text{-cat}(T^{2})\big{)}\) of module categories over the Fukaya-Floer category of \(T^{2}\).15 Footnote 14: In [10, sect. 7], the GL theory at \(t=0\) was considered, but it was shown in [10, sect. 5.2-5.3] that this theory compactified on \(S^{1}\) is the same as \({\rm VW}\) theory compactified on \(S^{1}\). Hence, their results are applicable to us. 4d \(S\)-duality also gives us a Langlands duality of the 2-category 2-Cat. According to [10, sect. 7.4.1], 4d \(S\)-duality, which maps abelian \(G\) to its Langlands dual that is itself, will transform the symplectic area \(\mathcal{A}\) of \(T^{2}\) as \[\mathcal{A}\to{}^{L}\mathcal{A}=\frac{4\pi^{2}}{\mathcal{A}}, \tag{7.2}\] where \({}^{L}\mathcal{A}\) is the symplectic area of a torus \({}^{L}T^{2}\) that can be obtained from \(T^{2}\) by inverting the radii of its two circles as \(R\to\alpha^{\prime}/R\) for some constant \(\alpha^{\prime}\). In other words, \({}^{L}T^{2}\) is the \(T\)-dual torus to \(T^{2}\), and \({\rm FF}\text{-cat}(T^{2})\), which is realized by a 2d open \(A\)-model with target \(T^{2}\), will be invariant under \(T\)-duality of the target, i.e., \({\rm FF}\text{-cat}(T^{2})\cong{\rm FF}\text{-cat}({}^{L}T^{2})\). Thus, we have \[2\text{-Cat}_{\text{mod-cat}}\big{(}{\rm FF}\text{-cat}(T^{2})\big{)}\longleftrightarrow 2\text{-Cat}_{\text{mod-cat}}\big{(}{\rm FF}\text{-cat}({}^{L}T^{2})\big{)}. \tag{7.3}\] The last step to further categorify the VW invariant of \(M_{4}\) is to flatten \(S^{1}\), ending it on point boundaries, i.e., let \(S^{1}=[0,1]\). This should give us a 3-category, 3-Cat, consisting of objects, morphisms between these objects, 2-morphisms between these morphisms, and 3-morphisms between these 2-morphisms, giving us a 3-category of 3d boundary conditions of VW theory along \(\mathbb{R}^{+}\times I\times I^{\prime}\) which is assigned to a point. These 3d boundary conditions can be realized by domain walls. From the duality relations (5.1), (5.2), (5.4), (5.6), the correspondences (6.1), (6.2), (6.3), and the identifications (2.17), (3.6), (4.6), we will get Fig. 3 below.
2307.12686
Weak production of $η$ mesons induced by $ν_μ(\barν_μ)$ at MicroBooNE energies
We have studied neutral and charged current (anti)neutrino induced $\eta$ production off the free nucleon target at MicroBooNE energies, in the light of recent results reported by the MicroBooNE collaboration for the total $\eta$ production cross section. This study has been made using a theoretical model in which the weak hadronic current receives contribution from the nonresonant Born terms as well as from the resonance excitations. The Born terms are obtained using the SU(3) symmetric chiral model, used earlier in the study of $K-$meson production. The contribution from the resonance terms is considered from the excitation of five nucleon resonances viz. $S_{11}(1535)$, $S_{11}(1650)$, $P_{11}(1710)$, $P_{11}(1880)$, and $S_{11}(1895)$. To fix the parameters of the vector current interaction, this model is first used to study the electromagnetic production of $\eta$ mesons induced by real and virtual photons, and the theoretical results have been compared with the data from the MAINZ and JLab experiments. The partially conserved axial-vector current hypothesis and generalized Goldberger-Treiman relation are used to fix the parameters of the axial-vector current interaction. The results are presented for the total cross section for the neutral and charged current induced $\eta$ production, ratio of the cross sections for the charged current to neutral current, MicroBooNE flux averaged cross section $\langle \sigma \rangle$, $\left \langle \frac{d\sigma}{dQ^2} \right\rangle$ and $\left\langle \frac{d\sigma}{dp_\eta} \right\rangle$, which may be useful in the future analysis of MicroBooNE as well as other accelerator and atmospheric neutrino experiments being performed in the ${\cal O}$(1)~GeV energy region.
A. Fatima, M. Sajjad Athar, S. K. Singh
2023-07-24T11:02:15Z
http://arxiv.org/abs/2307.12686v2
# Weak production of \(\eta\) mesons induced by \(\nu_{\mu}(\bar{\nu}_{\mu})\) at MicroBooNE energies ###### Abstract We have studied neutral and charged current (anti)neutrino induced \(\eta\) production off the free nucleon target at MicroBooNE energies, in the light of recent results reported by the MicroBooNE collaboration for the total \(\eta\) production cross section. This study has been made using a theoretical model in which the weak hadronic current receives contribution from the nonresonant Born terms as well as from the resonance excitations. The Born terms are obtained using the SU(3) symmetric chiral model, used earlier in the study of \(K-\)meson production. The contribution from the resonance terms is considered from the excitation of five nucleon resonances viz. \(S_{11}(1535)\), \(S_{11}(1650)\), \(P_{11}(1710)\), \(P_{11}(1880)\), and \(S_{11}(1895)\). To fix the parameters of the vector current interaction, this model is first used to study the electromagnetic production of \(\eta\) mesons induced by real and virtual photons, and the theoretical results have been compared with the data from the MAINZ and JLab experiments. The partially conserved axial-vector current hypothesis and generalized Goldberger-Treiman relation are used to fix the parameters of the axial-vector current interaction. The results are presented for the total cross section for the neutral and charged current induced \(\eta\) production, ratio of the cross sections for the charged current to neutral current, MicroBooNE flux averaged cross section \(\left\langle\sigma\right\rangle\), \(\left\langle\frac{d\sigma}{dQ^{2}}\right\rangle\) and \(\left\langle\frac{d\sigma}{d\bar{\nu}_{\eta}}\right\rangle\), which may be useful in the future analysis of MicroBooNE as well as other accelerator and atmospheric neutrino experiments being performed in the \(\mathcal{O}(1)\) GeV energy region. pacs: 25.30.Pt,13.15.+g,12.15.-y,12.39.Fe ## I Introduction The study of the weak production of mesons induced by both the charged and neutral currents in the inelastic sector of the (anti)neutrino-nucleon interactions has historically been centered around the weak production of pions, which is dominated by the excitation of \(\Delta\) resonance and its subsequent decay producing pions [1; 2]. In recent years, the weak production of single pions induced by the charged and neutral currents in the (anti)neutrino reactions has attracted considerable interest as it plays very important role in modeling the weak (anti)neutrino-nucleon cross section in the analysis of neutrino oscillation experiments in the sub-GeV and few GeV energy regions. However, in the GeV energy region of current neutrino oscillation experiments with accelerator neutrinos like MicroBooNE [3], SBND [4], T2K [5], T2HyperK [6], and DUNE [7] as well as with the atmospheric neutrinos like HyperK [8], JUNO [9], and INO [10], the weak production of heavier mesons like \(K^{\pm}\), \(K^{0}(\bar{K}^{0})\), and \(\eta\) could also become relevant and would play significant role in modeling the neutrino-nucleon cross sections in the inelastic sector of neutrino reactions [1]. 
Since these heavy mesons are produced by the weak excitation of higher resonances in the strange and nonstrange sectors and their subsequent decays into baryons and mesons, in addition to, the nonresonant direct production of mesons, the study of the weak production of heavy mesons provides useful information about the electroweak properties of the higher resonances like \(S_{11}(1535)\), \(D_{13}(1520)\), \(S_{11}(1650)\), \(P_{11}(1710)\), \(\Lambda(1405)\), \(\Sigma(1385)\), etc. In this context while there have been quite a few studies of the weak single and associated production of \(K^{\pm}\) and \(K^{0}(\bar{K}^{0})\) mesons in recent years [1; 11; 12; 13], there exists very little work on the weak production of \(\eta\) mesons. Theoretically, the early work by Dombey [14], was followed much later by Alam et al. [15] and Nakamura et al. [16]. We have recently studied, in some detail, the weak production of \(\eta\) mesons induced by the charged current in the neutrino and antineutrino reactions off the nucleon in the energy region of \(E_{\nu(\vec{\nu})}\leq 2\) GeV [17]. Experimentally, the first results on the weak production of \(\eta\) mesons induced by neutrinos and antineutrinos were reported by the BEBC collaboration [18] and later by the ICARUS collaboration [19]. Recently, the MicroBooNE collaboration [20] has reported the results for \(\eta\) production in neutrino interaction on argon by observing two photons through the \(\eta\to 2\gamma\) decay (\(\sim 40\%\) B.R.) in the final state with a cross section \(\sigma_{\nu\to 1\eta+X-2\gamma+0\pi^{0}+X}=1.27\pm 0.33\pm 0.34\times 10^{-41}\) cm\({}^{2}\)/nucleon, implying a total cross section \(\sigma_{\nu\to 1\eta+X}=3.22\pm 0.84\pm 0.86\times 10^{-41}\) cm\({}^{2}\)/nucleon. Since no charged leptons are observed in the final state, this \(\eta\) production cross section includes the weak production of \(\eta\) mesons induced by the charged as well as the neutral current interactions. Further analyses are being done by the MicroBooNE collaboration to isolate the events with a charged lepton in the final state so that the weak \(\eta\) production induced by charged and neutral currents could be studied separately [20]. Moreover, the \(\nu_{\mu}\) beam at MicroBooNE has contamination by the other neutrino flavors, i.e., \(\nu_{\mu}\) being 93.7%, with 5.8% of \(\bar{\nu}_{\mu}\), 0.5% of \(\nu_{e}\), and 0.05% of \(\bar{\nu}_{e}\). It is, therefore, important to theoretically estimate the \(\nu_{l}(\bar{\nu}_{l})\) (\(l=e,\mu)-nucleon cross section for \(\eta\) production induced by the other neutrino flavors in this energy region. Keeping this in mind, we have studied the weak production of \(\eta\) mesons induced by the charged and the neutral weak currents (anti)neutrino-nucleon reactions for all the (anti)neutrino flavors, i.e., \(\nu_{\mu}\), \(\bar{\nu}_{\mu}\), \(\nu_{e}\), and \(\bar{\nu}_{e}\). These studies will also be helpful for the future neutrino oscillation programs like DUNE [7] and SBND [4], in particular, and the other accelerator and atmospheric neutrino experiments being performed in the few GeV energy region, in general. 
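Returning to the MicroBooNE numbers quoted above, a quick arithmetic check (added here for illustration; we use the PDG value \(\mathcal{B}(\eta\to 2\gamma)\simeq 39.4\%\), which the text rounds to \(\sim 40\%\)): \[\sigma_{\nu\to 1\eta+X}\ \simeq\ \frac{\sigma_{\nu\to 1\eta+X\to 2\gamma+0\pi^{0}+X}}{\mathcal{B}(\eta\to 2\gamma)}\ \approx\ \frac{1.27\times 10^{-41}\ {\rm cm}^{2}/{\rm nucleon}}{0.394}\ \approx\ 3.2\times 10^{-41}\ {\rm cm}^{2}/{\rm nucleon},\] which is consistent with the total \(\eta\) production cross section quoted by the collaboration.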
In our earlier study [17], we presented the results for the charged current \(\nu_{\mu}(\bar{\nu}_{\mu})\) induced \(\eta\) production off the nucleon for \(E_{\nu_{\mu}(\bar{\nu}_{\mu})}\leq 2\) GeV, by taking into account the contribution of the direct nonresonant production and the resonant production due to the excitation and decay of low lying \(S_{11}(1535)\), \(S_{11}(1650)\), and \(P_{11}(1710)\) resonances. In this work, we extend our earlier work on the charged current induced \(\eta\) production to higher energies by considering the contribution from additional resonances viz. \(P_{11}(1880)\) and \(S_{11}(1895)\), and also include the weak \(\eta\) production due to the neutral current. This model is then applied to understand the experimental results from the MicroBooNE collaboration. The inclusion of the contribution from the higher resonances is needed because the (anti)neutrino flux at the MicroBooNE has a long tail in energy, and the flux decreases by two orders of magnitude only beyond \(E_{\nu(\bar{\nu})}\geq 2.5\) GeV. Therefore, the flux averaged cross section gets a non-negligible contribution even for \(E_{\nu}=2-3\) GeV. It is important to mention that the MicroBooNE flux peaks around \(E_{\nu}=0.5-0.6\) GeV with the average energy of the dominant component (\(\nu_{\mu}\)) flux at \(E_{\nu_{\mu}}=823\) MeV, while the threshold for the \(\nu_{\mu}\) induced charged (neutral) current \(\eta\) production is 880 MeV (710 MeV). The theoretical calculations have been done using the interaction Lagrangian predicted by the standard model [21; 22] for the charged and the neutral current weak interaction of (anti)neutrinos with nucleons. The contributions from the direct \(\eta\) production due to the nonresonant Born diagrams are calculated using a microscopic model based on the SU(3) chiral Lagrangian assuming \(\eta\) belonging to the octet representation of SU(3), thus, neglecting the \(\eta-\eta^{\prime}\) mixing. The SU(3) Lagrangian has earlier been used to study the weak production of kaons [1]. The contribution from the resonant diagrams due to the excitation of various resonances \(R\) like \(S_{11}(1535)\), \(S_{11}(1650)\), \(P_{11}(1710)\), \(P_{11}(1880)\), and \(S_{11}(1895)\) and their decays into nucleon and \(\eta\) through \(R\to N\eta\) mode are calculated using phenomenological Lagrangians where the \(\eta\) particle has been treated as the physical meson. In the resonance sector, the various parameters appearing in the vector current sector are fixed by first applying this model to study the photon and electron induced eta production from the free nucleon. We have fitted the coupling strength at the strong \(R\to N\eta\) vertex and the electromagnetic coupling strength of the \(N-R\) transition using the eta photoproduction data on the total cross section available from the MAMI collaboration [23; 24] for \(W\leq 2\) GeV. Then the \(Q^{2}\) dependence of the electromagnetic \(N-R\) transition form factors has been obtained by fitting the data of the electron induced \(\eta\) production off the proton target for the total cross section at different values of \(Q^{2}\) (\(Q^{2}<1.4\) GeV\({}^{2}\)) available from the CLAS collaboration [25]. In the axial-vector sector, the axial-vector couplings have been calculated using the partially conserved axial-vector current (PCAC) hypothesis and the generalized Goldberger-Treiman (GT) relation with inputs from the experimentally determined strong \(R\to N\pi\) couplings. 
In the neutral current induced \(\eta\) production, the isospin structure of the neutral currents predicted by the standard model has been used with the experimental values of the electromagnetic form factors of the nucleon and \(N-R\) transition form factors in the electromagnetic sector to determine the weak vector form factors. Using this model, we have obtained the results for the total scattering cross section for the charged and neutral current \(\nu_{\mu}(\bar{\nu}_{\mu})\) and \(\nu_{e}(\bar{\nu}_{e})\) induced scattering off the nucleon target, the ratio of the total cross section for the charged current to neutral current reactions, and finally the MicroBooNE flux averaged \(Q^{2}\) distribution i.e. \(\left\langle\frac{d\sigma}{dQ^{2}}\right\rangle\) vs. \(Q^{2}\), eta momentum distribution i.e. \(\left\langle\frac{d\sigma}{dp_{\eta}}\right\rangle\) vs. \(p_{\eta}\), and the flux averaged total scattering cross section \(\left\langle\sigma\right\rangle\). In Sec. II, we present the formalism for the photon and the electron induced eta production. In Sec. III, the formalism for the charged as well as the neutral current \(\nu_{l}(\bar{\nu}_{l})\) (\(l=e,\mu\)) induced eta production has been presented. The results and discussions are presented in Sec. IV, and Sec. V concludes our findings. ## II Electromagnetic production of \(\eta\) mesons ### \(\eta\) production induced by photons The differential cross section for the photoproduction of \(\eta\) mesons off the free nucleon, i.e., \[\gamma(q)+N(p)\longrightarrow N(p^{\prime})+\eta(p_{\eta}),\qquad\qquad N=p,n \tag{1}\] is written as [17]: \[\frac{d\sigma}{d\Omega}\bigg{|}_{CM}=\frac{1}{64\pi^{2}s}\frac{|\vec{p}\,^{\prime }|}{|\vec{p}|}\overline{\sum_{r}}\sum_{spin}|\mathcal{M}^{r}|^{2}, \tag{2}\] where the quantities in the parentheses of Eq. (1) represent the four momenta of the corresponding particles. The CM energy \(\sqrt{s}\) is expressed as \(s=W^{2}=(q+p)^{2}=M^{2}+2ME_{\gamma}\), with \(E_{\gamma}\) being the energy of the incoming photon in the laboratory frame. \(\overline{\sum}\sum|\mathcal{M}^{r}|^{2}\) is the square of the transition matrix element \(\mathcal{M}^{r}\), for the photon polarization state \(r\), averaged and summed over the initial and final spin states, where \(\mathcal{M}^{r}\) for reaction (1) is written in terms of the real photon polarization vector \(\mathfrak{c}^{r}_{\mu}\), as \[\mathcal{M}^{r}=ee^{r}_{\mu}(q)J^{\mu}, \tag{3}\] with \(e\) being the electromagnetic coupling constant and \(J^{\mu}=\langle N(p^{\prime})\eta(p_{\eta})|\,J^{\mu}_{EM}\,|N(p)\rangle\) being the matrix element of the electromagnetic current (\(J^{\mu}_{EM}\)) taken between the hadronic states \(|N\rangle\) and \(|N\eta\rangle\). The hadronic matrix element receives contribution from the nonresonant Born terms and the terms corresponding to the resonance excitations and their subsequent decay to \(N\eta\) mode, which diagrammatically are shown in Fig. 1. The hadronic currents for the nonresonant Born terms are obtained using the nonlinear sigma model and the total hadronic current \(J^{\mu}\) is obtained by adding the currents corresponding to the nonresonant and resonance terms coherently. For the detailed description of the formalism, readers are referred to Refs. [1; 17]. The expressions of the hadronic currents for \(s\)- and \(u\)- channels of the \(\eta\) photoproduction processes, corresponding to the Feynman diagrams shown in Fig. 
1 (left panel), are obtained as [1; 17]: \[J^{\mu}|_{sN} = -A_{s}\ F_{s}(s)\bar{u}(p^{\prime})\not{p}_{\eta}\gamma_{5}\frac{ \not{p}+\not{q}+M}{s-M^{2}}\left(\gamma^{\mu}e_{N}+i\frac{\kappa_{N}}{2M} \sigma^{\mu\nu}q_{\nu}\right)u(p), \tag{4}\] \[J^{\mu}|_{uN} = -A_{u}\ F_{u}(u)\bar{u}(p^{\prime})\left(\gamma^{\mu}e_{N}+i\frac {\kappa_{N}}{2M}\sigma^{\mu\nu}q_{\nu}\right)\frac{\not{p}^{\prime}-\not{q}+M} {u-M^{2}}\not{p}_{\eta}\gamma_{5}u(p), \tag{5}\] where \(N\) stands for a proton or a neutron in the initial and final states, \(u=(p^{\prime}-q)^{2}\), and the strong coupling strengths of \(s\) and \(u\) channel; \(A_{s}=A_{u}=\left(\frac{D-3F}{2\sqrt{3}f_{\eta}}\right)\) are obtained using the nonlinear sigma model [2], assuming the nucleons Figure 1: Feynman diagrams corresponding to the nonresonant Born terms (left panel) and resonance excitations (right panel) for the process \(W(q)+N(p)\longrightarrow\eta(p_{\eta})+N^{\prime}(p^{\prime})\). Diagrams shown in the top panel are the nucleon pole diagrams, while the one shown in the bottom panel corresponds to the cross nucleon pole diagrams. In the case of electromagnetic interactions, \(W=\gamma,\gamma^{*}\) and \(N^{\prime}=N=p,n\), while in the case of charged current induced weak interactions, \(W=W^{\pm}\) and \(N^{\prime}\) and \(N\) corresponds to the different nucleons depending upon the charge conservation, and for the neutral current induced reactions, \(W=Z\) and \(N^{\prime}=N=p,n\). The quantities in the parentheses represent the four momenta of the corresponding particles. and the \(\eta\) meson belonging, respectively, to the octet baryon and meson representation of the SU(3) representation, thus, neglecting the \(\eta-\eta^{\prime}\) mixing, which is found to be quite small [26]. \(D\) and \(F\) are the axial-vector couplings of the baryon octet and \(f_{\eta}=105\) MeV [27] is the \(\eta\) decay constant. In order to take into account the hadronic structure of the nucleons, the form factors \(F_{s}(s)\), and \(F_{u}(u)\), are introduced at the strong vertex. We use the most general form of the hadronic form factor which is taken to be of the dipole form [11]: \[F_{x}(x)=\frac{\Lambda_{B}^{4}}{\Lambda_{B}^{4}+(x-M^{2})^{2}}, x=s,u \tag{6}\] where \(\Lambda_{B}\) is the cut-off parameter taken to be the same for the s- and u-channel nonresonant Born terms, and \(x\) represents the Mandelstam variables \(s,\ u\). The value of \(\Lambda_{B}\) is fitted to the experimental data for the proton and neutron targets and the best fitted value is \(\Lambda_{B}=0.75\) GeV and \(0.72\) GeV, respectively. In the case of Born terms, the gauge invariance is automatically implemented for the \(\eta\) production processes. In the present work, we have taken into account the resonances, which have mass \(M_{R}<2\) GeV and a significant branching ratio to the \(N\eta\) decay mode reported in PDG [28]. Specifically, we have considered five spin \(\frac{1}{2}\) resonances viz. \(S_{11}(1535)\), \(S_{11}(1650)\), \(P_{11}(1710)\), \(P_{11}(1880)\), and \(S_{11}(1895)\). The general properties of these resonances like mass, decay width, spin, etc. are given in Table 1, where we see that \(S_{11}(1535)\) resonance dominates the coupling to the \(N\eta\) channel. 
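Going back for a moment to the hadronic form factor of Eq. (6), a minimal numerical sketch may be useful (assuming GeV units and the best-fit cut-off \(\Lambda_{B}=0.75\) GeV for the proton target quoted above; the sample energies are illustrative only):

```python
M = 0.938            # nucleon mass (GeV), approximate value used only for illustration
LAMBDA_B = 0.75      # best-fit cut-off for the proton target (GeV), Eq. (6)

def born_form_factor(x, cutoff=LAMBDA_B, mass=M):
    """Dipole form factor F_x(x) of Eq. (6) for the nonresonant Born terms;
    x is the Mandelstam variable s or u in GeV^2."""
    return cutoff**4 / (cutoff**4 + (x - mass**2)**2)

# Example: evaluate F_s(s) at a few CM energies W above the eta threshold (s = W^2)
for W in (1.49, 1.54, 1.70, 1.90):   # GeV
    s = W * W
    print(f"W = {W:.2f} GeV  ->  F_s(s) = {born_form_factor(s):.3f}")
```

The fall-off with increasing \(s\) illustrates how this form factor suppresses the nonresonant Born terms away from threshold.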
The most general form of the hadronic currents for the \(s-\) and \(u-\) channel processes where a resonance state \(R_{\frac{1}{2}}\) is produced and decays to an \(\eta\) and a nucleon in the final state, are written as [2]: \[j^{\mu}\big{|}_{s}=F_{s}^{*}(s)\ \frac{g_{RN\eta}}{f_{\eta}}\bar{u}(p^{\prime})\not{p}_{\eta}\gamma_{5}\Gamma_{s}\left(\frac{\not{p}+\not{q}+M_{R}}{s-M_{R}^{2}+iM_{R}\Gamma_{R}}\right)\Gamma_{\frac{1}{2}\pm}^{\mu}u(p),\] \[j^{\mu}\big{|}_{u}=F_{u}^{*}(u)\ \frac{g_{RN\eta}}{f_{\eta}}\bar{u}(p^{\prime})\Gamma_{\frac{1}{2}\pm}^{\mu}\left(\frac{\not{p}^{\prime}-\not{q}+M_{R}}{u-M_{R}^{2}+iM_{R}\Gamma_{R}}\right)\not{p}_{\eta}\gamma_{5}\Gamma_{s}u(p), \tag{7}\] where \(M_{R}\) and \(\Gamma_{R}\) are the mass and the total decay width of the resonance, \(g_{RN\eta}\) is the strong \(RN\eta\) coupling, and the vertex function \(\Gamma_{\frac{1}{2}\pm}^{\mu}\) for the positive and negative parity resonances is given by \[\Gamma_{\frac{1}{2}^{+}}^{\mu}=V_{\frac{1}{2}}^{\mu},\qquad\qquad\Gamma_{\frac{1}{2}^{-}}^{\mu}=V_{\frac{1}{2}}^{\mu}\gamma_{5}, \tag{8}\] with \(V_{\frac{1}{2}}^{\mu}\) being the vector \(N-R\) transition current. In analogy with Eq. (6), a form factor is introduced at the strong \(RN\eta\) vertex, \[F_{x}^{*}(x)=\frac{\Lambda_{R}^{4}}{\Lambda_{R}^{4}+(x-M_{R}^{2})^{2}},\qquad\qquad x=s,u\] where \(\Lambda_{R}\) is the cut-off parameter whose value is fitted to the experimental data. In general, \(\Lambda_{R}\) would be different from \(\Lambda_{B}\), however, in the case of \(\eta\) production by photons, it happens that the same value of \(\Lambda_{R}\) as that of \(\Lambda_{B}\) i.e.
\(\Lambda_{R}=\Lambda_{B}=0.75\) GeV for the proton target and \(\Lambda_{R}=\Lambda_{B}=0.72\) GeV for the neutron target gives the best fit to the experimental data. To determine the value of the strong \(RN\eta\) coupling, we start by writing the most general form of \(RN\eta\) Lagrangian [17]: \[{\cal L}_{RN\eta}=\frac{g_{RN\eta}}{f_{\eta}}\bar{\Psi}_{R}\;\Gamma_{s}^{\mu} \;\partial_{\mu}\eta\,\Psi, \tag{12}\] where \(\Psi\) is the nucleon field, \(\Psi_{R}\) is the resonance field, and \(\eta\) is the eta field. The interaction vertex \(\Gamma_{s}^{\mu}=\gamma^{\mu}\gamma^{5}\) (\(\gamma^{\mu}\)) stands for positive (negative) parity resonance states. Using the above Lagrangian, one obtains the expression for the decay width in the resonance rest frame as [1]: \[\Gamma_{R\to N\eta}=\frac{{\cal C}}{4\pi}\left(\frac{g_{RN\eta}}{f_{\eta}} \right)^{2}(M_{R}\pm M)^{2}\,\frac{E_{N}\mp M}{M_{R}}|\vec{p}_{\eta}^{\rm cm}|, \tag{13}\] where the upper (lower) sign represents the positive (negative) parity resonance, \({\cal C}=1\) for \(\eta\) production processes, and \(|\vec{p}_{\eta}^{\rm cm}|\) is the outgoing eta momentum measured in the resonance rest frame and is given by, \[|\vec{p}_{\eta}^{\rm cm}|=\frac{\sqrt{(M_{R}^{2}-M_{\eta}^{2}-M^{2})^{2}-4M_{ \eta}^{2}M^{2}}}{2M_{R}} \tag{14}\] and \(E_{N}\), the outgoing nucleon energy is \[E_{N}=\frac{M_{R}^{2}+M^{2}-M_{\eta}^{2}}{2M_{R}}. \tag{15}\] In Fig. 2, we have presented the results for the total scattering cross section \(\sigma\) as a function of \(W\) for \(\gamma+p\longrightarrow p+\eta\) and \(\gamma+n\longrightarrow n+\eta\) processes in the region of \(W\) from \(\eta\) production threshold to \(W=1.9\) GeV. We have compared our theoretical results with the experimental data obtained by McNicoll et al. [23] for the MAMI crystal ball collaboration on the proton target and the quasifree neutron data from Werthmuller et al. [24] for the MAMI A2 collaboration. It may be observed from the figure that in the case of \(\eta\) production from the proton and neutron targets, our results, with a very few free parameters viz. \(\Lambda_{B}\) and \(\Lambda_{R}\), are in a very good agreement with the available experimental data. 
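As an illustration of how the strong couplings collected in Table 1 follow from the decay width formula, Eqs. (13)-(15) can be inverted numerically. A minimal sketch for \(S_{11}(1535)\) (negative parity, hence the lower signs in Eq. (13)), using the Table 1 mass, width and \(N\eta\) branching ratio; small differences from the tabulated \(g_{RN\eta}\) simply reflect the precise hadron masses adopted:

```python
import math

# Inputs for S11(1535) from Table 1 (GeV), plus f_eta and approximate hadron masses
M_R, Gamma_R, BR_Neta = 1.510, 0.130, 0.40
M, M_eta, f_eta = 0.939, 0.548, 0.105

# Eq. (14): eta momentum in the resonance rest frame
p_cm = math.sqrt((M_R**2 - M_eta**2 - M**2)**2 - 4.0 * M_eta**2 * M**2) / (2.0 * M_R)
# Eq. (15): outgoing nucleon energy in the resonance rest frame
E_N = (M_R**2 + M**2 - M_eta**2) / (2.0 * M_R)

# Eq. (13), lower signs (negative parity):
#   Gamma_{R -> N eta} = (1/4pi) (g/f_eta)^2 (M_R - M)^2 (E_N + M) |p_cm| / M_R
Gamma_Neta = Gamma_R * BR_Neta
g_RNeta = f_eta * math.sqrt(4.0 * math.pi * Gamma_Neta * M_R /
                            ((M_R - M)**2 * (E_N + M) * p_cm))

print(f"p_cm = {p_cm:.3f} GeV, E_N = {E_N:.3f} GeV, g_RNeta = {g_RNeta:.3f}")
# g_RNeta comes out close to the value 0.3696 listed in Table 1
```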
### Electroproduction of eta meson The electron induced \(\eta\) production off the nucleon target is given by the reaction \[e^{-}(k)+N(p)\longrightarrow e^{-}(k^{\prime})+N(p^{\prime})+\eta(p_{\eta})\,, \tag{16}\] \begin{table} \begin{tabular}{c c c c c c c} Resonance \(\rightarrow\) & \(S_{11}(1535)\) & \(S_{11}(1650)\) & \(P_{11}(1710)\) & \(P_{11}(1880)\) & \(S_{11}(1895)\) \\ Parameters \(\downarrow\) & & & & & & \\ \hline \(M_{R}\) (GeV) & \(1.510\pm 0.01\) & \(1.655\pm 0.015\) & \(1.700\pm 0.02\) & \(1.860\pm 0.04\) & \(1.910\pm 0.02\) \\ \hline \(\Gamma_{R}\) (GeV) & \(0.130\pm 0.02\) & \(0.135\pm 0.035\) & \(0.120\pm 0.04\) & \(0.230\pm 0.05\) & \(0.110\pm 0.03\) \\ \hline \(I(J^{P})\) & \(\frac{1}{2}(\frac{1}{2}^{-})\) & \(\frac{1}{2}(\frac{1}{2}^{-})\) & \(\frac{1}{2}(\frac{1}{2}^{+})\) & \(\frac{1}{2}(\frac{1}{2}^{+})\) & \(\frac{1}{2}(\frac{1}{2}^{-})\) \\ \hline \multirow{4}{*}{Branching ratio (in \%)} & \(N\pi\) & \(32-52\) (43) & \(50-70\) (60) & \(5-20\) (16) & \(3-31\) (34) & \(2-18\) (23) \\ & \(N\eta\) & \(30-55\) (40) & \(15-35\) (25) & \(10-50\) (20) & \(1-55\) (20) & \(15-45\) (30) \\ & \(K\Lambda\) & \(-\) & \(5-15\) (10) & \(5-25\) (15) & \(1-3\) (2) & \(3-23\) (13) \\ & \(N\pi\pi\) & \(4-31\) (17) & \(20-58\) (5) & \(14-48\) (49) & \(>32\) (44) & \(17-74\) (34) \\ \hline \(|g_{RN\pi}|\) & & \(0.1019\) & \(0.0915\) & \(0.0418\) & \(0.0466\) & \(0.0229\) \\ \hline \(|g_{RN\eta}|\) & & \(0.3696\) & \(0.1481\) & \(0.1567\) & \(0.1369\) & \(0.0877\) \\ \hline \end{tabular} \end{table} Table 1: Properties of the spin \(\frac{1}{2}\) resonances available in the PDG [28], with Breit-Wigner mass \(M_{R}\), the total decay width \(\Gamma_{R}\), isospin \(I\), spin \(J\), parity \(P\), the branching ratio full range available from PDG (used in the present calculations) into different meson-baryon channels like \(N\pi\), \(N\eta\), \(K\Lambda\), and \(N\pi\pi\), and the strong coupling constant \(g_{RN\pi}\) and \(g_{RN\eta}\). where the four-momentum for each particle is indicated in the parentheses. The four-momentum of the virtual photon exchanged in electroproduction is given by \(q=k-k^{\prime}\). The differential scattering cross section for the electroproduction of \(\eta\) mesons in the hadronic CM frame is given by [17] \[\frac{d^{5}\sigma}{dE_{l}\ d\Omega_{l}d\Omega_{qp_{\eta}}}=\frac{1}{32(2\pi)^{ 5}}\frac{|\vec{k}^{\prime}||\vec{p}_{\eta}|}{E_{e}MW}\overline{\sum}\sum|{\cal M }|^{2}, \tag{17}\] where \(E_{e}(E_{l})\) is the energy of the incoming (outgoing) electron; \(\overline{\sum}\sum|{\cal M}|^{2}\) is the square of the transition amplitude averaged (summed) over the spins of the initial (final) states with the transition matrix element being written in terms of the leptonic (\(l_{\mu}\)) and the hadronic (\(j^{\mu}\)) currents as \[{\cal M}=\frac{e^{2}}{q^{2}}\,l_{\mu}j^{\mu}. \tag{18}\] The leptonic current is given as \[l_{\mu}=\bar{u}(k^{\prime})\gamma_{\mu}u(k), \tag{19}\] and \(j^{\mu}\) is the sum of the hadronic currents corresponding to the Born terms and resonance excitations, which will be discussed later in this section. The five-fold differential cross section (Eq. 17) for the electroproduction can also be expressed as [29; 30; 31]: \[\frac{d\sigma}{d\Omega_{l}\,dE_{l}\,d\Omega_{qp_{\eta}}}=\Gamma\,\frac{d \sigma_{\rm v}}{d\Omega_{qp_{\eta}}}\,, \tag{20}\] with the flux of the virtual photon given by \[\Gamma=\frac{\alpha}{2\pi^{2}}\,\frac{E_{l}}{E_{e}}\,\frac{K}{Q^{2}}\,\frac{1 }{1-\varepsilon}\,. 
\tag{21}\] In the above equation, \(K=(W^{2}-M^{2})/2M\) denotes the "photon equivalent energy", the laboratory energy necessary for a real photon to excite a hadronic system with CM energy \(W\) and \(\varepsilon\) is the transverse polarization parameter of Figure 2: Total cross section \(\sigma\) vs. \(W\) for \(\gamma p\longrightarrow\eta p\) (solid line) and \(\gamma n\longrightarrow\eta n\) (dashed line) processes using the full model. The experimental points for the proton target (solid circle) are obtained from MAMI crystal ball collaboration [23], and for the neutron target (solid diamond) we have used the quasifree neutron data from MAMI A2 collaboration [24]. the virtual photon, given as \[\varepsilon=\left(1+2\frac{|\hat{q}|^{2}}{Q^{2}}\tan^{2}\frac{\theta_{l}}{2} \right)^{-1}\,, \tag{22}\] with \(Q^{2}=-q^{2}=-(k-k^{\prime})^{2}\). The hadronic currents corresponding to the nucleon Born terms exchanged in the \(s\)- and \(u\)-channels for the electroproduction of eta mesons, depicted in Fig. 1, are obtained using the nonlinear sigma model and are written as [17]: \[J^{\mu}|_{s(N)} = F_{s}(s)\ \frac{D-3F}{2\sqrt{3}f_{\eta}}\bar{u}(p^{\prime})\not{p}_{ \eta}\gamma^{5}\frac{\not{p}+\not{q}+M}{(p+q)^{2}-M^{2}}\mathcal{O}_{N}^{\mu}u(p)\] \[J^{\mu}|_{u(N)} = F_{u}(u)\ \frac{D-3F}{2\sqrt{3}f_{\eta}}\bar{u}(p^{\prime}) \mathcal{O}_{N}^{\mu}\frac{\not{p}-\not{p}_{\eta}+M}{(p-p_{\eta})^{2}-M^{2}} \not{p}_{\eta}\gamma^{5}u(p), \tag{23}\] where the \(\gamma NN\) vertex operator \(\mathcal{O}_{N}^{\mu}\) is expressed in terms of the \(Q^{2}\) dependent nucleon form factors as, \[\mathcal{O}_{N}^{\mu} = F_{1}^{N}(Q^{2})\gamma^{\mu}+F_{2}^{N}(Q^{2})i\sigma^{\mu\nu} \frac{q_{\nu}}{2M}. \tag{24}\] The Dirac and Pauli form factors of the nucleon viz. \(F_{1}^{p,n}(Q^{2})\) and \(F_{2}^{p,n}(Q^{2})\), respectively, are expressed in terms of the Sach's electric (\(G_{E}^{p,n}(Q^{2})\)) and magnetic (\(G_{M}^{p,n}(Q^{2})\)) form factors of the nucleons, for which various parameterizations are available in the literature. In the present work we have taken the parameterization of these form factors from Bradford et al. [32] also known as BBBA05 parameterization. For details, see Ref. [17]. The general expression of the hadronic current for the resonance excitation in the s- and u- channels, corresponding to the Feynman diagrams shown in Fig. 1 (right panel), is written as, \[j^{\mu}\big{|}_{s} = F_{s}^{*}(s)\ \frac{g_{RN\eta}}{f_{\eta}}\bar{u}(p\,^{\prime}) \not{p}_{\eta}\gamma_{5}\Gamma_{s}\left(\frac{\not{p}+\not{q}+M_{R}}{s-M_{R}^ {2}+iM_{R}\Gamma_{R}}\right)\Gamma_{\frac{1}{2}\pm}^{\mu}u(p\,),\] \[j^{\mu}\big{|}_{u} = F_{u}^{*}(u)\ \frac{g_{RN\eta}}{f_{\eta}}\bar{u}(p\,^{\prime}) \Gamma_{\frac{1}{2}\pm}^{\mu}\left(\frac{\not{p}^{\prime}-\not{q}+M_{R}}{u-M_ {R}^{2}+iM_{R}\Gamma_{R}}\right)\not{p}_{\eta}\gamma_{5}\Gamma_{s}u(p\,). \tag{25}\] The vertex function \(\Gamma_{\frac{1}{2}\pm}^{\mu}\) for the positive and negative parity resonances is given in Eq. (8), where the vector current \(V_{\frac{1}{2}}^{\mu}\) in the case of electroproduction processes is expressed in terms of the \(Q^{2}\) dependent form factors \(F_{1,2}^{R^{+},R^{0}}(Q^{2})\) as: \[V_{\frac{1}{2}}^{\mu} = \frac{F_{1}^{R}(Q^{2})}{(2M)^{2}}(\not{q}q^{\mu}+Q^{2}\gamma^{\mu })+\frac{F_{2}^{R}(Q^{2})}{2M}i\sigma^{\mu\nu}q_{\nu},\qquad R=R^{+},R^{0}. 
\tag{26}\] The electromagnetic transition form factors for the charged (\(F_{1,2}^{R^{+}}(Q^{2})\)) and neutral (\(F_{1,2}^{R^{0}}(Q^{2})\)) states are then related to the helicity amplitudes given by the following relations [1]: \[A_{\frac{1}{2}} = \sqrt{\frac{2\pi\alpha}{K_{R}}}\left\langle R,J_{Z}=\frac{1}{2} \right|\epsilon_{\mu}^{+}J_{i}^{\mu}\left|N,J_{Z}=\frac{-1}{2}\right\rangle\zeta\] \[S_{\frac{1}{2}} = -\sqrt{\frac{2\pi\alpha}{K_{R}}}\frac{|\vec{q}|}{\sqrt{Q^{2}}} \left\langle R,J_{Z}=\frac{1}{2}\right|\epsilon_{\mu}^{0}J_{i}^{\mu}\left|N,J_ {Z}=\frac{-1}{2}\right\rangle\zeta \tag{27}\] where in the resonance rest frame, \[K_{R} = \frac{M_{R}^{2}-M^{2}}{2M_{R}},\hskip 28.452756pt|\vec{q}|^{2}= \frac{(M_{R}^{2}-M^{2}-Q^{2})^{2}}{4M_{R}^{2}}+Q^{2},\] \[\epsilon_{\pm}^{\mu} = \mp\frac{1}{\sqrt{2}}(0,1,\pm i,0),\hskip 42.679134pt\epsilon_{0}^{ \mu}=\frac{1}{\sqrt{Q^{2}}}(|\vec{q}|,1,0,q^{0}). \tag{28}\] The parameter \(\zeta\) is model dependent which is related to the sign of \(R\to N\pi\), and for the present calculation is taken as \(\zeta=1\). Using Eq. (28) in Eq. (27), the helicity amplitudes \(A_{\frac{1}{2}}(Q^{2})\) and \(S_{\frac{1}{2}}(Q^{2})\) in terms of the electromagnetic form factors \(F_{1}^{R^{+},R^{0}}\) and \(F_{2}^{R^{+},R^{0}}\) are obtained as [17]: \[A_{\frac{1}{2}}^{p,n}(Q^{2}) = \sqrt{\frac{2\pi\alpha}{M}\frac{(M_{R}\mp M)^{2}+Q^{2}}{M_{R}^{2} -M^{2}}}\left(\frac{Q^{2}}{4M^{2}}F_{1}^{R^{+},R^{0}}(Q^{2})+\frac{M_{R}\pm M }{2M}F_{2}^{R^{+},R^{0}}(Q^{2})\right)\] \[S_{\frac{1}{2}}^{p,n}(Q^{2}) = \mp\sqrt{\frac{\pi\alpha}{M}\frac{(M_{R}\pm M)^{2}+Q^{2}}{M_{R}^ {2}-M^{2}}}\frac{(M_{R}\mp M)^{2}+Q^{2}}{4M_{R}M}\left(\frac{M_{R}\pm M}{2M}F_ {1}^{R^{+},R^{0}}(Q^{2})-F_{2}^{R^{+},R^{0}}(Q^{2})\right), \tag{29}\] where upper (lower) sign corresponds to positive (negative) parity resonances. The \(Q^{2}\) dependence of the helicity amplitudes (Eq. (29)) is generally parameterized as [33]: \[{\cal A}_{\alpha}(Q^{2})={\cal A}_{\alpha}(0)(1+\alpha Q^{2})\,e^{-\beta Q^{2}}, \tag{30}\] where \({\cal A}_{\alpha}(Q^{2})\) are the helicity amplitudes; \(A_{\frac{1}{2}}(Q^{2})\) and \(S_{\frac{1}{2}}(Q^{2})\) and parameters \({\cal A}_{\alpha}(0)\) are generally determined by a fit to the photoproduction data of the corresponding resonance. In the present work, the values of \(A_{\frac{1}{2}}(0)\) are taken from the PDG [28]. While the parameters \(\alpha\) and \(\beta\) are obtained by fitting the electroproduction data on the total cross section at different \(Q^{2}\) available from the CLAS experiment [25], and the values of these parameters for the different nucleon resonances are tabulated in Table 2. We obtain the total cross section \(\sigma_{\rm v}\) for \(\gamma^{*}p\to\eta p\) process by integrating the angular distribution (\(\frac{d\sigma_{\rm v}}{d\Omega_{\rm v}p_{\eta}}\)) given in Eq. (20) over the polar and azimuthal angles, which is presented in Fig. 3 as a function of CM energy \(W\) at different values of \(Q^{2}\) ranging from \(Q^{2}=0.165\) GeV\({}^{2}\) to \(1.3\) GeV\({}^{2}\). The theoretical calculations are presented for the full model, which receives contribution from the nonresonant Born terms as well as from the \(S_{11}(1535)\), \(S_{11}(1650)\), \(P_{11}(1710)\), \(P_{11}(1880)\) and \(S_{11}(1895)\) resonance excitations. 
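Since the comparison in Fig. 3 is made at the level of the virtual-photon cross section \(\sigma_{\rm v}\) of Eq. (20), it may help to spell out the flux factor of Eqs. (21)-(22) numerically. A minimal sketch (electron mass neglected, lab-frame kinematics; the beam energy and scattering angle below are illustrative only and are not the CLAS settings):

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant
M = 0.938               # nucleon mass (GeV)

def virtual_photon_flux(E_e, E_l, theta_l):
    """Flux Gamma of Eq. (21) and polarization parameter epsilon of Eq. (22);
    E_e, E_l in GeV, theta_l in radians, electron mass neglected."""
    Q2 = 4.0 * E_e * E_l * math.sin(theta_l / 2.0) ** 2        # photon virtuality
    nu = E_e - E_l                                             # lab-frame energy transfer
    W2 = M**2 + 2.0 * M * nu - Q2                              # hadronic CM energy squared
    q3_sq = nu**2 + Q2                                         # |q_vec|^2 in the lab frame
    eps = 1.0 / (1.0 + 2.0 * (q3_sq / Q2) * math.tan(theta_l / 2.0) ** 2)   # Eq. (22)
    K = (W2 - M**2) / (2.0 * M)                                # photon equivalent energy
    flux = ALPHA / (2.0 * math.pi**2) * (E_l / E_e) * K / Q2 / (1.0 - eps)  # Eq. (21)
    return Q2, math.sqrt(W2), eps, flux

Q2, W, eps, flux = virtual_photon_flux(E_e=2.5, E_l=1.2, theta_l=math.radians(20.0))
print(f"Q2 = {Q2:.2f} GeV^2, W = {W:.2f} GeV, eps = {eps:.2f}, Gamma = {flux:.4f} GeV^-1 sr^-1")
```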
We have compared our theoretical calculations with the experimental data available from the CLAS experiment [25] and found a very good agreement between the experimental and theoretical results at all values of \(Q^{2}\), including \(Q^{2}>1\) GeV\({}^{2}\). ## III Weak production of \(\eta\) mesons ### Charged current induced reactions The charged current (CC) (anti)neutrino induced single \(\eta\) production off the nucleon target (Fig. 1) are given by the following reactions \[\nu_{\mu}(k)+n(p) \longrightarrow \mu^{-}(k^{\prime})+\eta(p_{\eta})+p(p^{\prime}), \tag{31}\] \[\bar{\nu}_{\mu}(k)+p(p) \longrightarrow \mu^{+}(k^{\prime})+\eta(p_{\eta})+n(p^{\prime}), \tag{32}\] where the quantities in the parentheses are the four momenta of the particles. Figure 3: Integrated cross section \(\sigma_{\rm v}\) vs \(W\) at different \(Q^{2}\) for \(\gamma^{*}p\to\eta p\) process. The experimental points are the CLAS 2007 data [25]. Solid line shows the result of the full model which receives contribution from the nonresonant Born terms as well as from the nucleon resonance excitations. The double differential scattering cross section \(\frac{d^{2}\sigma}{dQ^{2}dW}\), for the reactions shown in Eqs. (31) and (32), in the laboratory frame, is expressed as [17] \[\frac{d^{2}\sigma}{dQ^{2}dW}=\int_{0}^{2\pi}d\phi_{qp_{\eta}}\int_{E_{\eta}^{min }}^{E_{\eta}^{max}}dE_{\eta}\frac{1}{(2\pi)^{4}}\frac{1}{64E_{\nu}^{2}M^{2}} \frac{W}{|\vec{q}\,|}\overline{\sum}\sum|\mathcal{M}|^{2}. \tag{33}\] The transition matrix element \(\mathcal{M}\), in the case of weak charged current induced process, is given by \[\mathcal{M}=\frac{G_{F}}{\sqrt{2}}\cos\theta_{C}l_{\mu}J^{\mu}, \tag{34}\] with \(G_{F}\) being the Fermi coupling constant and \(\theta_{C}\) being the Cabibbo mixing angle. The leptonic current \(l_{\mu}\) is given \[l_{\mu}=\bar{u}(k^{\prime})\gamma_{\mu}(1\mp\gamma_{5})u(k) \tag{35}\] where \(-(+)\) stands for neutrino (antineutrino) induced reactions and \(J^{\mu}=J^{\mu}_{NR}+J^{\mu}_{R}\) is the weak hadronic current, which receives contribution from both the nonresonant Born terms as well as the resonance excitations. The hadronic currents for the Born diagrams (s- and u-channels) with nucleon poles are given in Eq. (23), except for the fact that \(\mathcal{O}_{N}\) is now replaced by \(\mathcal{O}_{V}\), where \(\mathcal{O}_{V}=V^{\mu}-A^{\mu}\) is the weak vertex factor. \(V^{\mu}\) and \(A^{\mu}\) are defined in terms of the weak vector and axial-vector form factors as \[V^{\mu} =f_{1}^{V}(Q^{2})\gamma^{\mu}+\frac{f_{2}^{V}(Q^{2})}{2M}i\sigma^ {\mu\nu}q_{\nu}, \tag{36}\] \[A^{\mu} =\left[g_{1}(Q^{2})\gamma^{\mu}+\frac{g_{3}(Q^{2})}{M}q^{\mu} \right]\gamma_{5}, \tag{37}\] where \(f_{1,2}^{V}(Q^{2})\) are, respectively, the isovector vector form factors, and \(g_{1}(Q^{2})\) and \(g_{3}(Q^{2})\) are the axial-vector and pseudoscalar form factors. The two isovector form factors \(f_{1,2}^{V}(Q^{2})\) are expressed in terms of the Dirac (\(F_{1}^{p,n}(Q^{2})\)) and Pauli (\(F_{2}^{p,n}(Q^{2})\)) form factors, discussed in Section II.2, for the proton and the neutron, using the relationships \[f_{1,2}^{V}(Q^{2})=F_{1,2}^{p}(Q^{2})-F_{1,2}^{n}(Q^{2}). \tag{38}\] These electromagnetic form factors may be rewritten in terms of the electric (\(G_{E}^{N}(Q^{2})\)) and magnetic (\(G_{M}^{N}(Q^{2})\)) Sachs' form factors. 
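To illustrate Eqs. (36)-(38) numerically, the sketch below builds the isovector vector form factors from Sachs form factors. One assumption should be flagged: instead of the BBBA05 parameterization [32] actually used in the text, a simple dipole ansatz (with \(G_{E}^{n}\simeq 0\)) is taken as a stand-in, purely for illustration; the standard Sachs-to-Dirac/Pauli conversion with \(\tau=Q^{2}/4M^{2}\) is used.

```python
M = 0.938                      # nucleon mass (GeV)
MU_P, MU_N = 2.793, -1.913     # proton and neutron magnetic moments
M_V2 = 0.71                    # dipole mass squared (GeV^2) of the illustrative ansatz

def sachs(Q2):
    """Illustrative dipole Sachs form factors (stand-in for BBBA05 [32])."""
    GD = (1.0 + Q2 / M_V2) ** -2
    return {"GEp": GD, "GMp": MU_P * GD, "GEn": 0.0, "GMn": MU_N * GD}

def dirac_pauli(GE, GM, Q2):
    """Convert Sachs -> Dirac (F1) and Pauli (F2) form factors."""
    tau = Q2 / (4.0 * M**2)
    return (GE + tau * GM) / (1.0 + tau), (GM - GE) / (1.0 + tau)

def isovector_ff(Q2):
    """Eq. (38): f_i^V(Q^2) = F_i^p(Q^2) - F_i^n(Q^2), i = 1, 2."""
    g = sachs(Q2)
    F1p, F2p = dirac_pauli(g["GEp"], g["GMp"], Q2)
    F1n, F2n = dirac_pauli(g["GEn"], g["GMn"], Q2)
    return F1p - F1n, F2p - F2n

for Q2 in (0.0, 0.5, 1.0):     # GeV^2
    f1V, f2V = isovector_ff(Q2)
    print(f"Q2 = {Q2:.1f} GeV^2 : f1V = {f1V:.3f}, f2V = {f2V:.3f}")
# At Q^2 = 0 this reproduces f1V = 1 and f2V = kappa_p - kappa_n = 3.706
```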
The axial-vector form factor \(g_{1}(Q^{2})\) is parameterized as \[g_{1}(Q^{2})=g_{A}(0)\ \left[1+\frac{Q^{2}}{M_{A}^{2}}\right]^{-2}, \tag{39}\] where \(g_{A}(0)=1.267\) is the axial-vector charge and \(M_{A}\) is the axial dipole mass, which in the numerical calculations is taken as the world average value i.e. \(M_{A}=1.026\) GeV [34]. On the other hand pseudoscalar form factor \(g_{3}(Q^{2})\) is expressed in terms of \(g_{1}(Q^{2})\) using the PCAC hypothesis and Goldberger-Treiman relation as \[g_{3}(Q^{2})=\frac{2M^{2}g_{1}(Q^{2})}{m_{\pi}^{2}+Q^{2}}, \tag{40}\] with \(m_{\pi}\) being the pion mass. Next, we discuss the positive and negative parity resonance excitation mechanism for the weak interaction induced \(\eta\) production. The general expression of the hadronic current for the \(s-\) and \(u-\) channel resonance excitations and their subsequent decay to \(N\eta\) mode are given in Eq. (25), where the vertex factor \(\Gamma_{\frac{1}{2}\pm}^{\mu}\) is now written as \[\Gamma_{\frac{1}{2}^{+}}^{\mu}=V_{\frac{1}{2}}^{\mu}-A_{\frac{1}{2}}^{\mu}, \tag{41}\] for the positive parity resonance, and as \[\Gamma_{\frac{1}{2}^{-}}^{\mu}=\left(V_{\frac{1}{2}}^{\mu}-A_{\frac{1}{2}}^{ \mu}\right)\gamma_{5}, \tag{42}\] for the negative parity resonance. The vector and axial-vector vertex factors for the weak charged current interaction processes are given by \[V_{\frac{1}{2}}^{\mu} = \frac{f_{1}^{CC}(Q^{2})}{(2M)^{2}}\left(Q^{2}\gamma^{\mu}+\not{q}q^ {\mu}\right)+\frac{f_{2}^{CC}(Q^{2})}{2M}i\sigma^{\mu\alpha}q_{\alpha}, \tag{43}\] \[A_{\frac{1}{2}}^{\mu} = \left[g_{1}^{CC}(Q^{2})\gamma^{\mu}+\frac{g_{3}^{CC}(Q^{2})}{M}q^ {\mu}\right]\gamma_{5}, \tag{44}\] where \(f_{i}^{CC}(Q^{2})\) (\(i=1,2\)) are the isovector \(N-R\) transition form factors which, in turn, are expressed in terms of the charged (\(F_{i}^{R+}(Q^{2})\)) and neutral (\(F_{i}^{R0}(Q^{2})\)) electromagnetic \(N-R\) transition form factors as: \[f_{i}^{CC}(Q^{2})=F_{i}^{R+}(Q^{2})-F_{i}^{R0}(Q^{2}),\qquad i=1,2 \tag{45}\] Further, these form factors are related to the helicity amplitudes as discussed in Section II.2. The axial-vector current consists of two form factors viz. \(g_{1}^{CC}(Q^{2})\) and \(g_{3}^{CC}(Q^{2})\), which are determined assuming the PCAC hypothesis and pion pole dominance of the divergence of the axial-vector current through the generalized GT relation for \(N-R\) transition [1]. The axial-vector coupling \(g_{1}^{CC}\) at \(Q^{2}=0\) is obtained as [17] \[g_{1}^{CC}(0)=2g_{RN\pi}, \tag{46}\] with \(g_{RN\pi}\) being the coupling strength for \(R\to N\pi\) decay, which has been determined by the partial decay width of the resonance and tabulated in Table 1. Since no information about the \(Q^{2}\) dependence of the axial-vector form factor is known experimentally, therefore, a dipole form is assumed: \[g_{1}^{CC}(Q^{2})=\frac{g_{1}^{CC}(0)}{\left(1+\frac{Q^{2}}{M_{A}^{2}}\right)^ {2}}, \tag{47}\] with \(M_{A}=1.026\) GeV, and the pseudoscalar form factor \(g_{3}^{CC}(Q^{2})\) is given by \[g_{3}^{CC}(Q^{2})=\frac{(MM_{R}\pm M^{2})}{m_{\pi}^{2}+Q^{2}}g_{1}^{CC}(Q^{2}), \tag{48}\] where \(+(-)\) sign is for positive (negative) parity resonances. However, the contribution of \(g_{3}^{CC}(Q^{2})\) being directly proportional to the lepton mass squared is almost negligible. ### Neutral current induced reactions The neutral current (NC) (anti)neutrino induced single \(\eta\) production off the nucleon target (Fig. 
1) are given by the following reactions \[\nu_{l}(k)+N(p) \longrightarrow \nu_{l}(k^{\prime})+\eta(p_{\eta})+N(p^{\prime}), \tag{49}\] \[\bar{\nu}_{l}(k)+N(p) \longrightarrow \bar{\nu}_{l}(k^{\prime})+\eta(p_{\eta})+N(p^{\prime}),\qquad\qquad N =n,p. \tag{50}\] The expression for the double differential scattering cross section \(\frac{d^{2}\sigma}{dQ^{2}dW}\) is given in Eq. (33), where the transition matrix element \(\mathcal{M}\), in the case of neutral current induced process, is given by \[\mathcal{M}=\frac{G_{F}}{\sqrt{2}}l_{\mu}J^{\mu}, \tag{51}\] with the leptonic current being the same as defined in Eq. (35). The structure of the total hadronic current \(J^{\mu}\) remains the same as in charged current reactions, i.e., \(J^{\mu}=J_{NR}^{\mu}\;{}^{NC}+J_{R}^{\mu}{}^{NC}\), however, the individual hadronic currents for the nonresonant Born terms and the resonance excitations are now expressed in terms of the neutral current form factors, which are discussed briefly in this section. For details, the readers are referred to Ref. [1]. The hadronic currents for the Born diagrams (s- and u-channels) with nucleon poles are given in Eq. (23), however, in the case of NC reactions \({\cal O}_{N}\) is replaced by \({\cal O}_{V}^{NC}\), the weak neutral current vertex, where \({\cal O}_{V}^{NC}=V^{\mu NC}-A^{\mu NC}\) with \(V^{\mu NC}\) and \(A^{\mu NC}\) defined in terms of the neutral current form factors as [1; 2]: \[V^{\mu NC} = \tilde{f}_{1}(Q^{2})\gamma^{\mu}+\frac{\tilde{f}_{2}(Q^{2})}{2M} i\sigma^{\mu\nu}q_{\nu}, \tag{52}\] \[A^{\mu NC} = \left[\tilde{g}_{1}(Q^{2})\gamma^{\mu}+\frac{\tilde{g}_{3}(Q^{2} )}{M}q^{\mu}\right]\gamma_{5}, \tag{53}\] where \(\tilde{f}_{1,2}(Q^{2})\) are the neutral current vector form factors and are expressed in terms of both the isovector and isoscalar components, and \(\tilde{g}_{1}(Q^{2})\) and \(\tilde{g}_{3}(Q^{2})\) are the axial-vector and pseudoscalar form factors. The vector form factors \(\tilde{f}_{1,2}(Q^{2})\) are expressed in terms of the Dirac (\(F_{1}^{p,n}(Q^{2})\)) and Pauli (\(F_{2}^{p,n}(Q^{2})\)) form factors of the nucleon, discussed in Section II.2, using the relationships, \[\tilde{f}_{i}^{p}(Q^{2}) = \left(\frac{1}{2}-2\sin^{2}\theta_{W}\right)F_{i}^{p}(Q^{2})- \frac{1}{2}F_{i}^{n}(Q^{2}), \tag{54}\] \[\tilde{f}_{i}^{n}(Q^{2}) = \left(\frac{1}{2}-2\sin^{2}\theta_{W}\right)F_{i}^{n}(Q^{2})- \frac{1}{2}F_{i}^{p}(Q^{2}),\qquad\qquad i=1,2 \tag{55}\] where \(\theta_{W}\) is the Weinberg angle. The axial-vector form factor, \(\tilde{g}_{1}(Q^{2})\) is expressed as \[\tilde{g}_{1}(Q^{2})=\pm\frac{1}{2}g_{1}(Q^{2}), \tag{56}\] where \(+\) (\(-\)) stands for proton (neutron) target and \(g_{1}(Q^{2})\) is defined in Eq. (39). The contribution of the pseudoscalar form factor to the transition matrix element is proportional to the lepton mass squared, and therefore does not contribute in the case of NC reactions. Next, we discuss the resonance excitation mechanism for the neutral current induced \(\eta\) production. The general expression of the hadronic current for the \(s-\) and \(u-\) channel resonance excitations and their subsequent decay to \(N\eta\) mode are given in Eq. (25), with the vertex factor \(\Gamma_{\frac{1}{2}\pm}^{\mu}\) defined in Eqs. (41) and (42) for the positive and negative parity resonances, respectively, with the modifications \(V_{\frac{1}{2}}^{\mu}\to V_{\frac{1}{2}}^{\mu NC}\) and \(A_{\frac{1}{2}}^{\mu}\to A_{\frac{1}{2}}^{\mu\ NC}\) in the case of NC induced reactions. 
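As a brief numerical aside on the nucleon-level neutral current couplings of Eqs. (54)-(56), the sketch below evaluates the weighting factors; the value of \(\sin^{2}\theta_{W}\simeq 0.231\) is an assumed illustrative input:

```python
SIN2_THETA_W = 0.231   # weak mixing angle; approximate value, assumed here for illustration

def nc_vector_ff(F_p, F_n, target):
    """Eqs. (54)-(55): neutral current vector form factors built from the
    electromagnetic Dirac/Pauli form factors of the proton and neutron."""
    if target == "p":
        return (0.5 - 2.0 * SIN2_THETA_W) * F_p - 0.5 * F_n
    if target == "n":
        return (0.5 - 2.0 * SIN2_THETA_W) * F_n - 0.5 * F_p
    raise ValueError("target must be 'p' or 'n'")

def nc_axial_ff(g1, target):
    """Eq. (56): g1_tilde = +g1/2 on a proton target, -g1/2 on a neutron target."""
    return 0.5 * g1 if target == "p" else -0.5 * g1

# Example at Q^2 = 0, where F1^p = 1, F1^n = 0 and g1(0) = g_A(0) = 1.267
print("f1_tilde(p) =", nc_vector_ff(1.0, 0.0, "p"))   # = 0.5 - 2 sin^2(theta_W) ~ 0.04
print("f1_tilde(n) =", nc_vector_ff(0.0, 1.0, "n"))   # = -0.5
print("g1_tilde(p) =", nc_axial_ff(1.267, "p"))
print("g1_tilde(n) =", nc_axial_ff(1.267, "n"))
```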
The vector and axial-vector vertex factors for the NC induced processes are given by \[V_{\frac{1}{2}}^{\mu NC} = \frac{f_{1}^{NC}(Q^{2})}{(2M)^{2}}\left(Q^{2}\gamma^{\mu}+ \not{q}q^{\mu}\right)+\frac{f_{2}^{NC}(Q^{2})}{2M}i\sigma^{\mu\alpha}q_{\alpha}, \tag{57}\] \[A_{\frac{1}{2}}^{\mu\ NC} = \left[g_{1}^{NC}(Q^{2})\gamma^{\mu}+\frac{g_{3}^{NC}(Q^{2})}{M}q^ {\mu}\right]\gamma_{5}, \tag{58}\] where \(f_{i}^{NC}(Q^{2})\) (\(i=1,2\)) are the neutral current \(N-R\) transition form factors which, in analogy with the nucleon form factors, are expressed in terms of the charged (\(F_{i}^{R+}(Q^{2})\)) and neutral (\(F_{i}^{R0}(Q^{2})\)) electromagnetic \(N-R\) transition form factors as: \[f_{i}^{NC}(Q^{2}) = \left(\frac{1}{2}-2\sin^{2}\theta_{W}\right)F_{i}^{R+}(Q^{2})- \frac{1}{2}F_{i}^{R0}(Q^{2}),\qquad\qquad\mbox{for proton target} \tag{59}\] \[f_{i}^{NC}(Q^{2}) = \left(\frac{1}{2}-2\sin^{2}\theta_{W}\right)F_{i}^{R0}(Q^{2})- \frac{1}{2}F_{i}^{R+}(Q^{2}),\qquad\qquad\mbox{for neutron target}. \tag{60}\] The axial-vector neutral current form factor \(g_{1}^{NC}(Q^{2})\) is expressed in terms of \(g_{1}^{CC}(Q^{2})\) as \[g_{1}^{NC}(Q^{2})=\pm\frac{1}{2}g_{1}^{CC}(Q^{2}), \tag{61}\] where \(+\) (\(-\)) stands for proton (neutron) target, \(g_{1}^{CC}(Q^{2})\) is defined in Eq. (47). ## IV Results and discussion ### Total and differential scattering cross sections In Fig. 4, we present the results for the total cross section \(\sigma\) vs. \(E_{\nu_{l}(\bar{\nu}_{l})}\) (\(l=e,\mu\)) for the neutrino and the antineutrino charged current induced \(\eta\) production processes. These results are presented both for electron and muon type (anti)neutrinos, by taking into account the contribution from \(S_{11}(1535)\) resonance only, and the full model, which includes contribution from the nonresonant Born terms as well as from the resonance excitations.
Figure 4: (Left panel) Total scattering cross section \(\sigma\) for \(\nu_{\mu}\) (solid line), \(\bar{\nu}_{\mu}\) (dash-dotted line), \(\nu_{e}\) (dashed line), and \(\bar{\nu}_{e}\) (double-dash-dotted line) CC induced \(\eta\) production off the nucleon target as a function of (anti)neutrino energy (\(E_{\nu}\)), using the full model that receives the contributions from the nonresonant Born terms as well as from the resonance diagrams including \(S_{11}(1535)\), \(S_{11}(1650)\), \(P_{11}(1710)\), \(P_{11}(1880)\) and \(S_{11}(1895)\). The lines with solid circles show the contribution only from \(S_{11}(1535)\) resonance. (Right panel) Same results but for \(E_{\nu}\) from threshold to 1.5 GeV.
Figure 5: Total scattering cross section for the CC induced \(\eta\) production i.e. \(\nu_{\mu}+n\longrightarrow\mu^{-}+\eta+p\) (solid line) and \(\bar{\nu}_{\mu}+p\longrightarrow\mu^{+}+\eta+n\) (dashed line) using only the contribution from \(S_{11}(1535)\) resonance. Dashed-dotted, double-dashed-dotted, and double-dotted-dashed lines, respectively, show the results for only vector contribution, only axial-vector contribution, and vector axial-vector interference of the weak hadronic current.
In the present work, we have considered five spin half resonances viz. \(S_{11}(1535)\), \(S_{11}(1650)\), \(P_{11}(1710)\), \(P_{11}(1880)\), and \(S_{11}(1895)\). It may be observed from the figure that there is a dominance of \(S_{11}(1535)\) resonance, which is more pronounced in the case of neutrinos than antineutrinos.
For example, at \(E_{\nu}=1.5\) GeV, the contribution of \(S_{11}(1535)\) is 98% (96%), which becomes 92% (89%) at \(E_{\nu}=3\) GeV for neutrino (antineutrino) induced processes. The total contribution of the nonresonant terms is less than 2% in the energy range \(E_{\nu_{\mu}}=1-3\) GeV for the (anti)neutrino induced \(\eta\) production processes. In view of the small contribution of the nonresonant terms, the assumption of neglecting \(\eta-\eta^{\prime}\) mixing in their evaluation is justified. In view of the accelerator experiments like MicroBooNE, T2K, SBND, etc., and the atmospheric experiments in the sub-GeV energy region, where there is considerable flux of (anti)neutrinos at lower energies (\(E_{\nu}\leq 1.5\) GeV), we have explicitly shown the dominance of \(S_{11}(1535)\) resonance, in the right panel of Fig. 4, by presenting the results of \(\sigma\) as a function of (anti)neutrino energy from threshold up to \(E_{\nu}=1.5\) GeV. The results obtained in our model are in agreement with the results reported by Nakamura et al. [16], in the case of the \(\nu_{\mu}+n\rightarrow\mu^{-}+p+\eta\) reaction, using the DCC model and also with our earlier work (see Fig. 11 of Ref. [17]). Since the \(\eta\) production cross sections are dominated by the \(S_{11}(1535)\) resonance, we have also considered individually the contribution from the vector and axial-vector components of the weak hadronic current due to the \(N-S_{11}(1535)\) transition. These results are shown in Fig. 5 for \(\nu_{\mu}\) and \(\bar{\nu}_{\mu}\) induced processes. It may be observed from the figure that the contribution of the vector part of the hadronic current dominates; for example, it contributes 76% at 1.5 GeV, which becomes 78% at 3 GeV. This dominance of the vector contribution was also reported in the very old calculation of Dombey [14], who finds the ratio of the vector to axial-vector contribution to be 2.7:1 at very high energies, which may be compared with our result of 6.25:1 at \(E_{\nu_{\mu}}=4\) GeV. Since we have fixed the parameters of the vector part of the weak hadronic current by fitting the photo- and electro-production data, any uncertainty in the cross section for the (anti)neutrino induced processes arises mainly due to the uncertainty in the axial-vector part of the weak hadronic current. Moreover, since the dominant contribution is from the vector current, the theoretical uncertainty in the total cross section due to the uncertainty in the axial-vector contribution is quite small. Quantitatively, to understand this uncertainty, we have varied the strong coupling \(g_{RN\pi}\), determined by the partial decay width of the \(R\to N\pi\) mode, within the range maximally allowed by the PDG and found that a 15% variation in the strong coupling strength results in a change of 3-5% in the neutrino induced cross section, which is found to be even smaller in the antineutrino induced charged current reactions. The other uncertainty is due to the axial dipole mass \(M_{A}\), the value of which is taken to be \(M_{A}=1.026\) GeV. A change of 10% in \(M_{A}\) results in a change of 4-6% in the cross section in the energy range of 1.5 GeV to 3 GeV. In Fig. 6, we have presented the results for \(\sigma\) vs. \(E_{\nu(\bar{\nu})}\) for the neutral current induced (anti)neutrino scattering off proton and neutron targets. These results are presented by taking the contribution from the full model, and from \(S_{11}(1535)\) resonance only.
It may be noticed that the total cross section from neutron target is more than the proton target both in the neutrino and antineutrino induced reactions. We find that for neutrino induced reaction from the neutron, at \(E_{\nu}=1.5\) GeV, the contribution from \(S_{11}(1535)\) resonance is about 95%, which becomes about 90% at \(E_{\nu}=3\) GeV. Similar observation for the \(S_{11}(1535)\) resonance dominance has been made in the case of neutrino induced \(\eta\) production from the proton target. However, in the case of antineutrino induced reaction on the proton target, the contribution from \(S_{11}(1535)\) resonance is about 88% at \(E_{\bar{\nu}}=1.5\) GeV, which becomes 84% at \(E_{\bar{\nu}}=3\) GeV, while in the case of antineutrino induced reaction off the neutron target, the contribution from \(S_{11}(1535)\) resonance is about 94% at \(E_{\bar{\nu}}=1.5\) GeV, which becomes 85% at \(E_{\bar{\nu}}=3\) GeV. Figure 8: \(Q^{2}\) distribution (left panel) and \(\eta\)-momentum distribution (right panel) for the charged current induced \(\nu_{\mu}+n\longrightarrow\mu^{-}+p+\eta\) process at \(E_{\nu_{\mu}(\bar{\nu}_{\mu})}=1\) GeV (solid line), 1.5 GeV (dashed-dotted line) and 4 GeV (dashed line) using the full model calculation. To understand the relative magnitude of the total cross section induced by the charged and neutral current reactions, in Fig. 7, we have shown the results for the ratio of the total cross section for the charged current induced \(\nu_{\mu}\) and \(\bar{\nu}_{\mu}\) scattering on neutron and proton targets, respectively, to the cross section of the corresponding neutral current reactions on the isoscalar nucleon target, i.e., \(\frac{\sigma_{\nu(\mu)p}+\sigma_{\nu(\mu)n}}{\sigma_{\nu}}\). It may be noticed that this ratio increases with energy until \(E_{\nu}=2\) GeV, after which it saturates to 8.25 (8.5) for neutrino (antineutrino) reactions. Therefore, a constant factor for \(\sigma(CC):\sigma(NC)\) ratio should not be considered for the (anti)neutrino experiments, where the average energy lies in the sub-GeV region like at MicroBooNE, T2K, etc. In Fig. 8, we have presented the results for the \(Q^{2}\) distribution (i.e. \(\frac{d\sigma}{dQ^{2}}\)) vs \(Q^{2}\) and \(p_{\eta}\) distribution (i.e. \(\frac{d\sigma}{dp_{\eta}}\)) vs. \(p_{\eta}\) using the full model, for the charged current \(\nu_{\mu}\) induced \(\eta\) production from the free neutron target at \(E_{\nu_{\mu}}=1\), 1.5 and 4 GeV. Notice that different scale factors for \(Q^{2}\) and \(p_{\eta}\) distributions have been used to depict the results at \(E_{\nu_{\mu}}=1\) GeV. ### Flux averaged cross section To explicitly see the \(Q^{2}\) and \(\eta\)-momentum distribution at MicroBooNE energies, we have obtained the flux averaged differential and total scattering cross sections by folding it over the MicroBooNE flux [35]. For this we define \[\left\langle\frac{d\sigma}{dQ^{2}}\right\rangle = \frac{\int\frac{d\sigma}{dQ^{2}}\Phi(E_{\nu})dE_{\nu}}{\int\Phi( E_{\nu})dE_{\nu}},\hskip 28.452756pt\left\langle\frac{d\sigma}{dp_{\eta}} \right\rangle=\frac{\int\frac{d\sigma}{dp_{\eta}}\Phi(E_{\nu})dE_{\nu}}{\int \Phi(E_{\nu})dE_{\nu}}, \tag{62}\] and \[\left\langle\sigma\right\rangle = \frac{\int\sigma(E_{\nu})\Phi(E_{\nu})dE_{\nu}}{\int\Phi(E_{\nu} )dE_{\nu}} \tag{63}\] where \(\Phi(E_{\nu})\) is the MicroBooNE \(\nu_{\mu}\) flux [35]. The results obtained for the flux averaged \(Q^{2}\) and \(\eta\)-momentum distributions (using Eq. 
(62)) for the charged current induced \(\eta\) production by \(\nu_{\mu}\) are shown in Fig. 9. Using Eq. (63), we obtain the charged current \(\nu_{\mu}\) induced total cross section averaged over the MicroBooNE flux to be \(\left\langle\sigma_{CC}\right\rangle=1.68\times 10^{-41}\) cm\({}^{2}\). We have also obtained the flux averaged cross section for the neutral current induced reactions \(\nu p\rightarrow\nu p\eta\) and \(\nu n\rightarrow\nu n\eta\), for which the results are found to be \(\left\langle\sigma_{NC(\nu p)}\right\rangle=0.18\times 10^{-41}\) cm\({}^{2}\) and \(\left\langle\sigma_{NC(\nu n)}\right\rangle=0.26\times 10^{-41}\) cm\({}^{2}\), respectively, which corresponds to an average NC cross section for an isoscalar nucleon target of \(\langle\sigma_{NC(\nu N)}\rangle=0.22\times 10^{-41}\) cm\({}^{2}\). As discussed earlier, the main source of uncertainty in the theoretical prediction of the (anti)neutrino cross section off the nucleon target is the uncertainty in the axial-vector form factor. This arises due to the large uncertainty in the branching fraction of the resonance to the \(N\pi\) decay mode, and the choice of the axial dipole mass \(M_{A}\). For example, a 15% variation from the central value in the strong coupling strength \(g_{RN\pi}\) for \(S_{11}(1535)\) resonance leads to an uncertainty of about 4% in the total flux averaged cross section, and a 10% variation in \(M_{A}\) leads to a variation of about 5% in the flux averaged cross section, which leads to a total uncertainty of \(0.11\times 10^{-41}\) cm\({}^{2}\) (\(0.014\times 10^{-41}\) cm\({}^{2}\)) in the flux averaged cross section for the charged (neutral) current induced \(\eta\) production from the free nucleon target. The MicroBooNE collaboration has reported \(\langle\sigma\rangle=(3.22\pm 0.84\pm 0.86)\times 10^{-41}\) cm\({}^{2}\)/nucleon on an argon nuclear target, where nuclear medium effects and the final state interaction of \(\eta\) mesons with the residual nucleus are also important. These effects, which have been shown to be important in the case of photo- and electro-production of \(\eta\) mesons [36; 37; 38], need to be taken into account. This work is in progress and will be reported in a future communication. ## V Summary and Conclusions We have studied the charged and neutral current \(\nu_{l}(\bar{\nu}_{l})\) \((l=e,\mu)\) induced \(\eta\) production off the nucleons and presented the results for the total scattering cross section \(\sigma(E_{\nu_{l}(\bar{\nu}_{l})})\), the \(Q^{2}\)-distribution \(\left(\frac{d\sigma}{dQ^{2}}\right)\) and the momentum distribution \(\left(\frac{d\sigma}{dp_{\eta}}\right)\) of the \(\eta\) mesons, in a model in which the contributions from the nonresonant Born terms and the resonant terms are calculated in an effective Lagrangian approach. We have applied this model to obtain the flux averaged differential and total scattering cross sections for the MicroBooNE \(\nu_{\mu}\) flux. We find that: 1. Weak charged current production of \(\eta\) mesons induced by \(\nu_{l}\) and \(\bar{\nu}_{l}\) (\(l=e,\mu\)) from the free nucleon target is dominated by the excitation of the \(S_{11}(1535)\) resonance and its subsequent decay into \(\eta\) through the \(S_{11}(1535)\to N\eta\) decay, similar to the observations made in the case of electromagnetic production of \(\eta\) mesons. 2. This dominance of the \(S_{11}(1535)\) resonance contribution in the weak production of \(\eta\) occurs in the charged as well as the neutral current induced reactions.
However, at higher neutrino energies (\(E_{\nu}>2\) GeV), the contribution from the higher resonances becomes non-negligible. 3. The charged as well as the neutral current production of the \(\eta\) meson is dominated by the vector current contribution. 4. The weak charged current production cross section of the \(\eta\) meson is larger for the neutron target than for the proton target. This is expected because \(\eta\) production from the neutron is induced by neutrinos, while on the proton target it is induced by antineutrinos. 5. In the case of neutral current induced \(\eta\) production, the cross section is larger from the neutron as compared to the proton target. This is due to the isospin structure of the neutral current in the standard model. 6. The charged current production cross section of the \(\eta\) meson is larger than the neutral current production cross section. The enhancement factor is neutrino energy dependent. For example, this ratio is 4:1 at \(E_{\nu_{\mu}}=1\) GeV and becomes 8:1 at \(E_{\nu_{\mu}}=2\) GeV. A similar observation has also been made in the case of antineutrino induced reactions. 7. The total scattering cross section folded over the MicroBooNE \(\nu_{\mu}\) flux is obtained to be \(1.68\times 10^{-41}\) cm\({}^{2}\) and \(0.22\times 10^{-41}\) cm\({}^{2}\), respectively, for the charged and neutral current induced \(\eta\) production from the free nucleon. To conclude, the results presented in this work for the neutral and charged current induced (anti)neutrino scattering cross section from the free nucleon, the ratio of the cross sections for the charged current to the neutral current, and the flux averaged total cross section \(\langle\sigma\rangle\) and differential cross sections \(\langle\frac{d\sigma}{dQ^{2}}\rangle\) and \(\langle\frac{d\sigma}{dp_{\eta}}\rangle\) integrated over the MicroBooNE \(\nu_{\mu}\) spectrum may be useful in the future analysis of MicroBooNE as well as other accelerator and atmospheric neutrino experiments like T2K, NOvA, DUNE, HyperK, etc. being performed in the few GeV energy region. ## Acknowledgements We are thankful to D. Caratelli for many useful discussions regarding the \(\eta\) production analysis being done at the MicroBooNE experiment. AF and MSA are thankful to the Department of Science and Technology (DST), Government of India for providing financial assistance under Grant No. SR/MF/PS-01/2016-AMU.
2310.03344
Generalized Benders Decomposition with Continual Learning for Hybrid Model Predictive Control in Dynamic Environment
Hybrid model predictive control (MPC) with both continuous and discrete variables is widely applicable to robotic control tasks, especially those involving contact with the environment. Due to the combinatorial complexity, the solving speed of hybrid MPC can be insufficient for real-time applications. In this paper, we propose a hybrid MPC solver based on Generalized Benders Decomposition (GBD) with continual learning. The algorithm accumulates cutting planes from the invariant dual space of the subproblems. After a short cold-start phase, the accumulated cuts provide warm-starts for the new problem instances to increase the solving speed. Despite the randomly changing environment that the controller is unprepared for, the solving speed is maintained. We verify our solver by controlling a cart-pole system with randomly moving soft contact walls and show that the solving speed is 2-3 times faster than the off-the-shelf solver Gurobi.
Xuan Lin
2023-10-05T06:50:11Z
http://arxiv.org/abs/2310.03344v2
Generalized Benders Decomposition with Continual Learning for Hybrid Model Predictive Control in Dynamic Environment ###### Abstract Hybrid model predictive control (MPC) with both continuous and discrete variables is widely applicable to robotic control tasks, especially those involving contact with the environment. Due to the combinatorial complexity, the solving speed of hybrid MPC can be insufficient for real-time applications. In this paper, we propose a hybrid MPC solver based on Generalized Benders Decomposition (GBD) with continual learning. The algorithm accumulates cutting planes from the invariant dual space of the subproblems. After a short cold-start phase, the accumulated cuts provide warm-starts for the new problem instances to increase the solving speed. Despite the randomly changing environment that the controller is unprepared for, the solving speed is maintained. We verify our solver by controlling a cart-pole system with randomly moving soft contact walls and show that the solving speed is 2-3 times faster than the off-the-shelf solver Gurobi. ## I Introduction Hybrid model predictive control (MPC) with both continuous and discrete variables is widely applicable to robotic control tasks, especially those involving contact with the environment. However, the discrete variables are oftentimes computed offline for hybrid MPC [1, 2, 3, 4] due to their combinatorial complexity. These include gaits for legged robots and contact sequences for manipulation tasks. Several models with mixed discrete-continuous variables were proposed including mixed-logic dynamic systems (MLDs) [5], linear complementary models (LCs) [6], and piece-wise affine systems (PWAs) [7]. Their conditional equivalences were established in [8] (for example, LCs are equivalent to MLDs provided that the complementary variables are bounded). Despite several recent works that solve MPC on these systems [9, 10, 11], the problems addressed in those papers are demonstrated in static environments. In real robotic applications, it is beneficial to further increase the solving speed to reduce the control error since the models are never accurate. In this paper, we propose a novel hybrid MPC solver based on Generalized Benders decomposition (GBD) [12] to solve problems including MLD constraints under changing environments. Benders decomposition separates the problem into a master problem, which solves a subset of the variables named the complicating variables, and a subproblem, which solves the rest of the variables. It uses a delayed constraint generation technique that builds up representations of the feasible region and the optimal cost function for the complicating variables inside the master problem. These representations are constructed as cutting planes based on the dual solutions of the subproblem. The key idea we propose is to accumulate the cuts as more problem instances are solved, since the dual feasible set is invariant under the changing environments. The accumulated cuts then provide warm-starts for new problem instances. As a result, the solving speed keeps on increasing. In the best case, GBD only needs to solve one master problem and one subproblem to get a globally optimal solution. The proposed solver is compared against recent works on warm-started Branch-and-Bound solvers and the commercial solver Gurobi. We list the contributions below: 1.
We propose a novel solver based on Generalized Benders Decomposition for hybrid MPCs, where the accumulated cuts provide warm-starts for the new problem instances leading to faster solving speeds despite changing environments. 2. We tested our solver on controlling a cart-pole system with randomly moving soft contact walls, a more challenging test than cart-pole with static walls prevail in previous literature [9, 11, 13, 14]. We show that our GBD solver runs faster on average than warm-started Branch and Bound solvers and the off-the-shelf solver Gurobi. _Notations_ Vectors are bold lowercase; matrices are bold uppercase; sets are script or italicized uppercase. The real number set is \(\mathbb{R}\). For \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}\), \(\mathbf{x}\leq\mathbf{y}\) indicates element-wise inequality. For \(\mathbf{A}\in\mathbb{R}^{n\times n}\) and \(\mathbf{B}\in\mathbb{R}^{m\times m}\), \(\text{diag}(\mathbf{A},\mathbf{B})\in\mathbb{R}^{(n+m)\times(n+m)}\) denotes the block diagonal matrix with diagonal blocks \(\mathbf{A}\) and \(\mathbf{B}\), and zeros otherwise. \(\mathbf{I}_{n}\) denotes an identity matrix of dimension \(n\). The open ball \(B_{\epsilon}(\mathbf{p})\) denotes \(\{\mathbf{q}:||\mathbf{p}-\mathbf{q}||<\epsilon\}\). ## II Related Works ### _Mixed-Logic Dynamic Models (MLDs)_ In [5], the authors proposed mixed-logic dynamics systems as a general modeling tool for control systems incorporating physical laws, logic rules and operating constraints. MLD is rich enough to incorporate dynamic systems such as finite state machines, nonlinear systems that can be written as piece-wise linear form. MLDs have been widely used to model energy storage systems [15], transportation systems [16], temperature management systems [17], to name a few. Recently, MLDs and its equivalent models such as linear complementary models are introduced into the robotics locomotion and manipulation community to model real-time control involving contacts [9, 10, 11, 18]. MLDs incorporate states and inputs that can be a mixture of continuous and discrete variables, and quadratic objective functions. Since MLDs incorporate a mixture of discrete and continuous variables, solving it online for fast motion planning and model-predictive control demands an efficient MIQP solver. Several methods have been proposed including explicit MPC [19], Branch-and-Bound [11], ADMM [9], Lagrange relaxation [20], elastic modes [21], and generalized Benders decomposition [22]. We briefly go over them below. Explicit MPC solves the problem completely or partially offline, such that the solver only picks out solutions online. [23] building polyhedron regions of invariant optimal control function for LQR problems. Explicitly building polytope regions is computationally expensive hence limited to simple problems. [24] stored all solutions and used a K-nearest-neighbor approach to pick out binary solutions. [19] explored a partial explicit approach that combines offline solved binary solutions with online convex computation. Related to this work is the library building approach where the problem is solved offline and recorded into a dataset. The solver then picks out data points and uses them as warm-starts. The online data selection approach can be a K-nearest neighbor classifier [25, 26], or a learned neural-network [18]. However, this approach generally has difficulty facing out-of-distribution scenarios from the dataset. Except for [11], the works mentioned above only use offline optimal solutions. 
The infeasible or suboptimal solutions are not used. Branch and bound is a common approach to solve mixed-integer programs used by off-the-shelf solvers such as Gurobi. This approach first relaxes the integer programming problem into a convex programming on the root node where all binaries are extended to continuous variables between zero and one. It then fixes the binary variables one-by-one, and generate branches from the root to a solution on the leaf node where all binary variables are fixed. Previous studies tried to use information from previous solve to warm-start the new solve online. [27] studied propagating the path from root to leaf from the previous iteration to the next one for warm-start, such that a number of parent nodes do not need to be resolved. [11] further explored propagating complete B&B tree to warm-start the next problem. Even with the proposed techniques, B&B can still be slow as it has too many subproblems to keep track of, particularly under noise and model inaccuracies. Another approach to solve mixed-integer programs is through alternating direction method of multipliers (ADMM). ADMM solves two or multiple problems iteratively until they reach a consensus through a penalty in the objective function. Computer vision and operation research community has used ADMM to solve large scale MIP problems [28]. In the robotics community, ADMM has been implemented for solving complementary control problems [9] at a fast speed. Despite [9] does not discuss it, ADMM allows for easy warm-start [29], using previous solution to accelerate the solving of the next solution. On the other hand, ADMM does not have convergence guarantee for MIP problems unless special assumptions are made such as [28]. ### _Benders Decomposition_ Benders decomposition [30] can be regarded as Danzig-Wolfe decomposition [31] applied to the dual. Both of them use delayed column or constraint generation techniques. Benders decomposition identifies the complicating variables and defines a subproblem such that those variables are fixed. For this technique to work well, the subproblem should be much easier to solve than the complete problem. For mixed-integer programming, the subproblem is the convex part with complicating variables being the discrete variables [22]. As subproblems are solved, cutting planes are added to the master problem to build the feasible set and optimal cost function for the subproblem. Benders decomposition was originally proposed to solve linear duals e.g. MILPs. In [12], the author proposed Generalized Benders decomposition (GBD) that extends the theory to nonlinear duals. Several authors have investigated solving MIQPs using GBD [32, 33, 22, 34]. In [35, 36], the authors propose logic-based Benders decomposition which further generalized the theory to so-called inference dual, which is a logic combination of propositions. This method extends the application of BD to planning and scheduling problems such as satisfiability of 0-1 programming problems whose dual is not a traditional linear or nonlinear programming problem. Using this idea, [37] proposed a formulation of combinatorial Benders feasibility cut for MILPs that does not depend on the big-M constant. As Benders decomposition involves master-subproblem structure, it suits the large-scale distributed problems, or problems with a large number of possible scenarios like stochastic programs [38]. 
For applications such as distributed control [39], the subproblems can be decoupled into multiple smaller-scale problems and solved in parallel to reduce the computation demand. As pointed out by the review paper [40], many authors report over 90% solving time spent on the master problem. Therefore, a number of previous work investigated on how to speed up the master problem, or use its results more efficiently. Examples include local branching heuristics [41], heuristic master problem solutions [42], generating pareto-optimal cuts [43], cut initialization [44], valid inequalities [45], etc. See [40] for a comprehensive review of these methods. [46] points out that classic Benders feasibility cuts do not carry objective function value leading to convergence issues. They proposed additional feasibility cuts to resolve this issue. GBD can also be used to learn objective functions. This has been applied to dual dynamic programming for MPC over long-term problems [47, 48]. Previous work [49, 50] uses Benders cuts to construct lower bounds for infinitely long objective functions using Bellman operators for both nonlinear and mixed-integer linear systems. Through learning Benders cuts from offline dataset, one avoids hand-tuning terminal cost of objective function. Despite a more optimal objective being learned, the online solving speed of MIP is invariant of objective functions. ## III Problem Model We develop MPC control laws for Mixed Logic Dynamic (MLD) systems as proposed by [5]: \[\mathbf{x}_{k+1}=\mathbf{E}\mathbf{x}_{k}+\mathbf{F}\mathbf{u}_{k}+\mathbf{G}\mathbf{\delta}_{k}+ \mathbf{n}_{k} \tag{1a}\] \[\mathbf{H}_{1}\mathbf{x}_{k}+\mathbf{H}_{2}\mathbf{u}_{k}+\mathbf{H}_{3}\mathbf{\delta}_{k }\leq\mathbf{h}(\mathbf{\theta}) \tag{1b}\] At time \(k\), \(\mathbf{x}_{k}\in\mathbb{R}^{n_{x}}\) is the continuous state. \(\mathbf{u}_{k}\in\mathbb{R}^{n_{u}}\) denotes the continuous input. \(\mathbf{\delta}_{k}\in\{0,1\}^{n_{\delta}}\) is the binary input. \(\mathbf{n}_{k}\in\mathbb{R}^{n_{x}}\) is the disturbance input. Matrices representing system dynamics are \(\mathbf{E}\in\mathbb{R}^{n_{x}\times n_{x}}\), \(\mathbf{F}\in\mathbb{R}^{n_{x}\times n_{u}}\), \(\mathbf{G}\in\mathbb{R}^{n_{x}\times n_{\delta}}\). \(\mathbf{H}_{1}\in\mathbb{R}^{n_{c}\times n_{x}}\). \(\mathbf{H}_{2}\in\mathbb{R}^{n_{c}\times n_{u}}\). \(\mathbf{H}_{3}\in\mathbb{R}^{n_{c}\times n_{\delta}}\). The submatrices are of appropriate dimensions. The right-hand side of the constraint (1b) is \(\mathbf{h}\in\mathbb{R}^{n_{x}}\) where \(n_{c}\) is the number of inequality constraints. \(\mathbf{\theta}\) parameterizes \(\mathbf{h}\) to represent the changing environments. We assume that matrices \(\mathbf{E}\), \(\mathbf{F}\), \(\mathbf{G}\), \(\mathbf{H}_{1}\), \(\mathbf{H}_{2}\), \(\mathbf{H}_{3}\) are independent of \(\mathbf{\delta}_{k}\) and \(\mathbf{\theta}\). This makes \(\mathbf{\delta}_{k}\) and \(\mathbf{\theta}\) as inputs to the system while the inherent physics of the system is invariant. _Remark_. The goal of parameter \(\mathbf{\theta}\) is to represent a sudden change in the environment that the controller is uninformed of and cannot prepare for it down the MPC horizon. Note this is different from the time-varying system investigated by [11] where the controller is well-informed of the change in advance (\(\mathbf{\theta}_{t},t=0,...,T\) is known). We formulate a hybrid MPC for this system. The MPC formulation solves an optimization problem to get a sequence of control inputs. 
However, only the first one is used. It then takes the sensor feedback and resolves the problem. If this could be done fast enough on the hardware, the robot can reject disturbances. The MPC formulation is: \[\underset{\mathbf{x}_{k}\in X_{k},\ \mathbf{u}_{k},\ \mathbf{\delta}_{k}}{ \text{minimize}} \sum_{k=0}^{N-1}\mathbf{x}_{k}^{T}\mathbf{Q}_{k}\mathbf{x}_{k}+\mathbf{u}_{k}^{T }\mathbf{R}_{k}\mathbf{u}_{k}+\mathbf{x}_{N}^{T}\mathbf{Q}_{N}\mathbf{x}_{N}\] (7) s.t. \[\mathbf{x}_{0}=\mathbf{x}_{ini}\] \[\mathbf{x}_{k+1}=\mathbf{E}\mathbf{x}_{k}+\mathbf{F}\mathbf{u}_{k}+\mathbf{G}\mathbf{\delta}_{k}\] \[\mathbf{H}_{1}\mathbf{x}_{k}+\mathbf{H}_{2}\mathbf{u}_{k}+\mathbf{H}_{3}\mathbf{\delta}_{k }\leq\mathbf{h}(\mathbf{\theta})\] \[\mathbf{\delta}_{k}\in\{0,1\}^{n_{\delta}},\ k=0,...,N-1\] The matrices \(\mathbf{Q}_{k}\) and \(\mathbf{Q}_{N}\) are positive definite matrices. \(X_{k}\) is the domain of \(\mathbf{x}_{k}\). The system is written into a more compact form: \[\underset{\mathbf{x}\in X,\ \mathbf{\delta}}{\text{minimize}} \mathbf{x}^{T}\mathbf{Q}\mathbf{x}\] (8) s.t. \[\mathbf{A}\mathbf{x}=\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta})\] \[\mathbf{C}\mathbf{x}\leq\mathbf{d}(\mathbf{\theta},\mathbf{\delta})\] \[\mathbf{\delta}_{k}\in\{0,1\}^{n_{\delta}}\] Let \(n_{xu}=n_{x}+n_{u}\). The matrices and vectors have structures: \[\mathbf{x}=\begin{bmatrix}\mathbf{x}_{0}^{T}&\mathbf{u}_{0}^{T}&\cdots&\mathbf{x}_{N-1}^{T}& \mathbf{u}_{N-1}^{T}&\mathbf{x}_{N}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{Nn_{xu}+n_{x}} \tag{9}\] \[\mathbf{\delta}=\begin{bmatrix}\mathbf{\delta}_{0}^{T}&\cdots&\mathbf{\delta}_{N}^{T} \end{bmatrix}^{T}\in\mathbb{R}^{(N+1)n_{\delta}} \tag{10}\] Hence domain of \(\mathbf{x}\) is \(X=X_{k}\times\mathbb{R}^{n_{u}}\times\cdots\times\mathbb{R}^{n_{u}}\times X_{k}\). \[\mathbf{A}=\begin{bmatrix}\mathbf{I}_{n_{x}}&\mathbf{0}\\ -\mathbf{E}&-\mathbf{F}&\mathbf{I}_{n_{x}}&\mathbf{0}\\ &-\mathbf{E}&-\mathbf{F}&\ddots\\ &&\ddots&\mathbf{I}_{n_{x}}&\mathbf{0}\\ &&-\mathbf{E}&-\mathbf{F}&\mathbf{I}_{n_{x}}\end{bmatrix} \tag{11}\] \[\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta})=\begin{bmatrix}\mathbf{x}_{ini}^{T}&(\mathbf{G}\mathbf{ \delta}_{0})^{T}&\cdots&(\mathbf{G}\mathbf{\delta}_{N-1})^{T}\end{bmatrix}^{T} \tag{12}\] \[\in\mathbb{R}^{(N+1)n_{x}}\] \[\mathbf{C}= \begin{bmatrix}\mathbf{H}_{1}&\mathbf{H}_{2}\\ &\mathbf{H}_{1}&\mathbf{H}_{2}\\ &&\ddots\\ &&&\mathbf{H}_{1}&\mathbf{H}_{2}&\mathbf{0}\end{bmatrix} \tag{13}\] \[\in\mathbb{R}^{Nn_{c}\times(Nn_{xu}+n_{x})}\] \[\mathbf{d}(\mathbf{\theta},\mathbf{\delta})=\begin{bmatrix}(\mathbf{h}(\mathbf{\theta})-\mathbf{H}_{3 }\mathbf{\delta}_{0})^{T}&\cdots&(\mathbf{h}(\mathbf{\theta})-\mathbf{H}_{3}\mathbf{\delta}_{N-1 })^{T}\end{bmatrix}^{T} \tag{14}\] \[\in\mathbb{R}^{Nn_{c}}\] \[\mathbf{Q}=\text{diag}(\mathbf{Q}_{k},\mathbf{R}_{k})\in\mathbb{R}^{(Nn_{xu}+n_{x}) \times(Nn_{xu}+n_{x})} \tag{15}\] Problem (2) is an MIQP and can be solved through an off-the-shelf mixed-integer convex programming solver based on Branch and Bound, such as Gurobi. However, Branch and Bound algorithms keep track of a large number of subproblems that relax the binary constraints in different ways. Despite the MPC warm-start scheme such as shifting contact sequence can be used [11], many subproblems still need to be solved for a new problem instance. For applications that require extremely fast solving speed, this can be insufficient. In this paper, we propose to use generalized Benders decomposition to solve the problem several times faster than Gurobi. 
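For reference, the block structure of Eqs. (9)-(15) can be assembled in a few lines. The numpy sketch below is illustrative (the function and variable names are ours) and is not the implementation used in the experiments.

```python
import numpy as np

def build_compact_mpc(E, F, G, H1, H2, H3, h, Q_k, R_k, Q_N, N):
    """Assemble A, C, Q of Eqs. (11), (13), (15) and the right-hand-side maps
    b(x_ini, delta), d(theta, delta) of Eqs. (12), (14).
    delta is expected as an (N, n_delta) array of binary inputs; h is a
    callable h(theta) returning the n_c-dimensional right-hand side."""
    nx, nu, nc = E.shape[0], F.shape[1], H1.shape[0]
    nxu = nx + nu
    nvar = N * nxu + nx                      # dimension of x in Eq. (9)

    A = np.zeros(((N + 1) * nx, nvar))       # dynamics + initial condition, Eq. (11)
    A[:nx, :nx] = np.eye(nx)                 # x_0 = x_ini
    for k in range(N):
        r, c = (k + 1) * nx, k * nxu
        A[r:r + nx, c:c + nx] = -E
        A[r:r + nx, c + nx:c + nxu] = -F
        A[r:r + nx, c + nxu:c + nxu + nx] = np.eye(nx)

    C = np.zeros((N * nc, nvar))             # path constraints, Eq. (13)
    for k in range(N):
        C[k * nc:(k + 1) * nc, k * nxu:k * nxu + nx] = H1
        C[k * nc:(k + 1) * nc, k * nxu + nx:(k + 1) * nxu] = H2

    stage = np.block([[Q_k, np.zeros((nx, nu))],
                      [np.zeros((nu, nx)), R_k]])
    Q = np.block([[np.kron(np.eye(N), stage), np.zeros((N * nxu, nx))],
                  [np.zeros((nx, N * nxu)), Q_N]])     # Eq. (15) with terminal cost

    b = lambda x_ini, delta: np.concatenate(
        [x_ini] + [G @ delta[k] for k in range(N)])    # Eq. (12)
    d = lambda theta, delta: np.concatenate(
        [h(theta) - H3 @ delta[k] for k in range(N)])  # Eq. (14)
    return A, C, Q, b, d
```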
## IV Benders decomposition formulation In this section, we apply Benders decomposition to our hybrid MPC problem (III). Benders decomposition deals with the problem of the following form: \[\underset{\mathbf{x},\mathbf{y}}{\text{minimize}} f(\mathbf{x},\mathbf{y})\] (16) s.t. \[\mathbf{G}(\mathbf{x},\mathbf{y})\leq 0\] \[\mathbf{x}\in X,\mathbf{y}\in Y\] where \(\mathbf{y}\) is a vector of complicating variables. If \(\mathbf{y}\) is fixed, the optimization problem is much easier to solve. Benders decomposition partitions the problem into a master problem by projecting onto the \(\mathbf{y}\) space: \[\underset{\mathbf{y}}{\text{minimize}} \quad v(\mathbf{y})\] (10) s.t. \[\quad\mathbf{y}\in Y\cap V\] The function \(v(\mathbf{y})\) is defined to provide the best objective function with fixed complicating variable \(\mathbf{y}\): \[v(\mathbf{y})=\underset{\mathbf{x}}{\text{infimum}} \quad f(\mathbf{x},\mathbf{y})\] (11) s.t. \[\quad\mathbf{G}(\mathbf{x},\mathbf{y})\leq 0\] \[\quad\mathbf{x}\in X\] \(V\) contains all \(\mathbf{y}\)'s such that problem (11) is feasible: \[V=\{\mathbf{y}:\mathbf{G}(\mathbf{x},\mathbf{y})\leq 0,\;\;\exists\mathbf{x}\in X\} \tag{12}\] For our hybrid MPC, we define the complicating variable \(\mathbf{y}\) as the binary variable \(\mathbf{\delta}\), the initial condition \(\mathbf{x}_{ini}\), and the parameter \(\mathbf{\theta}\). The subproblem is: \[v(\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta})=\underset{\mathbf{x}\in X}{ \text{minimize}} \quad\mathbf{x}^{T}\mathbf{Q}\mathbf{x}\] (13) s.t. \[\quad\mathbf{A}\mathbf{x}=\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta})\] \[\quad\mathbf{C}\mathbf{x}\leq\mathbf{d}(\mathbf{\theta},\mathbf{\delta})\] Given fixed \(\mathbf{x}_{ini}\), \(\mathbf{\theta}\), \(\mathbf{\delta}\), (13) is a quadratic programming and can be solved through off-the-shelf QP solvers. The master problem is: \[\underset{\mathbf{\delta}}{\text{minimize}} \quad v(\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta})\] s.t. \[\quad\mathbf{\delta}_{k}\in\{0,1\} \tag{14}\] \[\quad\mathbf{\delta}\in V\coloneqq\{\mathbf{\delta}:\mathbf{A}\mathbf{x}=\mathbf{b}( \mathbf{x}_{ini},\mathbf{\delta})\] \[\quad\mathbf{C}\mathbf{x}\leq\mathbf{d}(\mathbf{\theta},\mathbf{\delta}),\exists\mathbf{x }\in X\}\] The essential issue with solving (14) is that function \(v(\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta})\) and set \(V\) are only implicitly known through their definitions. Benders decomposition is a process that iteratively solves problem (10) and (11) to build approximations of \(v\) and \(V\) in the problem (10). We will constantly work with the dual of problem (13), given the advantage that the dual is invariant with respect to the complicating variables. We derive the dual for reference. Recall the definition of Lagrangian for the subproblem (13): \[\mathcal{L}(\mathbf{x},\mathbf{\nu},\mathbf{\lambda};\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta}) =f_{obj}(\mathbf{x})+\mathbf{\nu}^{T}(\mathbf{A}\mathbf{x}-\mathbf{b}(\mathbf{x}_{ini}, \mathbf{\delta})) \tag{15}\] \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+ \mathbf{\lambda}^{T}(\mathbf{C}\mathbf{x}-\mathbf{d}(\mathbf{\theta},\mathbf{\delta}))\] where \(\mathbf{\nu}\in\mathbb{R}^{(N+1)n_{x}}\), \(\mathbf{\lambda}\in\mathbb{R}^{Nn_{c}}\) are the dual variables associated with \(\mathbf{A}\mathbf{x}=\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta})\), \(\mathbf{C}\mathbf{x}\leq\mathbf{d}(\mathbf{\theta},\mathbf{\delta})\), respectively. 
The Lagrange dual function \(g\) is: \[g(\mathbf{\nu},\mathbf{\lambda};\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta})=\underset{ \mathbf{x}\in X}{\text{minimize}} \quad\mathcal{L}(\mathbf{x},\mathbf{\nu},\mathbf{\lambda};\mathbf{x}_{ini},\mathbf{ \theta},\mathbf{\delta}) \tag{16}\] where \(f_{obj}=\mathbf{x}^{T}\mathbf{Q}\mathbf{x}\). Let \(\mathbf{x}^{0}\) be the unconstrained minimizer of \(\mathcal{L}\) (in general, \(\mathbf{x}^{0}\) is different from the optimal primal solution \(\mathbf{x}^{*}\)). By taking the derivative, we have: \[\mathbf{x}^{0}=-\frac{1}{2}\mathbf{Q}^{-1}(\mathbf{A}^{T}\mathbf{\nu}+\mathbf{C}^{T}\mathbf{\lambda}) \tag{17}\] Hence the Lagrange dual problem is: \[\underset{\mathbf{\nu},\;\mathbf{\lambda}}{\text{maximize}} -\frac{1}{4}||\mathbf{A}^{T}\mathbf{\nu}+\mathbf{C}^{T}\mathbf{\lambda}||_{\mathbf{Q} ^{-1}}^{2} \tag{18}\] \[-\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta})^{T}\mathbf{\nu}-\mathbf{d}(\mathbf{\theta}, \mathbf{\delta})^{T}\mathbf{\lambda}\] s.t. \[\quad\mathbf{\lambda}\geq\mathbf{0}\] As the feasibility of (13) is independent of the objective function, (13) is feasible if and only if the following problem is feasible: \[\underset{\mathbf{x}\in X}{\text{minimize}} \quad\mathbf{0}\] (19) s.t. \[\quad\mathbf{A}\mathbf{x}=\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta})\] \[\quad\mathbf{C}\mathbf{x}\leq\mathbf{d}(\mathbf{\theta},\mathbf{\delta})\] Problem (19) has the dual: \[\underset{\mathbf{\nu},\;\mathbf{\lambda}}{\text{maximize}} -\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta})^{T}\mathbf{\nu}-\mathbf{d}(\mathbf{\theta}, \mathbf{\delta})^{T}\mathbf{\lambda}\] (20) s.t. \[\quad\mathbf{A}^{T}\mathbf{\nu}+\mathbf{C}^{T}\mathbf{\lambda}=\mathbf{0}\] \[\quad\mathbf{\lambda}\geq\mathbf{0}\] ### _Feasibility cuts_ If at iteration \(p\) the subproblem is infeasible under the given \(\mathbf{\delta}_{p}\), this \(\mathbf{\delta}_{p}\) needs to be removed from the master problem. This can be achieved by adding a cutting plane. Since the problem (III) is linearly constrained, the Farkas certificates can be used to add feasibility cuts. They can be discovered by solving (19) with a dual simplex solver ([51], Chapter 6.5). The theorem of alternatives for (19) is: **Lemma 1**.: _Given \(\mathbf{A}\in\mathbb{R}^{l\times n}\), \(\mathbf{b}\in\mathbb{R}^{l}\), \(\mathbf{C}\in\mathbb{R}^{m\times n}\), \(\mathbf{d}\in\mathbb{R}^{m}\), exactly one of the following statements is true:_ 1. _There exists an_ \(\mathbf{x}\in\mathbb{R}^{n}\) _that satisfies_ \(\mathbf{A}\mathbf{x}=\mathbf{b},\mathbf{C}\mathbf{x}\leq\mathbf{d}\)_._ 2. _There exist_ \(\mathbf{y}\in\mathbb{R}^{l}\)_,_ \(\mathbf{z}\in\mathbb{R}^{m}\) _that satisfy_ \(\mathbf{z}\geq\mathbf{0}\)_,_ \(\mathbf{A}^{T}\mathbf{y}+\mathbf{C}^{T}\mathbf{z}=\mathbf{0}\)_,_ \(\mathbf{b}^{T}\mathbf{y}+\mathbf{d}^{T}\mathbf{z}<0\)_._ Proof.: See Appendix I. If (19) is infeasible for \(\mathbf{\delta}_{p}\), we can add a cutting plane to the master problem to remove a set of \(\mathbf{\delta}\)'s including \(\mathbf{\delta}_{p}\).
Farkas lemma guarantees the existence of \(\tilde{\mathbf{\nu}}_{p}\in\mathbb{R}^{(N+1)n_{x}}\), \(\tilde{\mathbf{\lambda}}_{p}\in\mathbb{R}^{Nn_{c}}\) such that: \[\tilde{\mathbf{\lambda}}_{p}\geq\mathbf{0} \tag{21}\] \[\quad\mathbf{A}^{T}\tilde{\mathbf{\nu}}_{p}+\mathbf{C}^{T}\tilde{\mathbf{\lambda }}_{p}=\mathbf{0}\] \[\quad\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta}_{p})^{T}\tilde{\mathbf{\nu}}_{p}+ \mathbf{d}(\mathbf{\theta},\mathbf{\delta}_{p})^{T}\tilde{\mathbf{\lambda}}_{p}<0\] To prevent the master problem from giving this \(\mathbf{\delta}_{p}\) again, a constraint that defeats the Farkas infeasibility proof is added to the master problem: \[\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta})^{T}\tilde{\mathbf{\nu}}_{p}+\mathbf{d}(\mathbf{\theta}, \mathbf{\delta})^{T}\tilde{\mathbf{\lambda}}_{p}\geq 0 \tag{22}\] We note that this cutting plane will not remove any feasible \(\mathbf{\delta}\) from the subproblem. We state this as a lemma. **Lemma 2**.: _For given \(\mathbf{x}_{ini}\) and \(\mathbf{\theta}\), any \(\mathbf{\delta}\) that contradicts (22) proves infeasibility for (19)._ Proof.: As \(\tilde{\mathbf{\nu}}_{p}\), \(\tilde{\mathbf{\lambda}}_{p}\) discovered by the dual simplex solver satisfy the first two conditions of (21), they are feasible for the dual problem (20). Let \(a\in\mathbb{R}^{+}\) be an arbitrary positive value; then \((a\tilde{\mathbf{\nu}}_{p},a\tilde{\mathbf{\lambda}}_{p})\) is also feasible for (20). Let \(\mathbf{\delta}\) be any value that contradicts (22); then \(-a\tilde{\mathbf{\nu}}_{p}^{T}\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta})-a\tilde{\mathbf{\lambda} }_{p}^{T}\mathbf{d}(\mathbf{\theta},\mathbf{\delta})\rightarrow+\infty\) as \(a\rightarrow+\infty\), hence the dual problem is unbounded, which proves that the primal problem (19) is infeasible (from Corollary 4.1 of [51]). Since hybrid MPC needs to be solved fast online, it is important to maximize the usage of computations so that the number of iterations needed to find a feasible solution is reduced. Many previous works added one feasibility cut each iteration. Some previous works [43, 52, 53] propose adding multiple cutting planes each iteration, or re-formulate the problem such that stronger cuts can be generated. However, the subproblem structure has not been explored by those papers. We propose an innovative technique to add multiple feasibility cuts to the master problem via the recursive structure of the subproblem. The online computation budget prevents us from solving any additional optimization problems (even convex ones), but these cutting planes can be retrieved without any additional computation from the planes we already have. Define \(\tilde{\mathbf{\nu}}_{p}^{m}\), \(\tilde{\mathbf{\lambda}}_{p}^{m}\), \(m=1,...,N-1\) such that: \[\tilde{\mathbf{\nu}}_{p,k}^{m}=\begin{cases}\tilde{\mathbf{\nu}}_{p,k+m}&\forall k+m \leq N\\ \mathbf{0}&\forall k+m>N\end{cases} \tag{23}\] \[\tilde{\mathbf{\lambda}}_{p,k}^{m}=\begin{cases}\tilde{\mathbf{\lambda}}_{p,k+m}& \forall k+m\leq N-1\\ \mathbf{0}&\forall k+m>N-1\end{cases}\] For each \(m\), we add an additional cutting plane: \[\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta})^{T}\tilde{\mathbf{\nu}}_{p}^{m}+\mathbf{d}(\mathbf{\theta },\mathbf{\delta})^{T}\tilde{\mathbf{\lambda}}_{p}^{m}\geq 0,\ \ m=1,...,N-1 \tag{24}\] The addition of cuts (24) works in two ways. First, solutions that contradict (24) may have good predicted objective values according to the optimality cuts, and hence may be selected by the master problem as the next trial solution.
Therefore, the addition of (24) eliminates those trials and accelerates the master problem in finding a feasible solution, especially in the cold-start stage when the master problem is almost empty. Second, cuts (24) predict future infeasible cases by shifting the current infeasible cases into the future, such that future solves do not need to re-discover them. Similar to the result for \(\tilde{\mathbf{\nu}}_{p}\) and \(\tilde{\mathbf{\lambda}}_{p}\), we present: **Corollary 2.1**.: _Any \(\mathbf{\delta}\) that contradicts (24) proves infeasibility for (19) with given \(\mathbf{x}_{ini}\) and \(\mathbf{\theta}\)._ Proof.: We can verify that \(\tilde{\mathbf{\nu}}_{p}^{m}\), \(\tilde{\mathbf{\lambda}}_{p}^{m}\) are dual feasible for any \(m\) if \(\tilde{\mathbf{\nu}}_{p}\), \(\tilde{\mathbf{\lambda}}_{p}\) are dual feasible, by simply plugging them into the dual feasibility constraints. This is a simple extension of Lemma 2. For our problem, \(\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta})\) and \(\mathbf{d}(\mathbf{\theta},\mathbf{\delta})\) depend on \(\mathbf{\delta}\) linearly; it is interesting to realize that from an infeasible subproblem with a single \(\mathbf{\delta}\), we construct a plane that may remove a whole set of infeasible \(\mathbf{\delta}\)'s. This contributes to the efficacy of Benders decomposition, as it makes use of infeasible samples which are usually thrown away by the methods that learn binary solutions offline [18, 19, 26]. ### _Optimality cuts_ If at iteration \(q\) the sub-problem is solved to optimality under the given \(\mathbf{\delta}_{q}\), we want to add a cutting plane as a lower bound that approaches \(v(\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta})\) from below. This can be realized through duality theory. For any \(\mathbf{\nu}\) and \(\mathbf{\lambda}\geq\mathbf{0}\), \(g(\mathbf{\nu},\mathbf{\lambda};\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta}_{q})\leq v(\mathbf{x} _{ini},\mathbf{\theta},\mathbf{\delta}_{q})\). Since the subproblem is convex and we assume there exists a strictly feasible solution (Slater's condition), strong duality is achieved and the best lower bound is tight: \[v(\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta}_{q})=\underset{\mathbf{\nu},\mathbf{\lambda} \geq\mathbf{0}}{\text{maximize}}\ \ g(\mathbf{\nu},\mathbf{\lambda};\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta}_{q}) \tag{25}\] Therefore, we add the best lower bound as a cutting plane to the master problem. Let \(\mathbf{x}_{q}^{0}\) be the unconstrained minimizer of \(\mathcal{L}\) at iteration \(q\), and \(\mathbf{\nu}_{q}^{*},\mathbf{\lambda}_{q}^{*}\) be the maximizer of \(g(\mathbf{\nu},\mathbf{\lambda};\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta}_{q})\); then the cutting plane takes the form: \[v(\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta}) \geq\mathcal{L}(\mathbf{x}_{q}^{0},\mathbf{\nu}_{q}^{*},\mathbf{\lambda}_{q}^{ *};\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta}) \tag{26}\] \[=f_{obj}(\mathbf{x}_{q}^{0})+\mathbf{\nu}_{q}^{*T}\mathbf{A}\mathbf{x}_{q}^{0}+ \mathbf{\lambda}_{q}^{*T}\mathbf{C}\mathbf{x}_{q}^{0}\] \[-\mathbf{\nu}_{q}^{*T}\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta})-\mathbf{\lambda}_{ q}^{*T}\mathbf{d}(\mathbf{\theta},\mathbf{\delta})\triangleq\mathcal{L}^{*}(\mathbf{x}_{ini},\mathbf{ \theta},\mathbf{\delta})\] Note that \(\mathbf{\nu}_{q}^{*},\mathbf{\lambda}_{q}^{*}\) depend on \(\mathbf{\delta}_{q},\mathbf{x}_{ini}\).
We make one key observation: **Proposition 3**.: \(\mathbf{x}_{q}^{0}\) _depends on \(\mathbf{\nu}_{q}^{*},\mathbf{\lambda}_{q}^{*}\) but does not have explicit dependency on \(\mathbf{\delta}_{q},\mathbf{x}_{ini}\), \(\mathbf{\theta}\)._ Proof.: This is true given (17). With Proposition 3, when \(\mathbf{\delta}_{q},\mathbf{x}_{ini}\), \(\mathbf{\theta}\) change, \(\mathbf{x}_{q}^{0}\) is still the unconstrained minimizer of \(\mathcal{L}\) as long as we do not swap \(\mathbf{\nu}_{q}^{*},\mathbf{\lambda}_{q}^{*}\). However, \(\mathbf{\nu}_{q}^{*},\mathbf{\lambda}_{q}^{*}\) are no longer the maximizer of \(g\). Hence, \(\mathcal{L}^{*}(\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta})\) only provides a loose lower bound for \(\mathbf{\delta}\) other than the current \(\mathbf{\delta}_{q}\) used to generate this optimality cut. The subscript \(q\) is dropped in (26), indicating that the inequality is valid for general \(\mathbf{\delta}\). As \(\mathbf{x}_{ini}\) and \(\mathbf{\theta}\) take the same position as \(\mathbf{\delta}_{q}\) in \(\mathcal{L}\), the same argument applies when \(\mathbf{x}_{ini}\) and \(\mathbf{\theta}\) are updated. This will be used to construct warm-starts for hybrid MPC. If the solver used for the subproblem does not return the unconstrained optimizer \(\mathbf{x}_{q}^{0}\), we can leverage strong duality to avoid computing \(\mathbf{x}_{q}^{0}\). Since \(v(\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta}_{q})=f_{obj}(\mathbf{x}_{q}^{*})\triangleq f _{obj,q}^{*}=\mathcal{L}^{*}(\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta}_{q})\), the cutting plane takes the form: \[\begin{split} v(\mathbf{x}_{ini},\mathbf{\theta},\mathbf{\delta})\geq f _{obj,q}^{*}&+\mathbf{\nu}_{q}^{*T}(\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta}_{q})- \mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta}))\\ &+\mathbf{\lambda}_{q}^{*T}(\mathbf{d}(\mathbf{\theta},\mathbf{\delta}_{q})-\mathbf{d} (\mathbf{\theta},\mathbf{\delta}))\end{split} \tag{27}\] ### _The Benders master problem_ The final form of the master problem (14) is: \[\underset{\mathbf{\delta}}{\text{minimize}}\ \ z_{0}\] (28) \[\text{s.t.}\ \ \mathbf{\delta}_{k}\in\{0,1\}\] \[for\ p=1,...,\text{current \# of infeasible subproblem solutions:}\] \[\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta})^{T}\tilde{\mathbf{\nu}}_{p}+\mathbf{d}(\mathbf{\theta},\mathbf{\delta})^{T}\tilde{\mathbf{\lambda}}_{p}\geq 0\] \[\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta})^{T}\tilde{\mathbf{\nu}}_{p}^{m}+\mathbf{d}(\mathbf{\theta},\mathbf{\delta})^{T}\tilde{\mathbf{\lambda}}_{p}^{m}\geq 0,\ \ m=1,...,N-1\] \[for\ q=1,...,\text{current \# of optimal subproblem solutions:}\] \[z_{0}\geq f_{obj,q}^{*}+\mathbf{\nu}_{q}^{*T}(\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta}_{q})-\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta}))+\mathbf{\lambda}_{q}^{*T}(\mathbf{d}(\mathbf{\theta},\mathbf{\delta}_{q})-\mathbf{d}(\mathbf{\theta},\mathbf{\delta}))\] Accumulating and reusing these cuts avoids completely constructing the problem-solution mappings as in explicit MPC [23], especially when the matrices \(\mathbf{A}\), \(\mathbf{C}\) are undetermined before the solver begins (for example, the robot may have an unknown payload until it is handed over). In this paper, we extend this idea to continual learning with a dynamic environment represented by \(\mathbf{\theta}\) shifting online. Only a small number of extreme rays and covers are added for a given \(\mathbf{\theta}\). The solver continuously generates more rays and covers as \(\mathbf{\theta}\) shifts. The new rays and covers are retained, while the duplicated ones are discarded. Once retained, they are never removed. The retained cuts provide increasingly better warm-starts for the incoming new problem instances.
Since the number of extreme rays and covers is finite, this process will terminate and the master problem does not grow infinitely large. This approach bears similarity to the continual learning framework*. Footnote *: A large body of the continual learning literature is based on tasks ([55, 56, 57], to name a few). On the other hand, this work does not define tasks, hence it is more in line with task-free continual learning such as [58]. As MPC proceeds online, problem (III) needs to be constantly resolved with different initial conditions \(\mathbf{x}_{ini}\) and parameter \(\mathbf{\theta}\). Since \(\mathbf{x}_{ini}\) and \(\mathbf{\theta}\) take the same position in the subproblem (13) as \(\mathbf{\delta}\), all the optimality cuts (26) that construct lower bounds for \(\mathbf{\delta}\) are also valid lower bounds for changing \(\mathbf{x}_{ini}\) and \(\mathbf{\theta}\). In addition, the feasibility cuts (22) can also be used for a new initial condition since \(\tilde{\mathbf{\nu}}_{p}\), \(\tilde{\mathbf{\lambda}}_{p}\) are independent of \(\mathbf{x}_{ini}\) and \(\mathbf{\theta}\). Assume we have feasibility and optimality cuts as listed in (28). When a new initial condition \(\mathbf{x}^{\prime}_{ini}\) and a different parameter \(\mathbf{\theta}^{\prime}\) come in, we update the cutting planes: \[for\ p=1,...,\text{current \# of infeasible subproblem:} \tag{30}\] \[\mathbf{b}(\mathbf{x}^{\prime}_{ini},\mathbf{\delta})^{T}\tilde{\mathbf{\nu}}_{p} +\mathbf{d}(\mathbf{\theta}^{\prime},\mathbf{\delta})^{T}\tilde{\mathbf{\lambda}}_{p}\geq 0\] \[for\ q=1,...,\text{current \# of optimal subproblem:}\] \[z_{0}\geq f^{*}_{obj,q}+\mathbf{\nu}^{\star T}_{q}(\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta}_{q})-\mathbf{b}(\mathbf{x}^{\prime}_{ini},\mathbf{\delta})) \tag{31}\] \[+\mathbf{\lambda}^{\star T}_{q}(\mathbf{d}(\mathbf{\theta},\mathbf{\delta}_{q})- \mathbf{d}(\mathbf{\theta}^{\prime},\mathbf{\delta}))\] **Corollary 3.1**.: _For given \(\mathbf{x}^{\prime}_{ini}\) and \(\mathbf{\theta}^{\prime}\), any \(\mathbf{\delta}\) that contradicts (30) proves infeasibility for (19)._ Proof.: This is a simple consequence of Lemma 2, given that \(\tilde{\mathbf{\nu}}_{p}\) and \(\tilde{\mathbf{\lambda}}_{p}\) are independent of \(\mathbf{x}_{ini}\) and \(\mathbf{\theta}\). **Corollary 3.2**.: \[f^{*}_{obj,q}+\mathbf{\nu}^{\star T}_{q}(\mathbf{b}(\mathbf{x}_{ini},\mathbf{\delta}_{q})-\mathbf{ b}(\mathbf{x}^{\prime}_{ini},\mathbf{\delta}))+\mathbf{\lambda}^{\star T}_{q}(\mathbf{d}(\mathbf{ \theta},\mathbf{\delta}_{q})-\mathbf{d}(\mathbf{\theta}^{\prime},\mathbf{\delta}))\] _gives a lower bound of \(v(\mathbf{x}^{\prime}_{ini},\mathbf{\theta}^{\prime},\mathbf{\delta})\)._ Proof.: This follows from the fact that \(\mathbf{x}_{ini}\), \(\mathbf{\theta}\) and \(\mathbf{\delta}\) take the same position in (26). With this technique, when a new initial condition is received, we run the updated master problem first. The master problem automatically provides a good warm-start using knowledge of the previously accumulated cuts, reducing the number of iterations. The modified MPC algorithm with continual learning is provided in Algorithm 2.
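Before the listing, here is a minimal sketch of the cut bookkeeping behind the updates (30)-(31): every stored quantity is independent of \(\mathbf{x}_{ini}\) and \(\mathbf{\theta}\), so re-instantiating a cut for a new problem instance costs only a few vector operations. The class and method names are illustrative, not part of the paper's implementation.

```python
class CutStore:
    """Accumulates Benders cuts across problem instances (continual learning).

    A feasibility cut stores a Farkas certificate (nu_tilde, lam_tilde); an
    optimality cut stores (f_obj*, nu*, lam*, b_q, d_q) evaluated at the
    delta, x_ini, theta that generated it. All stored data are independent of
    the current x_ini and theta, so the cuts can be re-instantiated for any
    new instance via Eqs. (30)-(31)."""

    def __init__(self):
        self.feas = []   # list of (nu_tilde, lam_tilde)
        self.opt = []    # list of (f_star, nu_star, lam_star, b_q, d_q)

    def add_feasibility(self, nu_t, lam_t):
        self.feas.append((nu_t, lam_t))

    def add_optimality(self, f_star, nu_star, lam_star, b_q, d_q):
        self.opt.append((f_star, nu_star, lam_star, b_q, d_q))

    def instantiate(self, b_fun, d_fun, x_ini, theta):
        """Return the cuts for a new (x_ini, theta) as callables of delta.

        b_fun(x_ini, delta) and d_fun(theta, delta) are the right-hand sides
        of Eqs. (12) and (14). A feasibility cut requires g(delta) >= 0; an
        optimality cut lower-bounds the epigraph variable z_0."""
        feas = [lambda delta, nu=nu, lam=lam:
                    nu @ b_fun(x_ini, delta) + lam @ d_fun(theta, delta)
                for nu, lam in self.feas]
        opt = [lambda delta, f=f, nu=nu, lam=lam, bq=bq, dq=dq:
                   f + nu @ (bq - b_fun(x_ini, delta))
                     + lam @ (dq - d_fun(theta, delta))
               for f, nu, lam, bq, dq in self.opt]
        return feas, opt
```

In an actual master problem, the affine dependence of \(\mathbf{b}\) and \(\mathbf{d}\) on \(\mathbf{\delta}\) would be exploited to add these cuts as linear constraints of the MIP rather than as callables.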
```
Input: x_ini, θ, G_a, ε
1   Initialization: LB := −∞, UB := +∞, iteration i := 0
2   Update x_ini, θ in all the existing feasibility and optimality cuts in master problem (28)
3   while |UB − LB| / |UB| ≥ G_a do
4       Solve master problem (28) to get δ and m*_obj,i
5       Let LB := m*_obj,i
6       Solve problem (19) with δ using dual simplex
        if Feasible then
7           Solve (13) with solutions from (19) as warm-starts
8           Let the optimal cost be f*_obj,i and the optimal dual variables be ν*_i, λ*_i
9           if (ν*_i, λ*_i) is not within B_ε of any stored (ν*, λ*) then
10              Add constraint (26) to master problem (28)
11          if f*_obj,i < UB then
12              Let UB := f*_obj,i, u* := u
13
14      else Infeasible
15          Add constraint (22) to master problem (28)
16          if ν̃_i^m, λ̃_i^m is not equivalent to any existing ν̃, λ̃ then
17              Add constraint (24) to master problem (28)
18      i := i + 1
19  return u*
```
**Algorithm 2** Benders MPC with continual learning ## VI Experiment We test our Benders MPC to control the inverted pendulum with moving soft walls. This is also presented as a verification problem in previous works [9, 11, 18], except that we additionally randomize the wall motion. #### Vi-1 Problem setup
Fig. 1: Cart-pole system with moving soft contact walls.
The setup is shown in Fig. 1. Let the nonlinear pendulum dynamics be \(\dot{\mathbf{x}}=f(\mathbf{x},\mathbf{u})+\mathbf{n}\). \(x_{1}\) is the position of the cart, \(x_{2}\) is the angle of the pole, and \(x_{3}\), \(x_{4}\) are their derivatives. The control input \(\mathbf{u}\) is the horizontal actuation force to push the cart. \(\mathbf{n}\) is a random disturbance torque acting on the pole. The moving elastic pads are located to the right of the origin at a distance of \(d_{1}\), and to the left at a distance of \(d_{2}\). Let \(l\) be the length of the pole. When the pole penetrates (\(x_{1}-l\sin(x_{2})\geq d_{1}\) or \(x_{1}-l\sin(x_{2})\leq-d_{2}\)), additional contact force is generated at the tip of the pole. Let the parameter \(\mathbf{\theta}=\begin{bmatrix}d_{1}&d_{2}\end{bmatrix}\). We linearize the pendulum model around \(x_{2}=0\) and use a linear elastic law for the wall contact. The linear model is: \[\begin{split}\dot{x}_{1}&=x_{3}\\ \dot{x}_{2}&=x_{4}\\ \dot{x}_{3}&=\frac{gm_{p}}{m_{c}}x_{2}+\frac{u}{m_{c}}\\ \dot{x}_{4}&=\frac{g(m_{c}+m_{p})}{lm_{c}}x_{2}+\frac{u}{lm_{c}}+ \frac{\lambda_{1}}{lm_{p}}-\frac{\lambda_{2}}{lm_{p}}\end{split} \tag{32}\] Where \(m_{p}\) is the mass of the pole, \(m_{c}\) is the mass of the cart, and \(\lambda_{1}\), \(\lambda_{2}\) are the contact forces from the right and the left walls, respectively. They are both assumed to be positive. \(g\) is the gravitational acceleration. We define the control input \(\mathbf{u}=\begin{bmatrix}u&\lambda_{1}&\lambda_{2}\end{bmatrix}^{T}\). If penetration happens, there is a non-zero contact force. This can be modeled as mixed-integer linear constraints: \[\begin{split}\delta_{i}=0&\Rightarrow\lambda_{i}=0,\ a_{i}( lx_{2}-x_{1})+\frac{\lambda_{i}}{k_{i}}+d_{i}\geq 0\\ \delta_{i}=1&\Rightarrow\lambda_{i}\geq 0,\ a_{i}(lx_{2}-x_{1})+ \frac{\lambda_{i}}{k_{i}}+d_{i}=0\end{split} \tag{33}\] Where \(i=1,2\), \(a_{1}=1\) and \(a_{2}=-1\).
\(k_{1}\) and \(k_{2}\) are elastic coefficients to model the right and left wall contacts. These logic laws are enforced as mixed-integer linear constraints using the standard big-M approach [59], where the maximal distance possible from pole to wall, \(D_{max}\), and maximal possible contact force, \(\lambda_{max}\), are used as big-M constants. We also define the maximal and minimal cart position limits \(d_{min}\) and \(d_{max}\), and angle limits to be \(\pm\frac{\pi}{2}\). The velocity limits of the cart, angular velocity limits of the pole, and control limits \(u_{max}\) are also defined accordingly. This problem has \(n_{x}=4\), \(n_{u}=3\), \(n_{z}=2\), \(n_{c}=20\) (including variable limits). The matrices (variable limits are excluded) after discretization are defined such that: \[\mathbf{E}=\mathbf{I}_{4}+dt\begin{bmatrix}0&0&1&0\\ 0&0&0&1\\ 0&gm_{p}/m_{c}&0&0\\ 0&g(m_{c}+m_{p})/(lm_{c})&0&0\end{bmatrix} \tag{34}\] \[\mathbf{F}=dt\begin{bmatrix}0&0&0\\ 0&0&0\\ 1/m_{c}&0&0\\ 1/(lm_{c})&1/(lm_{p})&-1/(lm_{p})\end{bmatrix} \tag{35}\] \[\mathbf{H}_{1}=\begin{bmatrix}0&0&0&0\\ 0&0&0&0\\ -1&l&0&0\\ 1&-l&0&0\\ 1&-l&0&0\\ -1&l&0&0\end{bmatrix}\ \mathbf{H}_{2}=\begin{bmatrix}0&1&0\\ 0&0&1\\ 0&1/k_{1}&0\\ 0&-1/k_{1}&0\\ 0&0&1/k_{2}\\ 0&0&-1/k_{2}\end{bmatrix} \tag{36}\] \[\mathbf{H}_{3}=\begin{bmatrix}-\lambda_{max}&0\\ 0&-\lambda_{max}\\ D_{max}&0\\ 0&D_{max}\\ 0&D_{max}\\ 0&0\end{bmatrix}\ \mathbf{h}(\mathbf{\theta})=\begin{bmatrix}0\\ 0\\ -d_{1}+D_{max}\\ d_{1}\\ -d_{2}+D_{max}\\ d_{2}\end{bmatrix} \tag{37}\] Other matrices are zeros. The objective function penalizes the control efforts, the velocities, and tries to regulate the pole to the zero position. We choose \(\mathbf{Q}_{k}=\mathbf{I}_{4}\), \(\mathbf{R}_{k}=\mathbf{I}_{2}\). The terminal cost \(\mathbf{Q}_{N}\) is obtained by solving a discrete algebraic Ricatti equation. In the actual experiment, we choose \(m_{c}=1.0kg\), \(m_{p}=0.4kg\), \(l=0.6m\), \(k_{1}=k_{2}=50N/m\), \(u_{max}=20N\). The discretization step size \(dt=0.02s\) and planning horizon \(N=10\). #### Iii-B2 Monte-Carlo experiment We implement Algorithm 2 to solve this problem. We choose \(gap=0.1\), which is identical among all the benchmark methods. The \(\epsilon\) is chosen properly to reduce the number of optimality cuts. We use off-the-shelf solver Gurobi to solve both the master problems (MIPs) and the subproblems (QPs). The controller is coded in Python, and tested inside a pybullet environment [60] on a 12th Gen Intel Core i7-12800H x 20 laptop with 16GB memory. At the beginning of each test episode, the pendulum begins from a perturbed angle of \(x_{2}=10^{\circ}\) such that it will bump into the wall to regain balance. For the rest of each episode, the persistent random disturbance torque \(\mathbf{n}\) is generated from a Gaussian distribution with zero mean and a standard deviation \(\mathbf{\sigma}=8Nm\). The system is constantly disturbed and has to frequently touch the wall for re-balance. The wall motion is generated by a random walk on top of a base sinusoidal motion with a constant offset \(d_{\text{off},i}\): \[d_{i}=d_{\text{off},i}+Asin(wt+\theta_{1})+m_{i},\ i=1,2 \tag{38}\] Where \(m_{i}\) is the integration of a Gaussian noise. We conduct statistical analysis for 10 feasible trajectories under different disturbance torque \(\mathbf{n}\) and wall motion \(m_{i}\). The data is collected from solved problems where at least one contact is planned, removing the cases when contact is not involved. 
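For concreteness, the discretized dynamics matrices (34)–(35) and the wall motion (38) used in the experiment can be written out directly in NumPy. The sketch below uses the physical constants quoted above; the wall-motion offset, amplitude, frequency and noise level are illustrative placeholders, since their numerical values are not reported here.

```
import numpy as np

# Physical constants from the experiment description.
m_c, m_p, l, dt = 1.0, 0.4, 0.6, 0.02
g = 9.81

# Discretized linear dynamics x_{k+1} = E x_k + F u_k, following (32), (34), (35),
# with state x = [x_1, x_2, x_3, x_4] and input u = [u, lambda_1, lambda_2].
E = np.eye(4) + dt * np.array([
    [0.0, 0.0,                         1.0, 0.0],
    [0.0, 0.0,                         0.0, 1.0],
    [0.0, g * m_p / m_c,               0.0, 0.0],
    [0.0, g * (m_c + m_p) / (l * m_c), 0.0, 0.0],
])
F = dt * np.array([
    [0.0,             0.0,             0.0],
    [0.0,             0.0,             0.0],
    [1.0 / m_c,       0.0,             0.0],
    [1.0 / (l * m_c), 1.0 / (l * m_p), -1.0 / (l * m_p)],
])

def wall_positions(t, m, rng, d_off=(0.35, 0.35), A=0.05, w=2.0, theta_1=0.0, sigma=0.01):
    """Wall distances d_1, d_2 from (38): a base sinusoid plus an integrated
    Gaussian noise (random walk) m_i.  The values of d_off, A, w and sigma are
    placeholders, not the ones used in the reported experiment."""
    m = m + sigma * rng.standard_normal(2) * dt   # integrate the Gaussian noise
    d = np.asarray(d_off) + A * np.sin(w * t + theta_1) + m
    return d, m
```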
The disturbance torque is unknown to the controller. The wall motions are provide to controller only at run-time. The following methods are also used for benchmark: 1. Warm-started Branch and Bound. We implemented the Branch and Bound algorithm in Python with warm-start as described by [11]. 2. Off-the-shelf solver. We implemented the off-the-shelf solver Gurobi. The problem is setup only once and solved iteratively such that warm-starts are automatically used to minimize the solving time. The default setting is used to optimize the performance. 3. GBD without warm-start. We implemented Algorithm 1 such that previous cuts are not used to warm-start the next problem. #### Iii-B3 Results Fig. 2 gives the histogram result showing the number of iterations to solve the problem and their frequencies. Thanks to the continual learning, 99.2% of problem instances are solved within 5 iterations by GBD, except for a few problems during the cold-start phase taking more than 10 iterations. This 99.2% problem instances have an average solving speed of \(500-600Hz\). This solving time represents a complete procedure of Algorithm 2, where the time spend in solving master and subproblems account for 60% of the total time. Note that previous work [40] reported over 90% solving time spent on the master problem. On the contrary, we report that the master problem is oftentimes solved within the presolve stage, since hundreds of cuts are accumulated after a few problem instances. This accounts for less than 30% of the total solving time. On the other hand, Branch and Bound algorithm relies on subproblems. The warm-start scheme reduces the number of solved subproblems by 50%. However, the BB solver still goes through more than 10\(\times\) subproblems to converge compared to the GBD solver, from our averaged data. The gist of this warm-start scheme is to shift the covers in time. Ideally, the computations up till \(k=N-2\) can be reused and the algorithm only needs to compute the binary input at \(k=N-1\). However, this scheme becomes less effective as the amplitude of wall motion \(A\) increases. The reason is that the current contact sequence cannot simply be shifted and has to be recomputed before \(k=N-1\). Consequently, additional covers need to be refined. Fig. 3 shows the solving speed in Hz during the beginning of an episode from our data. The solver begins from cold-start but has to plan contact ever since \(t=0\). After one iteration of from cold-start (taking \(200ms\) in Fig. 3), the cuts accumulate to provide warm-start for the next iteration, and solving speed increases over Gurobi (\(200-300Hz\)). Without warm-start, the solving speed remains on average \(25Hz\). Due to the fast cold-start, even if the system dynamics are only known at run-time or completely change, the time cost to learn new dynamics for our problem is at the scale of hundreds of milliseconds. This is much faster than training neural-network-based policies [18]. If global optimal solutions are not required in the beginning, the learning time can be further reduced. ## VII Conclusion, Discussion and Future Work In this paper, we proposed a hybrid MPC solver based on Generalized Benders decomposition with continual learning. The algorithm accumulates cutting planes from the invariant dual space of the subproblems under than randomly changing environment. After a cold-start phase at the scale of hundreds of milliseconds, the accumulated cuts provide warm-starts for the new problem instances to increase the solving speed. 
This leads to solving speeds that are 2–3 times faster than the commercial solver Gurobi in controlling the cart-pole system with randomly moving soft walls. Several theoretical analyses and hardware experiments could make these preliminary results more thorough, for example analyzing how well the learned cuts generalize to new problem instances, or testing the scalability of this algorithm on more complex problems. We can also combine Branch and Bound with Benders cuts to leverage the strengths of both. Although we already have results for some of the questions above, they do not fit into the current paper and will be included in the future journal version.

Fig. 2: Comparison of the number of solver iterations for different problems \((\mathbf{x}_{ini},\mathbf{\theta})\). The x-axis is the range of solver iterations; the y-axis is the count of problem instances from the collected trajectories. Left: the proposed GBD with continual learning. Right: Branch and Bound with warm-start [11].

Fig. 3: A solving procedure from cold-start when the pole bumps into the moving elastic wall. The x-axis is time and the y-axis is the solving speed in Hz. Left: comparison of solving speed between GBD with continual learning for warm-start, GBD without any warm-start, and the off-the-shelf solver Gurobi. Right: the number of cuts accumulated during the solving procedure.

## Appendix A Proof of Lemma 1

We present a proof of Lemma 1. Recall Farkas' lemma (Theorem 4.6 in [51]):

**Theorem 4**.: _Let \(\tilde{\mathbf{A}}\in\mathbb{R}^{m\times n}\) and \(\tilde{\mathbf{b}}\in\mathbb{R}^{m}\). Then exactly one of the two following alternatives holds:_

1. _There exists some_ \(\tilde{\mathbf{x}}\geq\mathbf{0}\) _such that_ \(\tilde{\mathbf{A}}\tilde{\mathbf{x}}=\tilde{\mathbf{b}}\)_;_
2. _There exists some vector_ \(\tilde{\mathbf{p}}\) _such that_ \(\tilde{\mathbf{p}}^{T}\tilde{\mathbf{A}}\geq\mathbf{0}^{T}\) _and_ \(\tilde{\mathbf{p}}^{T}\tilde{\mathbf{b}}<0\)_._

For any vector \(\mathbf{x}\) there exist \(\mathbf{y}\geq\mathbf{0}\), \(\mathbf{z}\geq\mathbf{0}\) such that \(\mathbf{x}=\mathbf{y}-\mathbf{z}\), and the inequality constraint \(\mathbf{C}\mathbf{x}\leq\mathbf{d}\) is equivalent to \(\mathbf{C}\mathbf{x}+\mathbf{\delta}=\mathbf{d}\) for some \(\mathbf{\delta}\geq\mathbf{0}\). Hence the first condition of Lemma 1 is equivalent to the existence of \(\tilde{\mathbf{x}}=\begin{bmatrix}\mathbf{y}^{T}&\mathbf{z}^{T}&\mathbf{\delta}^{T}\end{bmatrix}^{T}\geq 0\) such that:

\[\underbrace{\begin{bmatrix}\mathbf{A}&-\mathbf{A}&\mathbf{0}\\ \mathbf{C}&-\mathbf{C}&\mathbf{I}\end{bmatrix}}_{\tilde{\mathbf{A}}}\underbrace{\begin{bmatrix}\mathbf{y}\\ \mathbf{z}\\ \mathbf{\delta}\end{bmatrix}}_{\tilde{\mathbf{x}}}=\underbrace{\begin{bmatrix}\mathbf{b} \\ \mathbf{d}\end{bmatrix}}_{\tilde{\mathbf{b}}} \tag{39}\]

By Theorem 4, the alternative to this condition is the existence of \(\tilde{\mathbf{p}}=\begin{bmatrix}\mathbf{y}^{T}&\mathbf{z}^{T}\end{bmatrix}^{T}\) such that \(\tilde{\mathbf{p}}^{T}\tilde{\mathbf{A}}\geq\mathbf{0}^{T}\) and \(\tilde{\mathbf{p}}^{T}\tilde{\mathbf{b}}<0\), which gives the second condition of Lemma 1.

_Acknowledgements_ The author would like to thank Zehui Lu, Shaoshuai Mou, and Yan Gu for their helpful discussions and suggestions.
2305.07394
Remarks on sums of reciprocals of fractional parts
The Diophantine sums $\sum_{n=1}^N \| n \alpha \|^{-1}$ and $\sum_{n=1}^N n^{-1} \| n \alpha \|^{-1}$ appear in many different areas including the ergodic theory of circle rotations, lattice point counting and random walks, often in connection with Fourier analytic methods. Beresnevich, Haynes and Velani gave estimates for these and related sums in terms of the Diophantine approximation properties of $\alpha$ that are sharp up to a constant factor. In the present paper, we remove the constant factor gap between the upper and the lower estimates, and thus find the precise asymptotics for a wide class of irrationals. Our methods apply to sums with the fractional part instead of the distance from the nearest integer function, and to sums involving shifts $\| n \alpha + \beta \|$ as well. We also comment on a higher dimensional generalization of these sums.
Bence Borda
2023-05-12T11:41:47Z
http://arxiv.org/abs/2305.07394v1
###### Abstract ###### Abstract The Diophantine sums \(\sum_{n=1}^{N}\|n\alpha\|^{-1}\) and \(\sum_{n=1}^{N}n^{-1}\|n\alpha\|^{-1}\) appear in many different areas including the ergodic theory of circle rotations, lattice point counting and random walks, often in connection with Fourier analytic methods. Beresnevich, Haynes and Velani gave estimates for these and related sums in terms of the Diophantine approximation properties of \(\alpha\) that are sharp up to a constant factor. In the present paper, we remove the constant factor gap between the upper and the lower estimates, and thus find the precise asymptotics for a wide class of irrationals. Our methods apply to sums with the fractional part instead of the distance from the nearest integer function, and to sums involving shifts \(\|n\alpha+\beta\|\) as well. We also comment on a higher dimensional generalization of these sums. **Remarks on sums of reciprocals of fractional parts** **Bence Borda** Graz University of Technology Steyrergasse 30, 8010 Graz, Austria Email: [email protected] **Keywords: Diophantine approximation, continued fraction,** small fractional parts, Diophantine sum, metric number theory **Mathematics Subject Classification (2020): 11J54, 11J71, 11J83** ## 1 Introduction The subject of this paper is the asymptotic behavior of the Diophantine sums \[\sum_{n=1}^{N}\frac{1}{\|n\alpha\|}\qquad\text{ and }\qquad\sum_{n=1}^{N}\frac{1}{n\|n \alpha\|} \tag{1}\] for various irrational \(\alpha\), where \(\|\cdot\|\) denotes the distance from the nearest integer function. These sums appear in many different fields such as uniform distribution theory [21, 28], multiplicative Diophantine approximation [4, 13], lattice point counting in polygons [2, 9, 22, 23, 31], dynamical systems [10, 15, 16, 17] and random walks [6, 7, 8, 33]. We refer to [3, 4] for a comprehensive survey. The behavior of the more general sum \(\sum_{n=1}^{N}n^{-p}\|n\alpha\|^{-q}\) is highly sensitive to the value of the exponents \(p,q\geq 0\)[27]. Sharp estimates in the case \(p=q=2\) were given in [2], and in the case \(p=q>1\) in [11]. The case \(p>q\) leads to convergent series for certain irrationals [12]. For higher dimensional generalizations of the sums in (1), see [1, 19, 20, 29]. Hardy and Littlewood [22, 23, 24], Haber and Osgood [21], Kruse [27] and more recently Beresnevich, Haynes and Velani [4] gave estimates for the sums in (1) in terms of the Diophantine approximation properties of \(\alpha\) that are sharp up to a constant factor. The main goal of the present paper is to remove the constant factor gap between the upper and the lower estimates, thereby establishing the precise asymptotics. Recall that an irrational \(\alpha\) is called badly approximable if \(\inf_{n\in\mathbb{N}}n\|n\alpha\|>0\). The best known estimates for such an \(\alpha\) are1[4] Footnote 1: For the sake of readability, with a slight abuse of notation in all error terms \(\log x:=\log\max\{e,x\}\). \[N\log N\leq\sum_{n=1}^{N}\frac{1}{\|n\alpha\|}\ll N\log N\qquad\text{and} \qquad\frac{1}{2}(\log N)^{2}\leq\sum_{n=1}^{N}\frac{1}{n\|n\alpha\|}\leq 33( \log N)^{2}+O(\log N).\] Our first result improves these. **Theorem 1**.: _Let \(\alpha\) be a badly approximable irrational. 
For any \(N\geq 1\),_ \[\sum_{n=1}^{N}\frac{1}{\|n\alpha\|}=2N\log N+O(N)\qquad\text{and}\qquad\sum_{n= 1}^{N}\frac{1}{n\|n\alpha\|}=(\log N)^{2}+O(\log N)\] _with implied constants depending only on \(\alpha\)._ The only previously known precise asymptotic result (without a constant factor gap between the upper and the lower estimates) for the sums in (1) appeared in an obscure paper of Erdos from 1948 [18], in which he showed that \[\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq 1/(2N)\end{subarray}}^{N}\frac{1}{\|n\alpha\|}\sim 2N\log N \qquad\text{and}\qquad\sum_{n=1}^{N}\frac{1}{n\|n\alpha\|}\sim(\log N)^{2} \qquad\text{for a.e. }\alpha. \tag{2}\] This result seems to have escaped the attention of several later authors who subsequently proved weaker estimates in the metric setting. We learned about (2) from the recent historical survey [3], and it served as the starting point of our investigations. Our next result improves (2) by finding the precise order of the error term. **Theorem 2**.: _Let \(c>0\) be an arbitrary constant, and let \(\varphi\) be a positive nondecreasing function on \((0,\infty)\). If \(\sum_{k=1}^{\infty}1/\varphi(k)<\infty\), then for a.e. \(\alpha\),_ \[\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq c/N\end{subarray}}^{N}\frac{1}{\|n\alpha\|} =2N\log N+O(N\varphi(\log N)^{1/2}),\] \[\sum_{n=1}^{N}\frac{1}{n\|n\alpha\|} =(\log N)^{2}+O(\varphi(\log N)+\log N\log\log N)\] _with implied constants depending only on \(c\), \(\varphi\) and \(\alpha\). If \(\sum_{k=1}^{\infty}1/\varphi(k)=\infty\), then for a.e. \(\alpha\) the sets_ \[\Bigg{\{}N\in\mathbb{N}\,:\,\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq c/N\end{subarray}}^{N}\frac{1}{\|n\alpha\|}\geq 2N\log N+N \varphi(\log N)^{1/2}\Bigg{\}},\] \[\Bigg{\{}N\in\mathbb{N}\,:\,\sum_{n=1}^{N}\frac{1}{n\|n\alpha\|} \geq(\log N)^{2}+\varphi(\log N)\Bigg{\}}\] _have upper asymptotic density 1._ Recall that the upper asymptotic density of a set \(A\subseteq\mathbb{N}\) is defined as \(\limsup_{N\to\infty}|A\cap[1,N]|/N\). In particular, for a.e. \(\alpha\) we have \[\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq c/N\end{subarray}}^{N}\frac{1}{\|n\alpha\|} =2N\log N+O(N(\log N)^{1/2}(\log\log N)^{1/2+\varepsilon}),\] \[\sum_{n=1}^{N}\frac{1}{n\|n\alpha\|} =(\log N)^{2}+O(\log N(\log\log N)^{1+\varepsilon})\] with any \(\varepsilon>0\), but these fail with \(\varepsilon=0\). Note that the cutoff \(\|n\alpha\|\geq c/N\) in the first sum is necessary, as for a.e. \(\alpha\) we have \(\|n\alpha\|<(n\log n\log\log n)^{-1}\) for infinitely many \(n\in\mathbb{N}\). Our main results, Theorems 3 and 4 on the sums in (1) with a general irrational \(\alpha\) are presented in Section 2. We discuss closely related sums involving fractional parts and shifts \(\|n\alpha+\beta\|\), and comment on a higher dimensional generalization in Section 3. Main estimates For the rest of the paper, \(\alpha\) is an irrational number with continued fraction \(\alpha=[a_{0};a_{1},a_{2},\ldots]\) and convergents \(p_{k}/q_{k}=[a_{0};a_{1},a_{2},\ldots,a_{k}]\). Set \(s_{K}=\sum_{k=1}^{K}a_{k}\). We refer to [26] for a general introduction to continued fractions. ### General irrationals In this section, we prove our main estimates for the sums in (1) with a general irrational \(\alpha\). **Theorem 3**.: _Let \(c>0\) be an arbitrary constant. 
For any \(K\geq 0\) and \(q_{K}\leq N<q_{K+1}\),_ \[\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq c/N\end{subarray}}^{N}\frac{1}{\|n\alpha\|}=2N\log N+O\left(( a_{K+1}^{1/2}+\log s_{K+1})N\right).\] _If in addition \(4(ca_{K+1})^{1/2}q_{K}\leq N\), then_ \[\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq c/N\end{subarray}}^{N}\frac{1}{\|n\alpha\|}\geq 2N\log N+q_{K+1}-O(( \log s_{K+1})N).\] _The implied constants depend only on \(c\)._ **Proof.** If \(N\leq 4c\), then \[0\leq\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq c/N\end{subarray}}^{N}\frac{1}{\|n\alpha\|}\leq\frac{N^{2}}{c }\ll 1,\] and the claims trivially follow. We may thus assume that \(N>4c\). Summing the identity \(\|n\alpha\|^{-1}=\int_{0}^{\infty}t^{-2}\mathds{1}_{\{\|n\alpha\|\leq t\}} \,\mathrm{d}t\) over all \(1\leq n\leq N\) such that \(\|n\alpha\|\geq c/N\) yields \[\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq c/N\end{subarray}}^{N}\frac{1}{\|n\alpha\|}=\int_{c/N}^{\infty }\frac{1}{t^{2}}\left|\{1\leq n\leq N\,:\,c/N\leq\|n\alpha\|\leq t\}\right|\, \mathrm{d}t. \tag{3}\] Let \(R=\frac{1}{8}(s_{K}+N/q_{K})+c\), and note that \(c/N\leq R/N\leq 1/2\). The latter inequality follows from the assumption \(c<N/4\) and the general estimate \(s_{K}\leq q_{K}\), which can be easily seen e.g. by induction on \(K\). We will estimate the integral in (3) on the intervals \([c/N,R/N]\), \([R/N,1/2]\) and \([1/2,\infty)\) separately. The integral on \([1/2,\infty)\) is negligible: \[\int_{1/2}^{\infty}\frac{1}{t^{2}}\left|\{1\leq n\leq N\,:\,c/N\leq\|n\alpha \|\leq t\}\right|\,\mathrm{d}t\leq\int_{1/2}^{\infty}\frac{N}{t^{2}}\,\mathrm{ d}t=2N. \tag{4}\] Consider now the integral on \([R/N,1/2]\). Let \(D_{N}(\alpha)\) denote the discrepancy of the point set \(\{n\alpha\}\), \(1\leq n\leq N\). That is, \[D_{N}(\alpha)=\sup_{I\subseteq[0,1]}\left|\sum_{n=1}^{N}\mathds{1}_{I}(\{n \alpha\})-\lambda(I)N\right|,\] where the supremum is over all intervals \(I\subseteq[0,1]\), and \(\lambda\) is the Lebesgue measure. In particular, \[\left|\{1\leq n\leq N\,:\,c/N\leq\|n\alpha\|\leq t\}\right|=2\left(t-\frac{c} {N}\right)N+O(D_{N}(\alpha))\quad\text{uniformly in }t\in[R/N,1/2].\] A classical discrepancy estimate [28, p. 126] states that \(D_{N}(\alpha)\leq 2(s_{K}+N/q_{K})\ll R\). Therefore \[\begin{split}\int_{R/N}^{1/2}\frac{1}{t^{2}}\left|\{1\leq n\leq N \,:\,c/N\leq\|n\alpha\|\leq t\}\right|\,\mathrm{d}t&=\int_{R/N}^ {1/2}\frac{2Nt-2c+O(D_{N}(\alpha))}{t^{2}}\,\mathrm{d}t\\ &=2N\log N+O(N\log R).\end{split} \tag{5}\] Finally, consider the integral on \([c/N,R/N]\). We will show that \[\int_{c/N}^{R/N}\frac{1}{t^{2}}\left|\{1\leq n\leq N\,:\,\|n\alpha\|\leq t\} \right|\,\mathrm{d}t\ll(a_{K+1}^{1/2}+\log R)N. \tag{6}\] For any \(1\leq n<n^{\prime}\leq N\) we have \(1\leq n^{\prime}-n<q_{K+1}\), hence by the best rational approximation property of continued fraction convergents, \(\|n^{\prime}\alpha-n\alpha\|\geq\|q_{K}\alpha\|\geq 1/(2q_{K+1})\). The pigeonhole principle thus shows that \[|\{1\leq n\leq N\,:\,\|n\alpha\|\leq t\}|\leq 4q_{K+1}t+1,\] therefore \[\int_{c/N}^{R/N}\frac{1}{t^{2}}\left|\{1\leq n\leq N\,:\,\|n\alpha\|\leq t\} \right|\,\mathrm{d}t\ll q_{K+1}\log R. \tag{7}\] The previous formula implies (6) whenever \(N\gg q_{K+1}\). It remains to prove (6) when, say, \(N<q_{K+1}/4\). Let \(A=\{1\leq n\leq N\,:\,q_{K}\mid n\}\) and \(B=\{1\leq n\leq N\,:\,q_{K}\nmid n\}\). Consider an element \(n=jq_{K}\in A\) with some integer \(1\leq j\leq N/q_{K}\), and observe that \(\|jq_{K}\alpha\|\leq t\) implies that \(j\leq 2q_{K+1}t\). 
In particular, \[|\{n\in A\,:\,\|n\alpha\|\leq t\}|\leq\min\left\{\frac{N}{q_{K}},2q_{K+1}t \right\}.\] If \(N\leq(q_{K}q_{K+1})^{1/2}\), then \[\int_{c/N}^{R/N}\frac{1}{t^{2}}\left|\{n\in A\,:\,\|n\alpha\|\leq t\} \right|\,\mathrm{d}t\leq\int_{c/N}^{R/N}\frac{N/q_{K}}{t^{2}}\,\mathrm{d}t \leq\frac{N^{2}}{cq_{K}}\ll a_{K+1}^{1/2}N.\] If \(N>(q_{K}q_{K+1})^{1/2}\), then again \[\begin{split}\int_{c/N}^{R/N}\frac{1}{t^{2}}\left|\{n\in A\,:\, \|n\alpha\|\leq t\}\right|\,\mathrm{d}t&\leq\int_{c/N}^{cN/(q_{K }q_{K+1})}\frac{2q_{K+1}t}{t^{2}}\,\mathrm{d}t+\int_{cN/(q_{K}q_{K+1})}^{ \infty}\frac{N/q_{K}}{t^{2}}\,\mathrm{d}t\\ &\ll q_{K+1}\log\frac{N^{2}}{q_{K}q_{K+1}}+q_{K+1}\\ &\ll a_{K+1}^{1/2}N.\end{split}\] Consider now \(B_{j}=\{n\in B\,:\,jq_{K}<n<(j+1)q_{K}\}\) with some integer \(0\leq j<\lceil N/q_{K}\rceil\). Observe that \(|n\alpha-np_{K}/q_{K}|\leq n/(q_{K}q_{K+1})<1/q_{K}\), and that \(np_{K}\pmod{q_{K}}\) attains nonzero residue classes. Therefore the set \(\{n\alpha\}\), \(n\in B_{j}\) coincides with \(\{\ell/q_{K}\,:\,1\leq\ell\leq q_{K}-1\}\) (or a subset thereof in case \(j=\lceil N/q_{K}\rceil-1\)) up to an error of \(1/q_{K}\). Since \(\mathrm{sgn}(\alpha-p_{K}/q_{K})=(-1)^{K}\), for all \(n\in B_{j}\) with \(np_{K}\not\equiv(-1)^{K-1}\pmod{q_{K}}\), we have \(\|n\alpha\|\geq 1/q_{K}\). The identity \(q_{K}p_{K-1}-q_{K-1}p_{K}=(-1)^{K}\) shows that the unique \(n\in B_{j}\) with \(np_{K}\equiv(-1)^{K-1}\pmod{q_{K}}\) is \(n=jq_{K}+q_{K-1}\). The assumption \(N<q_{K+1}/4\) ensures that \[\|(jq_{K}+q_{K-1})\alpha\|=\|q_{K-1}\alpha\|-j\|q_{K}\alpha\|\geq\frac{1}{2q_ {K}}-\frac{N}{q_{K}}\cdot\frac{1}{q_{K+1}}>\frac{1}{4q_{K}}.\] In particular, \(\|n\alpha\|\geq 1/(4q_{K})\) for all \(n\in B\). Since \(\{n\alpha\}\), \(n\in B_{j}\) is well approximated by (a subset of) \(\{\ell/q_{K}\,:\,1\leq\ell\leq q_{K}-1\}\), we have \[|\{n\in B_{j}\,:\,\|n\alpha\|\leq t\}|\ll q_{K}t\quad\text{uniformly in }1/(4q_{K}) \leq t\leq 1/2,\] and by summing over \(0\leq j<\lceil N/q_{K}\rceil\), \[|\{n\in B\,:\,\|n\alpha\|\leq t\}|\ll Nt\quad\text{uniformly in }1/(4q_{K}) \leq t\leq 1/2.\] Therefore \[\int_{c/N}^{R/N}\frac{1}{t^{2}}\,|\{n\in B\,:\,\|n\alpha\|\leq t\}|\ \mathrm{d}t \ll\int_{1/(4q_{K})}^{R/N}\frac{Nt}{t^{2}}\,\mathrm{d}t\ll N\log R.\] Adding our estimates for \(n\in A\) and \(n\in B\) leads to (6). Formulas (3)-(6) yield \[\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq c/N\end{subarray}}^{N}\frac{1}{\|n\alpha\|}=2N\log N+O\left(( a_{K+1}^{1/2}+\log R)N\right)=2N\log N+O\left((a_{K+1}^{1/2}+\log s_{K+1})N \right),\] as claimed. Suppose now that \(4(ca_{K+1})^{1/2}q_{K}\leq N\), and let us prove the lower bound. We may assume that \(a_{K+1}\) is large in terms of \(c\). In particular, \(R/N>8c/N\), and formulas (3)-(5) yield \[\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq c/N\end{subarray}}^{N}\frac{1}{\|n\alpha\|}\geq 2N\log N+\int_{c/ N}^{8c/N}\frac{1}{t^{2}}\,|\{1\leq n\leq N\,:\,c/N\leq\|n\alpha\|\leq t\}|\ \mathrm{d}t-O(N\log R).\] Let \(c/N\leq t\leq 8c/N\). Observe that \((q_{K}+q_{K+1})c/N\leq j\leq tq_{K+1}\) implies both \(c/N\leq\|jq_{K}\alpha\|\leq t\) and \(j\leq N/q_{K}\), the latter by the assumption \(4(ca_{K+1})^{1/2}q_{K}\leq N\). 
Therefore \[|\{1\leq n\leq N\,:\,c/N\leq\|n\alpha\|\leq t\}|\geq tq_{K+1}-(q_{K}+q_{K+1}) c/N-1,\] and so \[\int_{c/N}^{8c/N}\frac{1}{t^{2}}\,|\{1\leq n\leq N\,:\,c/N\leq\|n\alpha\|\leq t \}|\ \mathrm{d}t\geq(\log 8)q_{K+1}-\frac{7}{8}(q_{K}+q_{K+1})-\frac{N}{c} \geq q_{K+1}-O(N).\] Hence \[\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq c/N\end{subarray}}^{N}\frac{1}{\|n\alpha\|}\geq 2N\log N+q_{K+1}-O(N \log R)=2N\log N+q_{K+1}-O((\log s_{K+1})N),\] as claimed. **Theorem 4**.: _For any \(K\geq 0\) and \(q_{K}\leq N<q_{K+1}\),_ \[\sum_{n=1}^{N}\frac{1}{n\|n\alpha\|}=(\log N)^{2}+\frac{\pi^{2}}{6}s_{K}+a_{K +1}\sum_{1\leq j\leq N/q_{K}}\frac{1}{j^{2}}+O\left(\sum_{k=1}^{K+1}a_{k}^{1/2 }\log a_{k}+(\log s_{K+1})\log N\right)\] _with a universal implied constant._ **Proof.** We estimate the contribution of the terms with \(\|n\alpha\|<1/(2n)\) and \(\|n\alpha\|\geq 1/(2n)\) separately. Let \(0\leq k\leq K\), and consider the integers \(q_{k}\leq n<q_{k+1}\). If \(\|n\alpha\|<1/(2n)\), then by Legendre's theorem [26, p. 30] we have \(q_{k}\mid n\). A given multiple \(n=jq_{k}\) satisfies \(\|jq_{k}\alpha\|<1/(2jq_{k})\) if and only if \(j<(2q_{k}\|q_{k}\alpha\|)^{-1/2}\). Therefore \[\sum_{\begin{subarray}{c}q_{k}\leq n<q_{k+1}\\ \|n\alpha\|<1/(2n)\end{subarray}}\frac{1}{n\|n\alpha\|}=\sum_{1\leq j<(2q_{k} \|q_{k}\alpha\|)^{-1/2}}\frac{1}{jq_{k}\|jq_{k}\alpha\|},\] consequently \[\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|<1/(2n)\end{subarray}}^{N}\frac{1}{n\|n\alpha\|} =\sum_{k=0}^{K-1}\sum_{1\leq j<(2q_{k}\|q_{k}\alpha\|)^{-1/2}} \frac{1}{jq_{k}\|jq_{k}\alpha\|}+\sum_{1\leq j\leq\min\{(2q_{K}\|q_{K}\alpha\| )^{-1/2},N/q_{K}\}}\frac{1}{jq_{K}\|jq_{K}\alpha\|}\] \[=\sum_{k=0}^{K-1}\frac{1}{q_{k}\|q_{k}\alpha\|}\left(\frac{\pi^{2 }}{6}+O(a_{k+1}^{-1/2})\right)+\frac{1}{q_{K}\|q_{K}\alpha\|}\left(\sum_{1\leq j \leq N/q_{K}}\frac{1}{j^{2}}+O(a_{K+1}^{-1/2})\right)\] \[=\frac{\pi^{2}}{6}\sum_{k=1}^{K}a_{k}+a_{K+1}\sum_{1\leq j\leq N/q _{K}}\frac{1}{j^{2}}+O\left(\sum_{k=1}^{K+1}a_{k}^{1/2}\right).\] Letting \[T_{N}=\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq 1/(2N)\end{subarray}}^{N}\frac{1}{\|n\alpha\|},\] we have \[\frac{1}{n\|n\alpha\|}\mathds{1}_{\{\|n\alpha\|\geq 1/(2n)\}} =\frac{1}{n}\Bigg{(}\sum_{\begin{subarray}{c}\ell=1\\ \|\ell\alpha\|\geq 1/(2n)\end{subarray}}^{n}\frac{1}{\|\ell\alpha\|}-\sum_{ \begin{subarray}{c}\ell=1\\ \|\ell\alpha\|\geq 1/(2n)\end{subarray}}^{n-1}\frac{1}{\|\ell\alpha\|}\Bigg{)}\] \[=\frac{1}{n}\Bigg{(}T_{n}-T_{n-1}-\sum_{\begin{subarray}{c}\ell=1 \\ 1/(2n)\leq\|\ell\alpha\|<1/(2(n-1))\end{subarray}}^{n-1}\frac{1}{\|\ell\alpha\|} \Bigg{)}\] \[=\frac{T_{n}-T_{n-1}}{n}+O\left(\sum_{\ell=1}^{n-1}\mathds{1}_{\{ 1/(2n)\leq\|\ell\alpha\|<1/(2(n-1))\}}\right).\] Summation by parts resp. switching the order of summation leads to \[\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq 1/(2n)\end{subarray}}^{N}\frac{1}{n\|n\alpha\|}=\sum_{n=1}^{N-1} \frac{T_{n}}{n(n+1)}+\frac{T_{N}}{N}+O\left(\sum_{\ell=1}^{N-1}\mathds{1}_{\{ 1/(2N)\leq\|\ell\alpha\|<1/(2\ell)\}}\right).\] As above, Legendre's theorem shows that \(\sum_{q_{k}\leq\ell<q_{k+1}}\mathds{1}_{\{\|\ell\alpha\|<1/(2\ell)\}}\ll a_{k+1} ^{1/2}\), hence the error term in the previous formula is \(\sum_{\ell=1}^{N-1}\mathds{1}_{\{1/(2N)\leq\|\ell\alpha\|<1/(2\ell)\}}\ll\sum_ {k=1}^{K+1}a_{k}^{1/2}\). 
An application of Theorem 3 with \(c=1/2\) gives \[\sum_{n=1}^{N-1}\frac{T_{n}}{n(n+1)} =\sum_{n=1}^{N-1}\frac{2\log n}{n+1}+O\left(\sum_{k=0}^{K}\sum_{q_{ k}\leq n<q_{k+1}}\frac{a_{k+1}^{1/2}}{n}+\sum_{n=1}^{N-1}\frac{\log s_{K+1}}{n}\right)\] \[=(\log N)^{2}+O\left(\sum_{k=1}^{K+1}a_{k}^{1/2}\log a_{k}+(\log s _{K+1})\log N\right),\] and \(T_{N}/N\ll\log N+a_{K+1}^{1/2}+\log s_{K+1}\). Therefore \[\sum_{\begin{subarray}{c}n=1\\ \|na\|\geq 1/(2n)\end{subarray}}^{N}\frac{1}{n\|n\alpha\|}=(\log N)^{2}+O \left(\sum_{k=1}^{K+1}a_{k}^{1/2}\log a_{k}+(\log s_{K+1})\log N\right),\] which together with (8) proves the claim. ### Corollaries Theorems 3 and 4 establish the asymptotics of the sums in (1) for a large class of irrational \(\alpha\). For instance, we immediately obtain \[\sum_{\begin{subarray}{c}n=1\\ \|na\|\geq c/N\end{subarray}}^{N}\frac{1}{\|n\alpha\|}\sim 2N\log N\quad\text{ if }a_{k}=o(k^{2}),\] \[\sum_{n=1}^{N}\frac{1}{n\|n\alpha\|}\sim(\log N)^{2}\quad\text{ if }s_{k}=o(k^{2}).\] As a further example, consider Euler's number \(e=[2;1,2,1,\ldots,1,2n,1,\ldots]\). The convergent denominators grow at the rate \(\log q_{k}=(k/3)\log k+O(k)\), and \(s_{K}=K^{2}/9+O(K)\). Theorems 3 and 4 thus give \[\sum_{\begin{subarray}{c}n=1\\ \|na\|\geq c/N\end{subarray}}^{N}\frac{1}{\|ne\|} =2N\log N+O\left(\frac{N(\log N)^{1/2}}{(\log\log N)^{1/2}} \right),\] \[\sum_{n=1}^{N}\frac{1}{n\|ne\|} =(\log N)^{2}+\frac{\pi^{2}}{6}\cdot\frac{(\log N)^{2}}{(\log \log N)^{2}}+O\left(\frac{(\log N)^{3/2}}{(\log\log N)^{1/2}}\right).\] We will need certain basic facts from the metric theory of continued fractions in order to deduce the a.e. asymptotics from Theorems 3 and 4. Khinchin and Levy showed that \(\log q_{k}\sim\frac{\pi^{2}}{12\log 2}k\) for a.e. \(\alpha\), whereas Borel and Bernstein proved that given a positive function \(\varphi\), for a.e. \(\alpha\) we have \(a_{k}\geq\varphi(k)\) for infinitely many \(k\in\mathbb{N}\) if and only if \(\sum_{k=1}^{\infty}1/\varphi(k)=\infty\). A theorem of Diamond and Vaaler [14] on trimmed sums of partial quotients states that \[\lim_{K\to\infty}\frac{\sum_{k=1}^{K}a_{k}-\max_{1\leq k\leq K}a_{k}}{K\log K} =\frac{1}{\log 2}\qquad\text{for a.e. }\alpha. \tag{9}\] We refer to the monograph [25] for the proof of all these results and for more context. Proof of Theorem 2.: Assume first that \(\sum_{k=1}^{\infty}1/\varphi(k)<\infty\). By the Khinchin-Levy and Borel-Bernstein theorems mentioned above, we have \(a_{K+1}\leq\varphi(K/100)\leq\varphi(\log N)\) for all but finitely many \(K\), and also \(\log s_{K+1}\ll\log K\ll\log\log N\). Theorem 3 thus yields \[\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq c/N\end{subarray}}^{N}\frac{1}{\|n\alpha\|}=2N\log N+O(N \varphi(\log N)^{1/2}+N\log\log N).\] Since \(\varphi\) is nondecreasing and \(\sum_{k=1}^{\infty}1/\varphi(k)<\infty\), we have \(k/\varphi(k)\to 0\). In particular, the \(N\log\log N\) error term in the previous formula is negligible compared to \(N\varphi(\log N)^{1/2}\), as claimed. By the Diamond-Vaaler theorem (9), \[s_{K+1}\ll\max_{1\leq k\leq K+1}a_{k}+K\log K\ll\varphi(K/100)+K\log K\ll \varphi(\log N)+\log N\log\log N.\] Theorem 4 thus yields \[\sum_{n=1}^{N}\frac{1}{n\|n\alpha\|}=(\log N)^{2}+O\left(\varphi(\log N)+\log N \log\log N\right),\] as claimed. Assume next that \(\sum_{k=1}^{\infty}1/\varphi(k)=\infty\). Fix a small \(\varepsilon>0\). 
The function \(\varphi^{*}(x)=4\varepsilon^{-2}\varphi(100x)+x\) is also positive and nondecreasing, and satisfies \(\sum_{k=1}^{\infty}1/\varphi^{*}(k)=\infty\). By the Borel-Bernstein theorem, for a.e. \(\alpha\) we have \(a_{K+1}\geq\varphi^{*}(K)\geq K\) for infinitely many \(K\). Theorem 3 gives that for infinitely many \(K\) and any \(4(ca_{K+1})^{1/2}q_{K}\leq N\leq\varepsilon^{-1}a_{K+1}^{1/2}q_{K}\), \[\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq c/N\end{subarray}}^{N}\frac{1}{\|n\alpha\|} \geq 2N\log N+q_{K+1}-O((\log s_{K+1})N)\] \[\geq 2N\log N+\varepsilon a_{K+1}^{1/2}N-O(N\log\log N)\] \[\geq 2N\log N+\frac{\varepsilon}{2}a_{K+1}^{1/2}N\] \[\geq 2N\log N+N\varphi(\log N)^{1/2}.\] In particular, the upper asymptotic density of the set \[\left\{N\in\mathbb{N}\,:\,\sum_{\begin{subarray}{c}n=1\\ \|n\alpha\|\geq c/N\end{subarray}}^{N}\frac{1}{\|n\alpha\|}\geq 2N\log N+N \varphi(\log N)^{1/2}\right\}\] is at least \(1-4c^{1/2}\varepsilon\). Since \(\varepsilon>0\) was arbitrary, the upper asymptotic density is \(1\), as claimed. Similarly, for a.e. \(\alpha\) we have \(a_{K+1}=\max_{1\leq k\leq K+1}a_{k}\geq 2\varphi(100K)+K\log K\log\log K\) for infinitely many \(K\). Theorem 4 gives that for infinitely many \(K\) and any \(q_{K}\leq N<q_{K+1}\), \[\sum_{n=1}^{N}\frac{1}{n\left\|n\alpha\right\|} \geq(\log N)^{2}+a_{K+1}-O(a_{K+1}^{1/2}\log a_{K+1}+\log N\log \log N)\] \[\geq(\log N)^{2}+\frac{1}{2}a_{K+1}\geq(\log N)^{2}+\varphi(\log N).\] In particular, the set \[\left\{N\in\mathbb{N}\,:\,\sum_{n=1}^{N}\frac{1}{n\|n\alpha\|}\geq(\log N)^{ 2}+\varphi(\log N)\right\}\] has upper asymptotic density \(1\), as claimed. ### Badly approximable irrationals In the special case of a badly approximable \(\alpha\) Theorems 3 and 4 yield the value of the sums in (1) up to an error \(O(N\log\log N)\) resp. \(O(\log N\log\log N)\). We now show how to modify the proof to remove the factor \(\log\log N\) from the error terms. Proof of Theorem 1.: We have \(\|n\alpha\|\geq c/N\) for all \(1\leq n\leq N\) with a suitably small constant \(0<c<1/2\) depending only on \(\alpha\), hence \[\sum_{n=1}^{N}\frac{1}{\|n\alpha\|}=\int_{c/N}^{\infty}\frac{1}{t^{2}}|\{1\leq n \leq N\,:\,\|n\alpha\|\leq t\}|\,\mathrm{d}t.\] The contribution of the integral on \([1/2,\infty)\) is negligible: \[\int_{1/2}^{\infty}\frac{1}{t^{2}}|\{1\leq n\leq N\,:\,\|n\alpha\|\leq t\}|\, \mathrm{d}t\leq\int_{1/2}^{\infty}\frac{N}{t^{2}}\,\mathrm{d}t=2N.\] To estimate the integral on \([c/N,1/2]\), we use the local discrepancy estimates [30, Theorem 2] \[\max_{1\leq N<q_{K+1}}\left(|\{1\leq n\leq N\,:\,\{n\alpha\}\leq t \}|-tN\right)= \sum_{\begin{subarray}{c}k=1\\ k\text{ even}\end{subarray}}^{K}\left\{q_{k}t\right\}(a_{k+1}(1-\{q_{k}t\})+ \{q_{k+1}t\}-\{q_{k-1}t\})+O(1),\] \[\min_{1\leq N<q_{K+1}}\left(|\{1\leq n\leq N\,:\,\{n\alpha\}\leq t \}|-tN\right)= -\sum_{\begin{subarray}{c}k=1\\ k\text{ odd}\end{subarray}}^{K}\left\{q_{k}t\right\}(a_{k+1}(1-\{q_{k}t\})+ \{q_{k+1}t\}-\{q_{k-1}t\})+O(1)\] with universal implied constants, where \(\{\cdot\}\) is the fractional part function. 
In the special case of a badly approximable \(\alpha\) these estimates immediately show that for all \(1\leq N<q_{K+1}\), \[||\{1\leq n\leq N\,:\,\{n\alpha\}\leq t\}|-tN|\ll\sum_{k=1}^{K}\{q_{k}t\}+1 \ll\sum_{\begin{subarray}{c}k=1\\ q_{k}\leq 1/t\end{subarray}}^{K}q_{k}t+\sum_{\begin{subarray}{c}k=1\\ q_{k}>1/t\end{subarray}}^{K}1+1\ll\log(tN)+1.\] In the last step we used the fact that \(\sum_{j=1}^{k}q_{j}\leq 3q_{k}\), and that there are \(\ll\log(B/A)+1\) convergent denominators \(q_{k}\) that fall in any given interval \([A,B]\). Since \(-\alpha\) is also badly approximable, we can similarly estimate the number of \(1\leq n\leq N\) such that \(\{-n\alpha\}\leq t\), that is, \(\{n\alpha\}\geq 1-t\). In particular, \[|\{1\leq n\leq N\,:\,\|n\alpha\|\leq t\}|=2tN+O\left(\log(tN)+1\right)\quad \text{uniformly in }0<t\leq 1/2. \tag{10}\] We mention that (10) can also be easily deduced from an explicit formula for the local discrepancy due to T. Sos [32]. Hence \[\int_{c/N}^{1/2}\frac{1}{t^{2}}|\{1\leq n\leq N\,:\,\|n\alpha\|\leq t\}|\, \mathrm{d}t=2N\log N+O(N),\] and the claim \(\sum_{n=1}^{N}1/\|n\alpha\|=2N\log N+O(N)\) follows. Summation by parts then yields \(\sum_{n=1}^{N}1/(n\|n\alpha\|)=(\log N)^{2}+O(\log N)\), as claimed. Related Diophantine sums ### Sums with fractional parts Theorems 1, 2, 3, 4 have perfect analogues with the distance from the nearest integer function \(\|\cdot\|\) replaced by the fractional part function \(\{\cdot\}\). **Theorem 5**.: _Let \(c>0\) be an arbitrary constant. For any \(K\geq 0\) and \(q_{K}\leq N<q_{K+1}\),_ \[\sum_{\begin{subarray}{c}n=1\\ \{n\alpha\}\geq c/N\end{subarray}}^{N}\frac{1}{\{n\alpha\}}=N\log N+O\left(( \mathds{1}_{\{K+1\ odd\}}a_{K+1}^{1/2}+\log s_{K+1})N\right).\] _If in addition \(4(ca_{K+1})^{1/2}q_{K}\leq N\) and \(K+1\) is odd, then_ \[\sum_{\begin{subarray}{c}n=1\\ \{n\alpha\}\geq c/N\end{subarray}}^{N}\frac{1}{\{n\alpha\}}\geq N\log N+q_{K+ 1}-O((\log s_{K+1})N).\] _The implied constants depend only on \(c\). Further, for any \(K\geq 0\) and \(q_{K}\leq N<q_{K+1}\),_ \[\sum_{n=1}^{N}\frac{1}{n\{n\alpha\}}= \frac{1}{2}(\log N)^{2}+\frac{\pi^{2}}{6}\sum_{\begin{subarray}{ c}k=1\\ k\ odd\end{subarray}}^{K}a_{k}+\mathds{1}_{\{K+1\ odd\}}a_{K+1}\sum_{1\leq j \leq N/q_{K}}\frac{1}{j^{2}}\] \[+O\Bigg{(}\sum_{\begin{subarray}{c}k=1\\ k\ odd\end{subarray}}^{K+1}a_{k}^{1/2}\log a_{k}+(\log s_{K+1})\log N\Bigg{)}\] _with a universal implied constant._ Proof.: This is a straightforward modification of the proof of Theorems 3 and 4. The parity conditions follow from the fact that \(p_{2k}/q_{2k}<\alpha<p_{2k+1}/q_{2k+1}\) for all \(k\geq 0\). The same holds for the sums \[\sum_{\begin{subarray}{c}n=1\\ 1-\{n\alpha\}\geq c/N\end{subarray}}^{N}\frac{1}{1-\{n\alpha\}}\qquad\text{ and}\qquad\sum_{n=1}^{N}\frac{1}{n(1-\{n\alpha\})}\] with "odd" replaced by "even". Theorem 2 also remains true with \(\|n\alpha\|\) replaced either by \(\{n\alpha\}/2\) or by \((1-\{n\alpha\})/2\). The proof is identical to that of Theorem 2; note that the Borel-Bernstein theorem holds without any monotonicity assumption on \(\varphi\), so the parity conditions do not cause any difficulty. A straightforward modification of the proof of Theorem 1 similarly shows that for any badly approximable \(\alpha\), \[\sum_{n=1}^{N}\frac{1}{\{n\alpha\}}=N\log N+O(N)\qquad\text{and}\qquad\sum_{n =1}^{N}\frac{1}{n\{n\alpha\}}=\frac{1}{2}(\log N)^{2}+O(\log N),\] and the same holds with \(\{n\alpha\}\) replaced by \(1-\{n\alpha\}\). 
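These badly approximable asymptotics are easy to check numerically for a concrete quadratic irrational. The sketch below is only a sanity check, not part of the proofs; for \(\alpha=\sqrt{2}\) the quantities \(\|n\alpha\|\) stay far above the double-precision rounding error in the range considered, so floating point arithmetic suffices. The printed ratios approach \(1\) as \(N\) grows, in accordance with Theorem 1 and the fractional part analogues above.

```
import numpy as np

alpha = np.sqrt(2.0)                  # badly approximable: partial quotients a_k = 2 for k >= 1
N = 10**6
n = np.arange(1, N + 1)
frac = (n * alpha) % 1.0              # {n alpha}
dist = np.minimum(frac, 1.0 - frac)   # ||n alpha||
logN = np.log(N)

print(np.sum(1.0 / dist) / (2.0 * N * logN))        # ratio tends to 1  (Theorem 1)
print(np.sum(1.0 / (n * dist)) / logN**2)           # ratio tends to 1  (Theorem 1)
print(np.sum(1.0 / frac) / (N * logN))              # ratio tends to 1  (fractional part analogue)
print(np.sum(1.0 / (n * frac)) / (0.5 * logN**2))   # ratio tends to 1  (fractional part analogue)
```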
### Shifted sums Some of our methods apply to shifted Diophantine sums as well. Here we only focus on the case of a badly approximable \(\alpha\), for which we find the precise asymptotics. All previously known results on shifted Diophantine sums have a constant factor gap between the upper and the lower estimates [4, 5]. **Theorem 6**.: _Let \(\alpha\) be a badly approximable irrational, and let \(\beta\in\mathbb{R}\). For any \(N\geq 1\),_ \[\sum_{\begin{subarray}{c}n=1\\ n\neq n_{0}\end{subarray}}^{N}\frac{1}{\|n\alpha+\beta\|}=2N\log N+O(N\log \log N)\] _with an implied constant depending only on \(\alpha\), where \(n_{0}=n_{0}(\alpha,\beta,N)\in[1,N]\) is an integer for which \(\min_{1\leq n\leq N}\|n\alpha+\beta\|=\|n_{0}\alpha+\beta\|\). If in addition \(\inf_{n\in\mathbb{N}}(n\log\log n)\|n\alpha+\beta\|>0\), then_ \[\sum_{n=1}^{N}\frac{1}{\|n\alpha+\beta\|}=2N\log N+O(N\log\log N),\qquad\sum_{ n=1}^{N}\frac{1}{n\|n\alpha+\beta\|}=(\log N)^{2}+O(\log N\log\log N)\] _with implied constants depending only on \(\alpha\) and \(\beta\)._ Proof.: For any integers \(1\leq n<n^{\prime}\leq N\), we have \(\|(n\alpha+\beta)-(n^{\prime}\alpha+\beta)\|\geq 2c/N\) with a suitably small constant \(c>0\) depending only on \(\alpha\). In particular, for all \(1\leq n\leq N\), \(n\neq n_{0}\) we have \(\|n\alpha+\beta\|\geq c/N\). One readily checks that formulas (3), (4), (5) and (7) in the proof of Theorem 3 remain true with an arbitrary shift \(\beta\), hence \[\sum_{\begin{subarray}{c}n=1\\ n\neq n_{0}\end{subarray}}^{N}\frac{1}{\|n\alpha+\beta\|}=2N\log N+O(N\log \log N),\] as claimed. Under the additional assumption \(\inf_{n\in\mathbb{N}}(n\log\log n)\|n\alpha+\beta\|>0\) the contribution of the \(n=n_{0}\) term is \(O(N\log\log N)\), thus \(\sum_{n=1}^{N}1/\|n\alpha+\beta\|=2N\log N+O(N\log\log N)\), as claimed. Summation by parts then yields \(\sum_{n=1}^{N}1/(n\|n\alpha+\beta\|)=(\log N)^{2}+O(\log N\log\log N)\), as claimed. Similarly, for any badly approximable \(\alpha\) and any \(\beta\in\mathbb{R}\), \[\sum_{\begin{subarray}{c}n=1\\ n\neq n_{0}^{\prime}\end{subarray}}^{N}\frac{1}{\{n\alpha+\beta\}}=N\log N+O(N \log\log N)\] with an implied constant depending only on \(\alpha\), where \(n_{0}^{\prime}=n_{0}^{\prime}(\alpha,\beta,N)\in[1,N]\) is an integer for which \(\min_{1\leq n\leq N}\{n\alpha+\beta\}=\{n_{0}^{\prime}\alpha+\beta\}\). If in addition \(\inf_{n\in\mathbb{N}}(n\log\log n)\{n\alpha+\beta\}>0\), then \[\sum_{n=1}^{N}\frac{1}{\{n\alpha+\beta\}}=N\log N+O(N\log\log N),\qquad\sum_{ n=1}^{N}\frac{1}{n\{n\alpha+\beta\}}=\frac{1}{2}(\log N)^{2}+O(\log N\log \log N)\] with implied constants depending only on \(\alpha\) and \(\beta\). The same holds with \(\{n\alpha+\beta\}\) replaced by \(1-\{n\alpha+\beta\}\). ### A higher dimensional generalization There are several natural higher dimensional generalizations of the sums in (1), which have been studied using Fourier analysis [1, 29], geometry of numbers [4] and lattices [19, 20]. In this setting, a vector \(\alpha\in\mathbb{R}^{d}\) is called badly approximable if \(\inf_{n\in\mathbb{Z}^{d}\backslash\{0\}}\|n\|_{\infty}^{d}\cdot\|n_{1}\alpha_{ 1}+\cdots+n_{d}\alpha_{d}\|>0\), where \(\|n\|_{\infty}=\max_{1\leq k\leq d}|n_{k}|\). 
Here we only comment on a result of Fregoli [19], who showed that for any badly approximable vector \(\alpha\in\mathbb{R}^{d}\), \[N^{d}\log N\ll\sum_{n\in[-N,N]^{d}\backslash\{0\}}\frac{1}{\|n_{1}\alpha_{1}+ \cdots+n_{d}\alpha_{d}\|}\ll N^{d}\log N.\] His main result in fact yields the precise asymptotics of the previous sum, in particular giving an alternative proof of Theorem 1 based on lattices. **Theorem 7**.: _Let \(\alpha\in\mathbb{R}^{d}\) be a badly approximable vector. For any \(N\geq 1\),_ \[\sum_{n\in[-N,N]^{d}\backslash\{0\}}\frac{1}{\|n_{1}\alpha_{1}+ \cdots+n_{d}\alpha_{d}\|} =d2^{d+1}N^{d}\log N+O(N^{d}),\] \[\sum_{n\in[-N,N]^{d}\backslash\{0\}}\frac{1}{\|n\|_{\infty}^{d} \cdot\|n_{1}\alpha_{1}+\cdots+n_{d}\alpha_{d}\|} =d^{2}2^{d}(\log N)^{2}+O(\log N)\] _with implied constants depending only on \(\alpha\)._ Proof.: The main result [19, Proposition 1.3] states that \[|\{n\in[-N,N]^{d}\backslash\{0\}\,:\,\|n_{1}\alpha_{1}+\cdots+n_{d}\alpha_{d} \|\leq t\}|=2^{d+1}tN^{d}+O\left(t^{d/(d+1)}N^{d^{2}/(d+1)}\right)\] uniformly in \(0<t\leq 1/2\). Using this instead of formula (10), the rest of the proof is identical to that of Theorem 1. Writing \[\sum_{n\in[-N,N]^{d}\backslash\{0\}}\frac{1}{\|n\|_{\infty}^{d}\cdot\|n_{1} \alpha_{1}+\cdots+n_{d}\alpha_{d}\|}=\sum_{\ell=1}^{N}\frac{1}{\ell^{d}}\sum_ {\|n\|_{\infty}=\ell}\frac{1}{\|n_{1}\alpha_{1}+\cdots+n_{d}\alpha_{d}\|},\] the second claim follows from summation by parts. ## Acknowledgments The author is supported by the Austrian Science Fund (FWF) project M 3260-N.
2308.01719
The Data Conversion Bottleneck in Analog Computing Accelerators
Most modern computing tasks have digital electronic input and output data. Due to these constraints imposed by real-world use cases of computer systems, any analog computing accelerator, whether analog electronic or optical, must perform an analog-to-digital conversion on its input data and a subsequent digital-to-analog conversion on its output data. The energy and latency costs incurred by data conversion place performance limits on analog computing accelerators. To avoid this overhead, analog hardware must replace the full functionality of traditional digital electronic computer hardware. This is not currently possible for optical computing accelerators due to limitations in gain, input-output isolation, and information storage in optical hardware. This article presents a case study that profiles 27 benchmarks for an analog optical Fourier transform and convolution accelerator which we designed and built. The case study shows that an ideal optical Fourier transform and convolution accelerator can produce an average speedup of 9.4 times and a median speedup of 1.9 times for the set of benchmarks. The optical Fourier transform and convolution accelerator only produces significant speedup for pure Fourier transform (45.3 times) and convolution (159.4 times) applications.
James T. Meech, Vasileios Tsoutsouras, Phillip Stanley-Marbell
2023-08-03T12:25:33Z
http://arxiv.org/abs/2308.01719v4
The Data Movement Bottleneck: Theoretical Shortcomings of Analog Optical Fourier Transform and Convolution Computing Accelerators ###### Abstract Modern computing tasks are constrained to having digital electronic input and output data. Due to these constraints imposed by the user, any analog computing accelerator must perform an analog-to-digital conversion on its input data and a subsequent digital-to-analog conversion on its output data. To avoid this the analog hardware would need to completely replace the full functionality of traditional digital electronic computer hardware. Using 27 empirically-measured benchmarks we estimate that an ideal optical accelerator that accelerates Fourier transforms and convolutions can produce an average speedup of \(9.4\times\), and a median speedup of \(1.9\times\) for the set of benchmarks. The maximum speedups achieved were \(45.3\times\) for a pure Fourier transform and \(159.4\times\) for a pure convolution. These results show that an optical accelerator only produces significant speedup for applications consisting exclusively of Fourier transforms and convolutions. In addition to the theoretical results we quantify the data movement bottleneck which causes a \(23.8\times\) slowdown in a prototype optical Fourier transform accelerator which we built from widely-available off-the-shelf parts. ## 1 Introduction Optical computing has been a popular research topic since the 1950s but there are still no commercially-available optical accelerators and no large-scale analysis of benchmark performance. Modern computing tasks are constrained to having digital electronic input and output data. Mass-produced digital electronic memory being the only off-the-shelf option for data storage constrains the input data storage to be digital electronic signals stored in the memory. Support for plotting and data visualization software is only available for programming languages designed to run on off-the-shelf digital electronic hardware. The traditional digital electronic computer architecture is better suited for the majority of applications than an application-specific analog computing accelerator and therefore substituting them would be unproductive. Academic and industrial researchers have been working on optical computing accelerators for 70 years [1, 60]. Despite this there are no commercially-available optical computing accelerators. The physics of light lends itself to fast and efficient Fourier transform and convolution operations [37, 32]: Optical accelerators use diffraction, the interference of Huygens wavelets of light to perform Fourier transform operations [27]. This is in contrast to digital electronic processors which break the high-level Fourier transform down into individual additions, multiplications, and other component operations, compute the results and then recombine the results to calculate the Fourier transform [11]. Having the light perform the computation is faster and more efficient than using digital electronics if we do not consider the time required for data movement [32]. Despite these benefits, researchers in academic institutions and industry have struggled for decades to implement practically-useful optical accelerators [10]. Startup companies repeatedly pivot to applying optical accelerators to new problems. They do this because the optical accelerator does not provide a large enough improvement in a metric that users care about for the target application [12, 60, 40, 39, 34]. 
As of today, there is still no commercially-available computer architecture that includes an optical accelerator, despite the growing popularity of optical interconnects [21, 44]. Therefore, any analog computing accelerator must perform an analog to digital conversion on its input data and a subsequent digital to analog conversion on its output data because of these constraints imposed by the user. The only alternative to this situation would be to develop an entire software stack to allow the analog hardware to perform all the functions of the traditional digital electronic computer hardware. Modern digital electronic computers already spend 62.7 % of their energy moving data, adding computing accelerators which cannot accelerate the entire application, and therefore exacerbating the data movement bottleneck will only make this worse [7]. Power delivery requirements trends are placing even more constraints on available pins and memory bandwidth, making the problem even worse [49]. Section 4 shows that the best possible case theoretical speedup for optical accelerators is still orders of magnitude smaller than that of other popular accelerator architectures. This will continue to be the case even after we overcome the data movement bottleneck. Section 5 shows that the lack of speedup causing the lack of adoption of optical accelerators is caused by a bottleneck when moving data from the inevitable digital electronic processor used for interfacing into the analog optical accelerator. ### Contributions This article presents the following contributions: 1. A benchmarking study of the effects of an optical accelerator on 27 applications that contain Fourier transform and convolution operations. (Section 4) 2. A prototype optical accelerator that illustrates the data movement bottleneck in a hardware prototype. (Section 5) ### How Does an Optical Accelerator Work? Figure 1 shows the typical 4\(f\) optical setup for Fourier transform and convolution operations. Let \(\mathcal{F}\) be the Fourier transform operator and \(\mathcal{F}^{-1}\) be the inverse Fourier transform operator. Let \(A\) and \(B\) be two-dimensional arrays and \(\mathcal{F}^{-1}C\) be the convolution of \(A\) and \(B\) where \[C=\mathcal{F}(A\otimes B)=\mathcal{F}(A)\cdot\mathcal{F}(B). \tag{1}\] Equation 1 shows that an analog optical accelerator can perform the convolution operation by taking the Fourier transform of both input datasets, calculating the dot product of the results, and finally inverse Fourier transforming the result of the product. The optical setup cannot perform the final inverse Fourier transform step. Instead the digital electronic processor interfacing with the optical setup performs this step. Figure 1 shows how the lenses in the setup Fourier transform the input data programmed into the aperture. The programmable aperture encodes information into the light at each of its pixels by manipulating the phase of the light between 0 and 2\(\pi\) according to the programmed digital value for that particular pixel. An analog optical accelerator that uses a camera to transduce the output light pattern to electronic signals can only calculate the magnitude component of the right hand side of Equation 1 and then the computer hardware must read the detector pixels and use a digital inverse Fourier transform to calculate the final result of Equation 1. 
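A short NumPy sketch makes the role of Equation 1 and of the camera readout explicit. It is our own illustration, not code for the prototype: it evaluates the right hand side of Equation 1 digitally, then shows that keeping only the magnitude of \(C\), as a camera detector does, discards the phase needed to recover the exact convolution.

```
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((64, 64))
B = rng.random((64, 64))

# Equation 1 evaluated digitally: C = F(A) . F(B), and the (circular) convolution
# of A and B is the inverse Fourier transform of C.
C = np.fft.fft2(A) * np.fft.fft2(B)
conv_exact = np.fft.ifft2(C).real

# A camera in the Fourier plane records only the magnitude of C; inverse
# transforming the magnitude alone loses the phase of C.
conv_from_magnitude = np.fft.ifft2(np.abs(C)).real
print(np.max(np.abs(conv_exact - conv_from_magnitude)))   # the two results differ
```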
The light can only compute the Fourier transform when the condition (that \(D\gg a\) and that \(D\gg a^{2}/\lambda\), where \(D\) is the distance between the programmable aperture and the camera detector, \(a\) is the width of the programmable aperture, and \(\lambda\) is the wavelength of the light [22]) for Fraunhofer diffraction is met [22]. ## 2 Motivation for Analog Fourier Transform and Convolution Optical Computing Accelerators Figure 2 shows the changes required at each abstraction layer of a software and hardware stack required to use the physics of light to accelerate a user-specified high-level computational problem (the Fourier transform). A computer systems architect has to make changes at every abstraction level in the software and hardware stack to take advantage of the physics of light to perform computation. Required changes include a new software application programming interface to load data into the accelerator, processor architecture changes to allow store word and load word instructions to access the optical accelerator and digital electronic processor memory, and the close integration of optical hardware with digital electronic hardware that uses incompatible process technologies. This is just as generations of engineers and scientists designed the modern digital electronic computer stack to realize the full potential of semiconductor transistors in digital electronic processors. The first row of Figure 2 shows how we move from the abstract idea of the Fourier transform all the way down the abstraction layers to the digital electronic hardware that we wish to use to perform the computation. Row two of Figure 2 requires changes at every level of the software and hardware stack. If we tried to use the physics of light to replace panel five of row one, the accelerator would not be able to use the Fourier transform properties of light and we would not see performance increases. The second row of Figure 2 shows the missing implementations that have made such optical accelerators unnecessarily inefficient due to a lack of computer systems knowledge in the optical computing community and vice-versa. The optical accelerator takes advantage of the physics of light to skip all of the component multiplication, division, and addition instructions shown in Figure 2, row one, panel three. Instead we load the data into the optical accelerator and the physics of light performs all of the computation for the Fourier transform in one analog step. The optical accelerator performs the transform using physics shown in Figure 2, row two, panel five. The optical field at point \(P\) is the superposition of the optical field at each elemental area \(dS\) of the total area, \(S\), of the aperture. Every single point in the optical accelerator output contains information from every single point in the optical accelerator aperture input. Each point in the wavefront at the aperture produces Huygens wavelets and the optical field beyond the aperture is the superposition of all of the wavelets. The parallels to the equation in panel one of both rows of Figure 2 are that the sum symbols use the value of each pixel in the input once per output pixel to compute the pixel-by-pixel result of the Fourier transform. 
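The correspondence between the optical superposition and the discrete Fourier sum can be made concrete in a few lines. The sketch below is a one-dimensional illustration of this point, not code from the article: it computes the DFT as an explicit double sum, so that every output sample accumulates a contribution from every input sample, just as every detector pixel accumulates Huygens wavelets from every aperture pixel.

```
import numpy as np

def dft_as_superposition(x):
    """Naive DFT: each output sample k is a superposition of contributions from
    every input sample n -- the digital analogue of summing Huygens wavelets
    from every aperture pixel at each point of the detector."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

x = np.random.default_rng(1).random(256)
print(np.allclose(dft_as_superposition(x), np.fft.fft(x)))   # True
```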
This skipping of steps provides an opportunity for the acceleration of Fourier transform and convolution operations provided that the cost of moving data into and out of the optical accelerator does not outweigh the speedup we gain by using the Fourier transform and convolution properties of light. Unfortunately, Section 5 shows that the cost of moving data into and out of the optical accelerator will always be the bottleneck in optical Fourier transform accelerator designs. Section 4 shows that even the best case speedup we can gain by using the Fourier transform and convolution properties of light is often small. ## 3 Benchmarking Methodology We profiled 27 benchmark applications to estimate the maximum theoretical speedup that an optical Fourier transform and Figure 1: The 4\(f\) setup for optical convolution where \(A\) and \(B\) are programmable apertures and \(C\) is a camera detector. Each optical component is spaced a distance \(f\) from the previous one where \(f\) is the focal length of the convex lenses [1]. convolution could provide for each application. We provide a short description of each benchmark that we profiled on a 2.8-GHz Intel Core i7 CPU with 16 GB of 2133-MHz LPDDR3 RAM. All benchmarks are Python 3.8.9 code applications, not developed by the authors, which use well optimized Python libraries, and are available online. We used cProfile to profile each benchmark using Python 3.8.9 on MacOS Monterey Version 12.0.1. We profiled each benchmark assuming that the time taken by functions with Fourier transform or convolution related names was negligible. We used the results to estimate the speedup gained by offloading the optical Fourier transform and convolution functions to an accelerator that completes the operation in negligible time. This assumption will provide results showing the best case speedup for an optical Fourier transform and convolution accelerator. **Convolution (Application 0):** The SciPy implementation of convolution run over pre-generated \(100\times 100\) NumPy arrays. **Fourier Transform (Application 1):** The NumPy implementation of the fast Fourier transform run over a pre-generated \(5000\times 5000\) NumPy array. **Wiener Filter (Application 2):** The SciPy implementation of the Wiener Filter run over a pre-generated \(4000\times 4000\) NumPy array. **Self-healing Airy Beam (Application 3):** The LightPipes implementation of a self healing Airy diffraction simulation. Airy beams have applications including laser micro machining and particle and cell micro manipulation [16]. **Young's Experiment (Application 4):** The LightPipes implementation of a simulation of Young's double slit experiment. In the experiment a monochromatic plane wave illuminates two narrow slits which produces a diffraction pattern that illustrates the wave properties of light on a screen placed in the far field. The diffraction pattern is the Fourier transform of the slits function. It is possible to construct arbitrary far-field diffraction patterns by constructing the corresponding slit. **From Poisson Spot to a Non-Diffractive Bessel Beam (Application 5):** The LightPipes implementation of a simulation showing the proportionality of the width of a Bessel beam to the distance \(z\) from the Huygens light point source. Bessel beams have applications in encryption, optical atom trapping and, optical tweezers [33]. **Generation of a Bessel Beam with a Lens and an Annular Slit (Application 6):** The LightPipes implementation of a simulation of a Bessel beam. 
Bessel beams have applications in encryption, optical trapping of atoms, and optical tweezers [33]. **Generation of a Bessel Beam with an Axicon (Application 7):** Generating a Bessel beam with an annular slit is inefficient because most of the laser beam is unused. This benchmark is the LightPipes implementation of generating a Bessel beam with an axicon lens that uses more of the total optical beam power than the annular slit method and is therefore more efficient [8]. **Multi- Holes and Slits (Application 8):** The LightPipes implementation of a simulation of an extension of Young's experiment where multiple slits or holes are present. Changing the spacing and geometry of the holes would allow the user to create apertures that create arbitrary diffraction patterns and then simulate the resulting diffraction pattern. A multi-slit diffraction grating has applications as a spectrometer [30]. Figure 2: The steps required to perform a Fourier transform on data using an optical accelerator instead of a digital electronic processor diverge at the first abstraction level below the mathematical equation for the Fourier transform. The optical accelerator requires changes at every level of the software and hardware stack to use Maxwell's equations for electromagnetic waves to perform the Fourier transform. This figure captures the idea that inspired 70 years of research into optical accelerators [22, 18]. The lumped circuit abstraction shown in panel one is the assumption that the resistance, capacitance, and inductance of transistors are confined to idealized circuit components. This allows the designer to ignore the effects of electromagnetic waves. In contrast, panel two directly uses the physics of electromagnetic waves to perform the computation. **Diffraction from a Circular Aperture (Application 9):** The LightPipes implementation of a simulation of an extension of Young's slit experiment where the aperture is circular instead of a slit. Diffraction through circular holes is used for simulating masks in epitaxy for semiconductors [24]. **Shack Hartmann Sensor (Application 10):** The LightPipes implementation of a Shack Hartmann sensor. The Shack Hartmann sensor is an array of lenses used to measure the phase distribution of a wavefront. The US Air Force used them to improve the images of satellites taken from Earth [41]. **Spot of Poisson (Application 11):** The LightPipes implementation of a simulation of a laser beam illuminating a disk. The result of the experiment is a bright spot of light directly behind the round disk. Poisson predicted the existence of the spot by applying the wave theory of light; later, Arago performed the experiment and observed the spot. This was one of the first real-world demonstrations of the wave-like nature of light. The Arago spot has applications in the design of telescopes [9]. **Fresnel Zone Plate (Application 12):** The LightPipes implementation of the simulation of a Fresnel zone plate. The Fresnel zone plate acts as a focusing lens for a plane wave. The Fresnel zone plate has applications in exoplanet detection [29]. **Unstable Laser Resonator (Application 13):** The LightPipes implementation of the simulation of an unstable laser resonator. Unstable laser resonators build energy to create laser beams [48]. **Interference of a Doughnut Laser Beam: Collinear Beams (Application 14):** The LightPipes implementation of the simulation of interference of a doughnut laser with collinear beams.
**Michelson Interferometer (Application 15):** The LightPipes implementation of a Michelson interferometer. The Michelson interferometer has applications in spectrometers, measuring the diameter of stars, and detecting gravitational waves [35]. **Phase Recovery (Application 16):** The LightPipes implementation of the Gerchberg-Saxton phase recovery algorithm. Phase recovery is the act of recovering the phase information of the electric field that produced a diffraction pattern using only the light intensity of the diffraction pattern. It iteratively performs forward and backward Fourier transforms and applies the constraints of the target intensity image until the algorithm converges to the phase of the electric field that produced the original image [19]. Phase recovery has applications in holography, electron microscopy, X-ray crystallography, and characterizing optical systems such as telescopes. **Transformation of a Fundamental Gauss Mode into a Doughnut Mode with a Spiral Phase Plate (Application 17):** The LightPipes implementation of a simulation that uses a spiral phase plate to produce a doughnut-shaped beam. Doughnut-shaped beams have applications in super-resolving microscopy, optical tweezers, and cell capture [59]. **Transformation of High Order Gauss Modes From Hermite to Laguerre (Application 18):** The LightPipes implementation of a simulation that transforms Hermite Gauss into Laguerre Gauss laser modes using two cylindrical lenses. Laguerre Gauss laser modes have applications in optical communication, micro manipulation and quantum information [5]. **Interference of a Doughnut Laser Beam: Tilted Beams (Application 19):** The LightPipes implementation of the simulation of interference of a doughnut laser with tilted beams. **Double-Slit Experiment (Application 20):** The Prysm implementation of the simulation of Young's Experiment. The speedup value is similar to the LightPipes implementation. **Your First Diffraction Model (Application 21):** The Prysm implementation of diffraction through a circular aperture. The speedup value is similar to the LightPipes implementation. **Image Simulation (Application 22):** The Prysm implementation of an end-to-end image simulation of a Siemens star including all optical and electrical noise. **Convolutional Neural Network Inference (Application 23):** A PyTorch tutorial implementation of inference over a convolutional neural network for classifying images from the CIFAR10 dataset. We benchmarked the training of the network and running inference over the network separately as they have significantly different potential speedup. Convolutional neural networks have a wide range of applications [6]. **Convolutional Neural Network Training (Application 24):** A PyTorch tutorial implementation of training a convolutional neural network for classifying images from the CIFAR10 dataset. The speedup achieved for the training is less than half of the speedup achieved for the inference. **Audio Resampling Transforms (Application 25):** A PyTorch tutorial implementation of audio resampling using convolution. These transforms are used to resample audio before passing it through larger neural networks for training and inference. **Pre-Trained Model Wave2Vec2 Speech Recognition Inference (Application 26):** A PyTorch implementation of speech recognition inference with the pre-trained Wave2Vec2 model.
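The profiling flow referenced in the methodology above can be summarised in a short sketch (our illustration, not the authors' script): cProfile records per-function times, the time spent in functions whose names mention Fourier transforms or convolutions is treated as offloadable, and the remainder bounds the achievable end-to-end speedup, which is the Amdahl's-law calculation applied in the next section. The `workload` function and the name-matching rule are stand-ins for the real benchmarks.

```python
import cProfile
import pstats
import numpy as np

def workload():
    # Stand-in for one benchmark application: some FFT work plus other processing.
    x = np.random.rand(1024, 1024)
    for _ in range(10):
        spectrum = np.fft.fft2(x)                       # offloadable Fourier-transform work
        x = np.abs(spectrum) / np.abs(spectrum).max()   # "fixed" work that stays on the CPU

def is_offloadable(name):
    return "fft" in name.lower() or "convolve" in name.lower()

profiler = cProfile.Profile()
profiler.runcall(workload)
stats = pstats.Stats(profiler)

# Cumulative time of FFT/convolution entry points (skip helpers called by other FFT
# functions so that nested calls are not counted twice).
offloadable = 0.0
for (filename, lineno, name), (cc, nc, tt, ct, callers) in stats.stats.items():
    if is_offloadable(name) and not any(is_offloadable(caller[2]) for caller in callers):
        offloadable += ct

fraction = offloadable / stats.total_tt
best_case_speedup = 1.0 / (1.0 - fraction)              # Amdahl's law with infinite acceleration
print(f"FFT/conv fraction: {fraction:.1%}, best-case speedup: {best_case_speedup:.2f}x")
```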
## 4 Benchmarking Results: Amdahl's Law We benchmarked the applications described in Section 3 using Python and cProfile and applied Amdahl's law to the results [20, 2]. We benchmarked each application one hundred times to take into account any variation. Let \(P\) be the degree of acceleration a computer system applies to an application, \(f_{\text{fixed}}\) be the portion of the program we cannot accelerate, and \(f_{\text{accelerate}}\) be the portion of the program that we can accelerate; then Amdahl's law states that the speedup \(S\) we can achieve is \[S=\frac{1}{f_{\text{fixed}}+\frac{f_{\text{accelerate}}}{P}}. \tag{2}\] If we are able to use an optical accelerator to accelerate \(f_{\text{accelerate}}\) to the point that \(\frac{f_{\text{accelerate}}}{P}\ll f_{\text{fixed}}\) then \[S\approx\frac{1}{f_{\text{fixed}}}. \tag{3}\] \(S\) is the best case speedup we can achieve by accelerating the Fourier transform and convolution operations in a program. Figure 3 shows the potential speedup that we could get if we accelerated all Fourier transform and convolution operations in the benchmarks to the point where they were negligible. In practice the speedup achieved by a real optical accelerator would be smaller because all optical accelerators require time for a digital electronic processor to write to the programmable aperture and read from the camera detector. Our benchmarking study has the unrealistic assumption that this writing and reading takes zero time. Table 1 includes additional details and the names of the benchmarks included in Figure 3. ## 5 Analysis of Bottlenecks in an Off-the-Shelf Hardware Prototype Optical Accelerator Figure 4(a) shows a block diagram of the typical interface between a digital electronic processor and an optical accelerator built using off-the-shelf optical hardware modules. Typically these off-the-shelf optical hardware modules use a communication interface to allow a digital electronic processor to control the optical module as a peripheral input / output device. Figure 4(a) shows the local memory and digital to analog converter inside a spatial light modulator that allow an external digital electronic processor to program the light modulating pixels over the communications interface. The camera provides a similar interface to allow the digital electronic processor to read values from the camera pixels. It uses an analog to digital converter to convert the analog signal from the camera detector pixels to a digital signal for the processor to read from the local device memory over the communication interface.
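To make the interface just described concrete, here is a hedged host-side sketch (ours; `slm.write` and `camera.read` are hypothetical stand-ins for vendor driver calls, not a real API) of one accelerated transform, instrumented the way the experiments in this section time it. The optical propagation contributes essentially nothing, so the two transfers account for almost all of the measured time.

```python
import time

def optical_fft_once(frame, slm, camera):
    """One accelerated Fourier transform, timed in its parts.

    `slm` and `camera` are placeholders for driver objects exposing the peripheral
    interfaces described above (local memory plus DAC/ADC); the method names are
    illustrative, not a real API.
    """
    t0 = time.perf_counter()
    slm.write(frame)              # data movement: host memory -> SLM pixel memory (via DAC)
    t1 = time.perf_counter()
    intensity = camera.read()     # data movement: camera pixels (via ADC) -> host memory
    t2 = time.perf_counter()
    # The light propagates through the optics between the two calls in effectively
    # zero time, so (t1 - t0) + (t2 - t1) is essentially the whole cost of the call.
    return intensity, {"write_s": t1 - t0, "read_s": t2 - t1}
```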
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Application & FFT/Conv & Total & FFT/Conv & End-to-End & Lines \\ & Time (s) & Time (s) & Fraction (\%) & Speed Up (\(\times\)) & Lines \\ \hline Convolution [45] & 0.158 & 0.159 & 99.37 & 159.41 & 1 \\ \hline Fourier Transform [38] & 0.912 & 0.933 & 97.79 & 45.32 & 1 \\ \hline Wiener Filter [46] & 1.164 & 1.724 & 67.51 & 3.08 & 1 \\ \hline Self-healing Airy beam [57] & 51.718 & 81.778 & 63.24 & 2.72 & 18 \\ \hline Young’s Experiment [57] & 0.0671 & 0.109 & 61.70 & 2.61 & 12 \\ \hline From Poisson spot to a non-diffractive Bessel beam [57] & 2.817 & 4.593 & 61.33 & 2.59 & 20 \\ \hline Generation of a Bessel beam with a lens and an annular slit [57] & 3.146 & 5.173 & 60.82 & 2.55 & 22 \\ \hline Generation of a Bessel beam with an axicon [57] & 2.839 & 4.677 & 60.71 & 2.55 & 18 \\ \hline Multi- holes and slits [57] & 0.200 & 0.328 & 60.70 & 2.55 & 21 \\ \hline Diffraction from a circular aperture [57] & 2.193 & 3.615 & 60.65 & 2.54 & 14 \\ \hline Shack Hartmann Sensor [57] & 2.142 & 4.051 & 52.88 & 2.12 & 25 \\ \hline Spot of Poisson [57] & 1.930 & 3.983 & 48.44 & 1.94 & 12 \\ \hline Fresnel zone plate [57] & 0.665 & 1.405 & 47.34 & 1.90 & 24 \\ \hline Unstable laser resonator [57] & 0.0645 & 0.163 & 39.43 & 1.65 & 41 \\ \hline Interference of a doughnut laser beam: collinear beams [57] & 0.0604 & 0.198 & 30.54 & 1.44 & 16 \\ \hline Michelson interferometer [57] & 0.0139 & 0.0472 & 29.45 & 1.42 & 25 \\ \hline Phase Recovery [57] & 0.296 & 1.580 & 18.75 & 1.23 & 16 \\ \hline Transformation into a doughnut mode with a spiral phase plate [57] & 0.296 & 1.230 & 18.75 & 1.23 & 13 \\ \hline Transformation of high order & 0.0386 & 0.211 & 18.29 & 1.22 & 42 \\ \hline Reverse of a doughnut laser beam: tilted beams [57] & 0.00506 & 0.0692 & 7.31 & 1.08 & 15 \\ \hline Double-Slit Experiment [15] & 0.0519 & 0.0929 & 55.91 & 2.27 & 12 \\ \hline Your First Diffraction Model [15] & 0.0787 & 0.164 & 47.80 & 1.92 & 20 \\ \hline Image Simulation [15] & 1.882 & 17.195 & 10.95 & 1.12 & 45 \\ \hline Convolutional Neural Network Inference [43] & 0.263 & 0.416 & 63.17 & 2.71 & 1 \\ \hline Convolutional Neural Network Training [43] & 8.428 & 78.936 & 10.68 & 1.12 & 16 \\ \hline Audio Resampling Transforms [42] & 0.0513 & 0.135 & 37.94 & 1.61 & 22 \\ \hline Pre-Trained Model Wave2Vec2 Speech Recognition Inference [4] & 0.179 & 0.519 & 34.53 & 1.53 & 4 \\ \hline \end{tabular} \end{table} Table 1: The maximum end-to-end speedup achievable by an optical accelerator for a range of 27 different benchmark applications according to Amdahl’s law. We ran each benchmark one hundred times and calculated the average for each column in the table. The average speedup is \(9.39\times\) which is close to the \(10\times\) requirement to make the accelerator worthwhile (Section 6). The result is heavily skewed by the high speedup values of 159.41\(\times\) and 45.32\(\times\) for convolutions and Fourier transforms. The median speedup is \(1.94\times\) which is less than one fifth of the \(10\times\) requirement. Spatial light modulators and digital micro-mirror devices are essentially a set of memory locations spatially arranged in large two-dimensional arrays. Moving data from a processor into these memory locations and back costs time and energy. This time and energy spent moving data outweighs the speed and efficiency benefits gained by using the properties of light to perform computation. 
Figures 4(c) and 4(b) show our prototype implementation of such a Fourier transform accelerator. We included the lenses, polarizers, and mechanical variable aperture to improve the resolution of the hardware prototype but they are not a fundamental requirement for performing Fourier transforms and convolutions using light. We conduct experiments to show the data-movement bottleneck using our hardware prototype. Figure 4: The optical accelerator architecture diagram and the hardware prototype that we built to analyze the data-movement bottleneck. The Raspberry Pi 4 is an interface that we remotely connect to using secure shell from a workstation computer and does not perform any computation other than programming the spatial light modulator and reading the camera. Figure 3: The potential end-to-end speedup for each application in Table 1 according to Amdahl's law. The speedups are small unless almost 100 % of end-to-end benchmark execution time is spent on Fourier transforms or convolutions. The accelerator must speed up close to 100% of the application code to produce a large end-to-end speedup. All the box and whisker plots which show the run-to-run variation in the benchmark applications show small variation. Box plot definitions: center line, median; box limits, upper and lower quartiles; whiskers, 1.5x interquartile range; points, outliers. ### Execution Time Experiment Methodology We benchmark Python code to perform a \(1024\times 768\) pixel two-dimensional Fourier transform against the optical hardware setup performing the same calculation. The hardware setup is an end-to-end system controlled by a Raspberry Pi 4 that runs Python scripts to activate the optical hardware. For this reason, profiling the Python code fully captures the time spent in the digital electronic processor, data movement, and analog optical accelerator parts of the Python program. ### Execution Time Experiment Results Figure 5 shows that the off-the-shelf hardware prototype optical accelerator is \(23.8\times\) slower than a software fast Fourier transform of the same dimensions. We used the same Raspberry Pi 4 (with no effort to optimize the code) both to benchmark the software fast Fourier transform on its own and to control the optical components performing the hardware Fourier transform. As the Fourier transform computation happens at the speed of light, the only fixed computation that prevents infinite speedup (from Amdahl's law) is the time required to produce the input data, load it into the spatial light modulator, and then read out the output from the camera detector. The fast Fourier transform has the second greatest theoretical speedup using an optical accelerator for all of the applications in Table 1. Therefore, none of the applications in Table 1 will see a speedup when running on our prototype optical accelerator built from off-the-shelf parts. Figure 5 shows that the majority of the computation time in the prototype optical accelerator is spent on data movement (programming the spatial light modulator and imaging the diffraction pattern using a camera). Boroumand et al. [7] state that \(62.7\,\%\) of energy is spent on moving data in modern computing systems. In our optical computing accelerator prototype, \(99.599\,\%\) of time is spent moving data between the digital electronic processor and the analog optical accelerator. Cameras which can take images significantly faster than the camera we used in our experiment exist [14].
Nevertheless, the Fourier transform computation happens at the speed of light, so the data movement bottleneck will always dominate the computation time required by an optical Fourier transform and convolution computing accelerator. ## 6 Bespoke Hardware Accelerators Require \(10\times\) Theoretical Improvement Designing and building a computing accelerator is time-consuming and therefore accelerators should provide at least \(10\times\) improvement of some metric for a large family of applications to be a commercial success [54]. The act of manufacturing the physical hardware for an accelerator is costly, time-consuming, and requires design compromises. Therefore, the theoretical improvements produced by the accelerator must be large enough to take into account compromises in the design of the accelerator which will reduce the improvements from their theoretical maximum. Table 1 shows that an ideal (Fourier transform and convolution operations cost zero time) optical accelerator can only provide \(\geq 10\times\) speedup for two of the benchmarked applications (pure convolutions and pure Fourier transforms). These results show that the accelerator is only worthwhile if the system will run applications that consist exclusively of Fourier transforms and convolutions. The only known system where the entire application runs on the optical accelerator is processing synthetic aperture radar images and then exposing camera film using the light output by the optical system [17]. Popular accelerators in the literature report average speedups of \(60\times\) for convolutional neural networks on GPUs [31], \(1.6\times 10^{9}\times\) for a quantum accelerator [3], and \(2076\times\) fewer instructions executed compared to a Monte Carlo simulation for Laplace, an uncertainty quantification accelerator [56]. These improvements are orders of magnitude larger than those theoretically possible with an optical accelerator. Therefore, developing an optical Fourier transform and convolution accelerator is not worthwhile unless we are targeting applications that consist solely of Fourier transforms and convolutions with minimal time (\(\leq 10\,\%\)) spent performing other operations. Otherwise, by Amdahl's law the acceleration is limited to \(\leq 10\times\), the threshold below which it is not worth investing the time and capital in building an accelerator. ## 7 Related Work Table 2 shows the optical Fourier transform and convolution accelerator implementations. All of the optical accelerator designs in Table 2 use a digital electronic processor and slow peripheral communications to interface with optical hardware. The majority of the optical accelerators use slow frame rate spatial light modulators (Hz) [47, 55, 23] and digital micro-mirror devices (kHz) [13, 53, 52, 51] as the programmable aperture. We omit these references from Table 2 because they refer to datasheets of individual hardware components that one could use to build an optical accelerator instead of publications that present an optical accelerator system. Figure 5: The hardware Fourier transform (left) is \(23.8\times\) slower than the NumPy software fast Fourier transform (right). The Fourier transform operation in the hardware setup takes negligible time compared to moving data into and out of the optical components. The total time required to run the software and hardware Fourier transform is \(0.219\,\mathrm{s}\) and \(5.209\,\mathrm{s}\) respectively.
The outlier is a commercial prototype that uses thermally-modulated Mach-Zehnder interferometers for increased switching speed [12, 50]. The output device column of Table 2 shows that the majority of the optical accelerators use a camera as their optical output device. The slow frame rates (Hz to kHz) of the hardware implementations shown in Table 2 are due to the slow communications interfaces in the optical hardware. The accuracy column of Table 2 shows that the accelerators produce accuracies comparable to digital electronic processors when applied to machine learning classification tasks. The optical accelerators in Table 2 provide speedups similar to our results from Table 1 but evaluate the speedup for a small number of customized applications and neglect to include the data movement time between the optical and electronic hardware in their speedup calculations. The work of Miscuglio et al. [36] includes the data movement in the speedup calculation and shows similar results to Section 5.2 where we show that most execution time is spent using the digital electronic processor to move data into and out of the optical devices. Mach-Zehnder interferometers are orders of magnitude faster than spatial light modulators and digital micro-mirror devices. Computer architectures which use Mach-Zehnder interferometers to perform computation using light will still suffer from a data movement bottleneck. ## Conclusion Modern computing tasks are constrained to having digital electronic input and output data. Because mass-produced electronic memory is the only off-the-shelf option for users, the input data must be stored as digital electronic signals in that memory. Support for plotting and data visualization software is only available for programming languages designed to run on off-the-shelf digital electronic hardware. Therefore, any analog computing accelerator must perform a digital to analog conversion on its input data and a subsequent analog to digital conversion on its output data because of these constraints imposed by the user. The only alternative to this situation would be to develop an entire software stack to allow the analog hardware to perform all the functions of the traditional digital electronic computer hardware. The traditional digital electronic computer architecture is better suited for the majority of applications than an application-specific analog computing accelerator and therefore substituting them in this way would be unproductive. We performed the first large-scale benchmarking of applications that rely on Fourier transform and convolution operations and found that the median end-to-end speedup achievable by an optical accelerator for 27 benchmark applications is 1.94\(\times\) according to Amdahl's law. This median speedup is small compared to the speedup achievable by other popular types of accelerators. The average speedup is 9.39\(\times\), which is close to the 10\(\times\) requirement to make the accelerator worthwhile (Section 6). The average is heavily skewed by the high speedup values of 159.41\(\times\) and 45.32\(\times\) for convolutions and Fourier transforms. Our benchmarking study assumed that the data movement bottleneck did not exist, so our results are for the theoretical best case. For optical accelerators to be able to produce a worthwhile speedup we must overcome the data movement bottleneck. Even once we have overcome the bottleneck, most applications will only see a modest, less than 10\(\times\), speedup.
Our results show that it is not worth building an optical accelerator unless it will be applied to applications that are \(\geq\) 90% Fourier transform or convolution. We profiled a Python Fourier transform program on a prototype optical Fourier transform accelerator and found it to be 23.8\(\times\) slower than computing the same Fourier transform in software on a Raspberry Pi 4. Our results show that the cause of the slowdown is moving data into and out of the optical accelerator. Even with faster programmable apertures and camera detectors, the data movement bottleneck will continue to be a show-stopping problem for optical accelerators.
2309.01430
DAT++: Spatially Dynamic Vision Transformer with Deformable Attention
Transformers have shown superior performance on various vision tasks. Their large receptive field endows Transformer models with higher representation power than their CNN counterparts. Nevertheless, simply enlarging the receptive field also raises several concerns. On the one hand, using dense attention in ViT leads to excessive memory and computational cost, and features can be influenced by irrelevant parts that are beyond the regions of interest. On the other hand, the handcrafted attention adopted in PVT or Swin Transformer is data-agnostic and may limit the ability to model long-range relations. To solve this dilemma, we propose a novel deformable multi-head attention module, where the positions of key and value pairs in self-attention are adaptively allocated in a data-dependent way. This flexible scheme enables the proposed deformable attention to dynamically focus on relevant regions while maintaining the representation power of global attention. On this basis, we present Deformable Attention Transformer (DAT), a general vision backbone that is efficient and effective for visual recognition. We further build an enhanced version, DAT++. Extensive experiments show that our DAT++ achieves state-of-the-art results on various visual recognition benchmarks, with 85.9% ImageNet accuracy, 54.5 and 47.0 MS-COCO instance segmentation mAP, and 51.5 ADE20K semantic segmentation mIoU.
Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, Gao Huang
2023-09-04T08:26:47Z
http://arxiv.org/abs/2309.01430v1
# DAT++: Spatially Dynamic Vision Transformer ###### Abstract Transformers have shown superior performance on various vision tasks. Their large receptive field endows Transformer models with higher representation power than their CNN counterparts. Nevertheless, simply enlarging the receptive field also raises several concerns. On the one hand, using dense attention in ViT leads to excessive memory and computational cost, and features can be influenced by irrelevant parts that are beyond the regions of interest. On the other hand, the handcrafted attention adopted in PVT or Swin Transformer is data-agnostic and may limit the ability to model long-range relations. To solve this dilemma, we propose a novel deformable multi-head attention module, where the positions of key and value pairs in self-attention are adaptively allocated in a data-dependent way. This flexible scheme enables the proposed deformable attention to dynamically focus on relevant regions while maintaining the representation power of global attention. On this basis, we present **Deformable Attention Transformer (DAT)**, a general vision backbone that is efficient and effective for visual recognition. We further build an enhanced version, DAT++. Extensive experiments show that our DAT++ achieves state-of-the-art results on various visual recognition benchmarks, with 85.9% ImageNet accuracy, 54.5 and 47.0 MS-COCO instance segmentation mAP, and 51.5 ADE20K semantic segmentation mIoU. Vision Transformer, deformable attention, dynamic neural networks. ## 1 Introduction Transformer [1] was originally introduced to solve natural language processing tasks. It has recently shown great potential in the field of computer vision [2, 3, 4]. The pioneering work, Vision Transformer [2] (ViT), stacks multiple Transformer blocks to process non-overlapping image patch (_i.e._ visual token) sequences, leading to a family of convolution-free models for visual recognition. Compared to their CNN counterparts [5, 6, 7, 8, 9, 10, 11, 12], ViTs have larger receptive fields, excel at modeling long-range dependencies, and are proven to achieve superior performance in the regime of large amounts of training data and model parameters. However, superfluous attention in visual recognition is a double-edged sword and has multiple drawbacks. Specifically, the excessive number of keys to attend per query patch yields high computational cost and slow convergence, and also increases the risk of overfitting. In order to avoid excessive attention computation, existing works [3, 4, 14, 15, 16, 17, 18, 19, 20, 21, 22] have leveraged carefully designed efficient sparse attention patterns to reduce computational complexity. As two representative approaches among them, Swin Transformer [3] adopts window-based local attention to restrict attention in local windows and shifts windows layer-wise to interact between windows, while Pyramid Vision Transformer (PVT) [4] spatially downsamples the key and value feature maps to save computation by attending queries to a coarsened set of keys. Although effective, hand-crafted attention patterns are data-agnostic and may not be optimal. Fig. 1: Comparison of DAT with other Vision Transformers and DCN [13]. The red star and the blue star denote the different queries, and masks with solid line boundaries depict the regions to which the queries attend. In a **data-agnostic** way: (a) ViT [2] adopts full global attention for all queries. (b) Swin Transformer [3] uses partitioned window attention.
In a **data-dependent** way: (c) DCN learns different deformed points for each query. (d) DAT learns shared deformed key locations for all queries.
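The excerpt above only states the idea, so the following is a schematic, single-head sketch (our simplification, not the authors' implementation; the PyTorch layer names and sizes are ours): an offset network predicts a shared set of deformed sampling locations, features are bilinearly sampled there, and ordinary attention is computed between every query and the sampled keys and values. For brevity the offset network is collapsed to a global pooled summary, and multi-head structure and positional bias are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttentionSketch(nn.Module):
    """Schematic single-head sketch of data-dependent key/value sampling.

    Not the authors' implementation: the offset network is collapsed to a global
    pooled summary, and multi-head structure / relative position bias are omitted.
    """

    def __init__(self, dim=96, n_points=49):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)
        self.offset_net = nn.Conv2d(dim, 2 * n_points, kernel_size=1)
        self.n_points = n_points

    def forward(self, x):                                    # x: (B, H, W, C)
        B, H, W, C = x.shape
        x_chw = x.permute(0, 3, 1, 2)                        # (B, C, H, W)

        # Predict one shared set of 2-D offsets per image from a pooled summary.
        summary = x_chw.mean(dim=(2, 3), keepdim=True)       # (B, C, 1, 1)
        offsets = self.offset_net(summary).view(B, self.n_points, 2).tanh()

        # Uniform reference grid in normalized [-1, 1] coords (grid_sample expects x, y).
        side = int(self.n_points ** 0.5)
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, side),
                                torch.linspace(-1, 1, side), indexing="ij")
        ref = torch.stack([xs, ys], dim=-1).reshape(1, self.n_points, 2).to(x)

        # Deformed sampling locations and bilinear sampling of the features there.
        points = (ref + offsets).clamp(-1, 1).view(B, self.n_points, 1, 2)
        sampled = F.grid_sample(x_chw, points, mode="bilinear", align_corners=True)
        sampled = sampled.squeeze(-1).permute(0, 2, 1)       # (B, n_points, C)

        # Ordinary attention of every query against the shared sampled keys/values.
        q = self.to_q(x.reshape(B, H * W, C))
        k, v = self.to_kv(sampled).chunk(2, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / C ** 0.5, dim=-1)
        return self.proj(attn @ v).reshape(B, H, W, C)
```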
2301.05074
Identification of light leptons and pions in the electromagnetic calorimeter of Belle II
The paper discusses a new method for electron/pion and muon/pion separation in the Belle II detector at transverse momenta below 0.7 $\mathrm{GeV}/c$, which is essential for efficient measurements of semi-leptonic decays of $B$ mesons with a tau lepton in the final state. The method is based on the analysis of patterns in the electromagnetic calorimeter by using a Convolutional Neural Network (CNN).
Anja Novosel, Abtin Narimani Charan, Luka Šantelj, Torben Ferber, Peter Križan, Boštjan Golob
2023-01-12T15:18:32Z
http://arxiv.org/abs/2301.05074v1
# Identification of light leptons and pions in the electromagnetic calorimeter of Belle II ###### Abstract The paper discusses a new method for electron/pion and muon/pion separation in the Belle II detector at transverse momenta below 0.7 GeV/\(c\), which is essential for efficient measurements of semi-leptonic decays of \(B\) mesons with a tau lepton in the final state. The method is based on the analysis of patterns in the electromagnetic calorimeter by using a Convolutional Neural Network (CNN). keywords: Electromagnetic calorimeter, Particle identification, Convolutional Neural Network + Footnote †: journal: Nucl. Instr. Meth. A ## 1 Introduction Searches for New Physics at the intensity frontier are based on very precise measurements of rare processes within the Standard Model. Of particular interest, because of persistent hints of Lepton Flavour Universality (LFU) violation, are semi-leptonic decays of \(B\) mesons, e.g. decays mediated by the \(b\to c\tau^{+}\nu_{\tau}\) transitions with a tau lepton in the final state and decays involving \(b\to s\mu^{+}\mu^{-}\) and \(b\to se^{+}e^{-}\) transitions. In decays with a tau lepton in the final state, the tau lepton must be reconstructed from its long-lived decay products, for example from the decays \(\tau^{-}\to\mu^{-}\bar{\nu}_{\mu}\nu_{\tau}\) or \(\tau^{-}\to e^{-}\bar{\nu}_{e}\nu_{\tau}\). In the Belle II experiment [1; 2], the momentum spectrum of light leptons from tau decays is rather soft, a sizable fraction being below 0.7 GeV/\(c\). One of the crucial steps in the analysis of these decays is distinguishing low-momentum light leptons (\(e\) or \(\mu\)) from the hadronic background (mostly \(\pi\)). The simplest baseline feature for separating electrons from other charged particles (muons and pions) is \(E/p\), the ratio between the energy measured in the electromagnetic calorimeter and the reconstructed momentum of the topologically matched charged track. This variable provides an excellent separation for particles with \(p>1\) GeV/\(c\), but due to increased energy losses from bremsstrahlung for low-momentum electrons, the separation is less distinct [3]. Muons are identified in the \(K_{L}\) and muon system. However, its efficiency is very poor for low-momentum muons that are outside the acceptance of the dedicated sub-detector. Other sub-detectors designed for particle identification, the time of propagation detector and the aerogel ring-imaging Cherenkov detector, are not able to provide efficient \(\mu/\pi\) separation in this momentum range because at low momenta multiple scattering in the material of the detector as well as the material in front of it blurs the pattern considerably. Our main goal is to improve the identification of low-momentum leptons using the information on energy deposition in the electromagnetic calorimeter in the form of images. As a classifier we are using a Convolutional Neural Network (CNN), a powerful machine learning technique designed for working with two-dimensional images. Using a CNN on the images allows us to access the information on the shape of the energy deposition without depending on cluster reconstruction or track-cluster matching. In what follows, we will describe the electromagnetic calorimeter of Belle II, discuss the analysis of simulated pion, muon and electron patterns in the electromagnetic calorimeter, and present the results.
## 2 Electromagnetic calorimeter of Belle II The Belle II detector is a large-solid-angle magnetic spectrometer designed to reconstruct the products of collisions produced by the SuperKEKB collider. The detector consists of several sub-detectors arranged around the interaction point in cylindrical geometry: the innermost Vertex Detector (VXD) used for reconstructing decay vertices, a combination of the Pixel Detector (PXD) and Silicon Vertex Detector (SVD); the Central Drift Chamber (CDC) is the main tracking system; the Time of Propagation (TOP) detector in the barrel region and the Aerogel Ring-Imaging Cherenkov detector (ARICH) in the forward endcap region are used for hadron identification; the Electromagnetic Calorimeter (ECL) is used to measure the energy of photons and electrons; and the outermost K-Long and Muon (KLM) detector detects muons and neutral \(K^{0}_{L}\) mesons [1]. The sub-detector relevant for this work is the ECL, more specifically its central barrel region, which consists of 6624 CsI(Tl) scintillation crystals, covering the polar angle region \(32.2^{\circ}<\theta<128.7^{\circ}\) with respect to the beam axis. A solenoid surrounding the calorimeter generates a uniform 1.5 T magnetic field filling its inner volume [2]. We are mainly interested in the transverse momentum range \(0.28<p_{T}<0.7\) GeV/\(c\), where the minimal \(p_{T}\) threshold ensures the tracks are within the ECL barrel region acceptance. Currently, two methods for the particle identification in the ECL are available. The first method relies exclusively on the ratio of the energy deposited by a charged particle in the ECL and the reconstructed momentum of the topologically matched charged track, \(E/p\). While for electrons this variable enables powerful discrimination, as electrons completely deposit their energy in the ECL, the \(\mu/\pi\) separation is strongly limited, especially for low-momentum particles with a broader \(E/p\) distribution as can be seen in Fig. 1. The second method uses Boosted Decision Trees (BDT) with several expert-engineered observables characterising the shower shape in the ECL [4]. ## 3 Analysis of the patterns in the electromagnetic calorimeter Our proposed method to improve the identification of low-momentum leptons is to exploit the specific patterns in the spatial distribution of energy deposition in the ECL crystals using a Convolutional Neural Network (CNN)1. The images consist of the 11 x 11 neighbouring crystals around the entry point of the extrapolated track into the ECL, where each pixel corresponds to an individual ECL crystal and the pixel intensity to the energy deposited by the charged particle in the crystal. Examples of the obtained images are shown in Fig. 2. While electrons generate electromagnetic showers depositing the majority of their energy in the ECL, the dominant interaction in CsI(Tl) for muons and pions in the aforementioned transverse-momentum range is ionization. Besides, pions can interact strongly with nuclei, producing less localized images compared to muons [5]. Footnote 1: The CNN is built using TensorFlow software available from tensorflow.org. For each binary classification we generated \(1.5\times 10^{6}\) events using the Belle II Analysis Software Framework [6], where the data set consists of the same number of signal (\(e\) or \(\mu\)) and background (\(\pi\)) events with uniformly distributed transverse momenta, polar angle and azimuthal angle.
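A minimal TensorFlow/Keras sketch of a network of this kind (the conditioning on \(p_{T}\) and \(\theta_{\text{ID}}\) is described in the next paragraph; the layer sizes, embedding dimension, and assumed number of \(\theta_{\text{ID}}\) values below are placeholders of ours, since the exact architecture is not given in the text):

```python
import tensorflow as tf
from tensorflow.keras import layers

N_THETA_ID = 70  # placeholder for the number of distinct crystal-location indices

image_in = layers.Input(shape=(11, 11, 1), name="ecl_image")
pt_in = layers.Input(shape=(1,), name="pt")
theta_in = layers.Input(shape=(1,), name="theta_id", dtype="int32")

x = layers.Conv2D(32, 3, padding="same", activation="relu")(image_in)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.Flatten()(x)

# The crystal index is embedded, then concatenated with p_T before the dense layers.
theta_emb = layers.Flatten()(layers.Embedding(N_THETA_ID, 4)(theta_in))
x = layers.Concatenate()([x, pt_in, theta_emb])
x = layers.Dense(128, activation="relu")(x)
signal_prob = layers.Dense(1, activation="sigmoid", name="lepton_probability")(x)

model = tf.keras.Model(inputs=[image_in, pt_in, theta_in], outputs=signal_prob)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
```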
The two data sets were split into train, validation, and test sets as \(70-10-20\%\) and we use the same CNN architecture for the \(e/\pi\) and \(\mu/\pi\) cases. As an input to the convolutional layers we use 11 x 11 images. Before the fully connected layers we add the information about \(p_{T}\) and \(\theta_{\text{ID}}\), where the latter is an integer corresponding to the location of the ECL crystal and is implemented in the network as an embedding. To perform a binary classification, we have one neuron in the output layer with a sigmoid activation function that outputs the signal probability that the image was produced by a lepton. ## 4 Performance To validate the performance of a binary classifier we use a Receiver Operating Characteristic (ROC) curve by plotting the true positive rate (\(\mu\) or \(e\) efficiency) against the false positive rate (\(\pi\) mis-ID rate). As the reference for the existing ECL information, we use the log-likelihood difference, a powerful discriminator between the competing hypotheses, defined as \(\Delta\text{LL}^{\text{ECL}}=\log\text{L}^{\text{ECL}}_{e,\mu}-\log\text{L}^{\text{ECL}}_{\pi}\) and based only on \(E/p\) [3], and the BDT ECL classifier using the shower-shape information from the ECL, thoroughly described in [4]. The ROC curves obtained by these three methods are shown in Fig. 3 for \(e/\pi\) and in Fig. 4 for \(\mu/\pi\) classification. Figure 1: Distribution of \(E/p\) for simulated single particle candidates: \(e\) (green), \(\mu\) (red) and \(\pi\) (blue) for \(0.28\leq p_{T}<0.7\) GeV/\(c\) in the ECL barrel region. Figure 3: The performance of three different classifiers for \(e/\pi\) based on only ECL information: \(\Delta\text{LL}^{\text{ECL}}\), BDT ECL, and \(\Delta\text{LL}^{\text{CNN}}\). Figure 2: Examples of simulated energy depositions and the average over 80000 images for \(e\) (left), \(\mu\) (middle) and \(\pi\) (right). Looking at the shapes of ROC curves and the Area Under the Curve (AUC) values, it is evident that the CNN outperforms the existing classifiers, \(\Delta\mathrm{LL}^{\mathrm{ECL}}\) and BDT ECL, for both \(e/\pi\) and \(\mu/\pi\). The performance of the CNN drops with increasing momentum as the path in the ECL gets shorter and the specific patterns in the images become less evident. ## 5 Summary and outlook We can conclude there is more information in the ECL than is currently used for particle identification. We saw that the separation between low-momentum light leptons and pions can be improved using a CNN on the ECL images. The newly proposed method looks very promising and is worth developing further. A comparison of the method presented in this article to a novel BDT-based analysis is the subject of a forthcoming publication [7]. ## 6 Acknowledgements We thank Anze Zupanc for his support with ideas and advice in the early stages of the project. This work was supported by the following funding sources: European Research Council, Horizon 2020 ERC-Advanced Grant No. 884719; BMBF, DFG, HGF (Germany); Slovenian Research Agency research grants No. J1-9124, J1-4358 and P1-0135 (Slovenia).
2305.17187
Clip-OGD: An Experimental Design for Adaptive Neyman Allocation in Sequential Experiments
From clinical development of cancer therapies to investigations into partisan bias, adaptive sequential designs have become an increasingly popular method for causal inference, as they offer the possibility of improved precision over their non-adaptive counterparts. However, even in simple settings (e.g. two treatments) the extent to which adaptive designs can improve precision is not sufficiently well understood. In this work, we study the problem of Adaptive Neyman Allocation in a design-based potential outcomes framework, where the experimenter seeks to construct an adaptive design which is nearly as efficient as the optimal (but infeasible) non-adaptive Neyman design, which has access to all potential outcomes. Motivated by connections to online optimization, we propose Neyman Ratio and Neyman Regret as two (equivalent) performance measures of adaptive designs for this problem. We present Clip-OGD, an adaptive design which achieves $\widetilde{O}(\sqrt{T})$ expected Neyman regret and thereby recovers the optimal Neyman variance in large samples. Finally, we construct a conservative variance estimator which facilitates the development of asymptotically valid confidence intervals. To complement our theoretical results, we conduct simulations using data from a microeconomic experiment.
Jessica Dai, Paula Gradu, Christopher Harshaw
2023-05-26T18:22:42Z
http://arxiv.org/abs/2305.17187v2
# Clip-OGD: An Experimental Design for Adaptive Neyman Allocation in Sequential Experiments ###### Abstract From clinical development of cancer therapies to investigations into partisan bias, adaptive sequential designs have become an increasingly popular method for causal inference, as they offer the possibility of improved precision over their non-adaptive counterparts. However, even in simple settings (e.g. two treatments) the extent to which adaptive designs can improve precision is not sufficiently well understood. In this work, we study the problem of Adaptive Neyman Allocation in a design-based potential outcomes framework, where the experimenter seeks to construct an adaptive design which is nearly as efficient as the optimal (but infeasible) non-adaptive Neyman design, which has access to all potential outcomes. Motivated by connections to online optimization, we propose Neyman Ratio and Neyman Regret as two (equivalent) performance measures of adaptive designs for this problem. We present Clip-OGD, an adaptive design which achieves \(\widetilde{\mathcal{O}}(\sqrt{T})\) expected Neyman regret and thereby recovers the optimal Neyman variance in large samples. Finally, we construct a conservative variance estimator which facilitates the development of asymptotically valid confidence intervals. To complement our theoretical results, we conduct simulations using data from a microeconomic experiment. + Footnote †: We thank Molly Offer-Westort, Benjamin Recht, Fredrik Sävje, and Daniel Spielman for insightful discussions which helped shape this work. Part of this work was done while Christopher Harshaw was visiting the Simons Institute for the Theory of Computing. Christopher Harshaw gratefully acknowledges support from Foundations of Data Science (FODSI), NSF grant DMS2023505 ## Table of Contents * 1 Introduction * 1.1 Related Work * 2 Preliminaries * 2.1 Potential Outcomes Framework * 2.2 Asymptotic Framework and Assumptions * 3 Neyman Design: The Infeasible Non-Adaptive Ideal * 4 Adaptive Neyman Allocation: An Online Optimization Approach * 4.1 Neyman Ratio and Neyman Regret: New Performance Measures * 4.2 Clip-OGD: A Variant of Online Stochastic Projected Gradient Descent * 5 Inference in Large Samples * 5.1 Variance Estimation * 5.2 Confidence Intervals * 6 Considering Alternative Designs * 7 Numerical Simulations * 8 Conclusion ## 1 Introduction From medicine and public health to economics and public policy, randomized control trials are used in a variety of disciplines to investigate causal effects. Typically, treatment is assigned in a non-adaptive manner, where assignments are determined before any outcomes are observed. A sequential experimental approach, which adaptively assigns treatment based on previously observed outcomes, offers the possibility of more precise or high powered estimates of relevant causal effects. Adaptive experiments are run to develop clinical therapies for breast cancer (Barker et al., 2009), evaluate incentives to reduce partisan bias (Offer-Westort et al., 2021), and evaluate customer acquisition via online advertising (Schwartz et al., 2017), to name a few. In this paper, we study the problem of Adaptive Neyman Allocation, which we informally define as follows. An optimal non-adaptive experimental design which minimizes the variance of an estimator will depend on the unknown potential outcomes, rendering it infeasible to run.
However, by adaptively choosing treatment assignments in a sequential manner based on observed outcomes, we can hope to guarantee that the variance of the estimator under the adaptive design converges to the optimal non-adaptive variance. The problem of Adaptive Neyman Allocation is to construct such an adaptive design which guarantees that the variance converges to that of the (infeasible) optimal non-adaptive design. An experimental design which sufficiently addresses the Adaptive Neyman Allocation problem offers the advantage of higher statistical power, relative to a broad class of fixed experimental designs. Practically speaking, this means that either smaller confidence intervals are obtained for a given number of experimental units, or that fewer units are required to achieve confidence intervals of a given length. In practice, this means that investigating causal effects can be cheaper--in terms of time, money, and other valuable resources--when adaptive experiments are run. Although several experimental designs have been proposed for this purpose (Hahn et al., 2011; Blackwell et al., 2022), none have provided formal guarantees that the optimal non-adaptive variance can be achieved and the effectiveness of such designs has recently been called into question (Cai and Rafi, 2022). The main contributions of this work are as follows: 1. **Neyman Ratio and Regret**: We propose two (equivalent) performance measures of experimental designs for the problem of Adaptive Neyman Allocation: Neyman Ratio and Neyman Regret. We show that guarantees on the rates of these performance measures directly translate to guarantees on the convergence of variance to the Neyman variance. 2. **Clip-OGD**: We propose the adaptive design Clip-OGD, a variant of online stochastic projected gradient descent for which the Neyman regret is \(\widetilde{\mathcal{O}}(\sqrt{T})\). This guarantees that the variance of the sequential effect estimator approaches the Neyman variance. 3. **Confidence Intervals**: By constructing a conservative variance estimator, we provide confidence intervals which guarantee asymptotic coverage of the average treatment effect. In Section 7, we support these theoretical results with simulations using data from a microeconomic experiment. Our results rely on viewing the Adaptive Neyman Allocation problem through the lens of online convex optimization. However, as discussed in Section 4.2, due to the subtleties arising in the problem, we do not know of an existing online algorithm which directly obtains these results. ### 1.1 Related Work We work within the potential outcomes framework for causal inference (Neyman, 1923; Rubin, 1980; Imbens and Rubin, 2015). The idea of optimal treatment allocation dates back to Neyman (1934), where he demonstrates that sampling from treatments proportional to the within-treatment outcome standard deviation will minimize the variance of standard estimators. Unfortunately, this type of design is not practically feasible when little is known about the statistics of outcomes from each treatment. Robbins (1952) highlights adaptive sampling as one of the more pressing open statistical problems at the time. In Chapter 5, Solomon and Zacks (1970) presents a survey of adaptive designs for survey sampling, but from a Bayesian perspective. More recently, Hahn et al.
(2011) proposed a two stage design in a super-population setting, where data is uniformly collected from both arms in the first stage, statistics of the treatment arm are estimated, and a fixed probability derived from estimated statistics is used in the second stage. They derive the limiting distribution of the effect estimator under the two-stage design, which has a variance that is similar to, but asymptotically bounded away from the optimal Neyman variance. In a design-based setting, Blackwell et al. (2022) propose a similar two-stage approach and, through simulations, provide practical guidance on how to choose the length of the first stage. Although both of these works are motivated by achieving the Neyman variance, neither formally show that this is possible under the two-stage design. Causal inference under adaptively collected data has seen a variety of recent developments which are adjacent to, but distinct from, the problem of Adaptive Neyman Allocation. One line of research has been to construct estimators via re-weighting which ensure consistency and normality when data is collected via bandit algorithms (Hadada et al., 2021; Zhang et al., 2020, 2021). A second line of research has been to provide inferential methods which are valid under data-dependent stopping times (Wald, 1945; Howard et al., 2021; Ham et al., 2022). Finally, Offer-Westort et al. (2021) propose an adaptive experimental design for improved selective inference, when only the effect of the best performing treatment is to be inferred. ## 2 Preliminaries The sequential experiment takes place over \(T\) rounds, where we assume that \(T\) is fixed and known to the experimenter. At each iteration \(t\in[T]\), a new experimental unit (e.g. clinical participant), enters into the experiment, so that there are \(T\) units in total. In an abuse of notation, we identify units with their respective round \(t\in[T]\). The experimenter assigns a (random) treatment \(Z_{t}\in\{0,1\}\) (e.g. drug or placebo) to the experimental unit. The unit has two real-valued potential outcomes \(y_{t}(1),y_{t}(0)\) which are unknown to the experimenter and represent the unit's measured response to the treatment assignments (e.g. measured heart rate). The term "potential" is used here because while only one treatment is assigned and thus only one outcome is observed, both outcomes have the potential to be observed. At the end of the round, the experimenter sees the observed outcome \(Y_{t}=\mathbf{1}[Z_{t}=1]y_{t}(1)+\mathbf{1}[Z_{t}=0]y_{t}(0)\). ### Potential Outcomes Framework In this paper, we adopt a _design-based framework_ where the sequence of potential outcomes \(\{y_{t}(1),y_{t}(0)\}_{t=1}^{T}\) is deterministic and the only source of randomness is treatment assignment itself. In particular, we place no assumption on the homogeneity of the outcomes: they are not necessarily related to each other in any systematic way. Although the potential outcomes are deterministic, we introduce finite population analogues of various statistics. Define the finite population second moments \(S(1)\) and \(S(0)\) and correlation of the treatment and control outcomes \(\rho\) to be \[S(1)^{2}=\frac{1}{T}\sum_{t=1}^{T}y_{t}(1)^{2}\enspace,\quad S(0)^{2}=\frac{1 }{T}\sum_{t=1}^{T}y_{t}(0)^{2}\enspace,\quad\text{and}\quad\rho=\frac{\frac{1}{ T}\sum_{t=1}^{T}y_{t}(1)y_{t}(0)}{S(1)S(0)}\enspace.\] Observe that the correlation between treatment and control outcomes is bounded \(\rho\in[-1,1]\). 
Although we refer to \(\rho\) as the correlation, it is also known as the cosine similarity and is generally not equal to the Pearson correlation coefficient. We remark that although the potential outcomes \(y_{t}(1)\) and \(y_{t}(0)\) are deterministic, the observed outcome \(Y_{t}\) is random, as it depends on random treatment assignment. The natural filtration according to these rounds is denoted as \(\mathcal{F}_{1}\ldots\mathcal{F}_{T}\), so that \(\mathcal{F}_{t}\) captures all randomness before the sampling of \(Z_{t}\), i.e. the treatments assigned and outcomes observed in previous rounds. In this sequential setting, the mechanism for random treatment assignment can incorporate observed outcomes from previous experimental rounds. This treatment mechanism, referred to as the _experimental design_, is selected by and thus known to the experimenter. Formally, the experimental design is a sequence of functions \(\{\Pi_{t}\}_{t=1}^{T}\) with signature \(\Pi_{t}:(\{0,1\}\times\mathbb{R})^{t-1}\to[0,1]\) such that treatment is assigned as \(\Pr(Z_{t}=1\mid\mathcal{F}_{t})=\Pi_{t}(Z_{1},Y_{1},\ldots Z_{t-1},Y_{t-1})\). We denote \(P_{t}=\Pr(Z_{t}=1\mid\mathcal{F}_{t})\) as the (random) probability of treatment assignment at iteration \(t\), given previously observed treatment assignments and outcomes. The causal estimand of interest is the _average treatment effect_, defined as \[\tau=\frac{1}{T}\sum_{t=1}^{T}\big{(}y_{t}(1)-y_{t}(0)\big{)}\enspace.\] The average treatment effect captures the average counterfactual contrast between a unit's outcomes under the two treatment assignments. For example, this could be the average contrast of a clinical participant's heart rate under the drug or placebo. Individual treatment effects are defined as \(\tau_{t}=y_{t}(1)-y_{t}(0)\), but they cannot be estimated in any reasonable sense, as only one outcome is observed. A standard estimator of the average treatment effect is the Horvitz-Thompson estimator, which weights observed outcomes by the probability of their observation (Narain, 1951; Horvitz and Thompson, 1952). For adaptive designs, the standard Horvitz-Thompson estimator is infeasible because the marginal probability of treatment assignment \(\Pr(Z_{t}=1)\) depends on the unknown potential outcomes. For this reason, we investigate the _adaptive Horvitz-Thompson estimator_, which uses the random (observed) treatment probabilities used at each iteration. \[\hat{\tau}\triangleq\frac{1}{T}\sum_{t=1}^{T}Y_{t}\Big{(}\frac{\mathbf{1}[Z_{t}=1]}{P_{t}}+\frac{\mathbf{1}[Z_{t}=0]}{1-P_{t}}\Big{)}\enspace,\] where we recall that \(P_{t}=\Pi_{t}(Z_{1},Y_{1},\ldots Z_{t-1},Y_{t-1})\) is the treatment probability under the experimental design given the observed data. When treatment assignments are non-adaptive and independent, then the adaptive Horvitz-Thompson estimator is equivalent to the standard Horvitz-Thompson estimator. Below, we provide positivity conditions under which the adaptive estimator is unbiased, and derive its variance.
**Proposition 2.1**.: _If \(\min\{P_{t},1-P_{t}\}>0\) almost surely for all \(t\in[T]\) then the adaptive Horvitz-Thompson estimator is unbiased: \(\mathbb{E}[\hat{\tau}]=\tau\)._ **Proposition 2.2**.: _The variance of the adaptive Horvitz-Thompson estimator is_ \[T\cdot\mathrm{Var}(\hat{\tau})=\frac{1}{T}\sum_{t=1}^{T}\Bigl{(}y_{t}(1)^{2} \,\mathbb{E}\Bigl{[}\frac{1}{P_{t}}\Bigr{]}+y_{t}(0)^{2}\,\mathbb{E}\Bigl{[} \frac{1}{1-P_{t}}\Bigr{]}\Bigr{)}-\frac{1}{T}\sum_{t=1}^{T}\tau_{t}^{2}\enspace.\] ### Asymptotic Framework and Assumptions Following the convention of design-based inference, we analyze statistical methods within an asymptotic framework (see e.g., Freedman, 2008; Lin, 2013; Savje et al., 2021). This provides a formal basis for reasoning about the performance of statistical methods as the sample size increases, giving meaning to conventional notions such as consistency and limiting distribution. Formally speaking, the asymptotic sequence of potential outcomes is a triangular array \(\{\{y_{t,T}(1),y_{t,T}(0)\}_{t=1}^{T}\}_{T=1}^{\infty}\), which yields a sequence of estimands \(\{\tau_{T}\}_{T=1}^{\infty}\) and, together with an appropriately specified sequence of experimental design, a sequence of estimators \(\{\hat{\tau}_{T}\}_{T=1}^{\infty}\). Analysis which applies to a fixed \(T\) is said to be finite-sample (e.g. \(\mathbb{E}[\hat{\tau}_{T}]=\tau_{T}\)) whereas analysis which applies to the entire sequence is said to be asymptotic (e.g. \(\tau_{T}-\hat{\tau}_{T}\xrightarrow{p}0\)). Although we use an asymptotic framework, we emphasize that the majority of our results are derived from finite-sample analysis and are merely interpreted through the lens of the asymptotic framework. We drop the subscript \(T\) for notational clarity. The main regularity conditions we place on the sequence of potential outcomes is below. **Assumption 1**.: There exist constants \(c\leqslant C\) such that for all \(T\) in the asymptotic sequence: 1. **Bounded Moments**: \(c\leqslant\big{(}\frac{1}{T}\sum_{t=1}^{T}y_{t}(k)^{2}\big{)}^{1/2}\leqslant \big{(}\frac{1}{T}\sum_{t=1}^{T}y_{t}(k)^{4}\big{)}^{1/4}\leqslant C\ \forall\ k\in\{0,1\}\). 2. **Bounded Correlation**: \(\rho\geqslant-(1-c)\). The upper moment bound in Assumption 1 stipulates that the potential outcomes cannot grow too large with the sample size, while the lower moment bound is a type of non-degeneracy condition that prevents an increasingly large fraction of the outcomes going to zero. These assumptions are analogous to finite fourth moment and positive second moment assumptions in an i.i.d. setting. The bounded correlation assumption stipulates that the treatment and control outcomes are not exactly negatively correlated. In this paper, we do not assume that these constants \(C\) and \(c\) are known to the experimenter; however, if the experimenter can correctly specify such bounds (perhaps knowing a priori the scaling of the outcomes) then some of the constant factors in our analysis can be improved. In the next section, these regularity assumptions will ensure that the Neyman variance converges to zero at the parametric rate. ## 3 Neyman Design: The Infeasible Non-Adaptive Ideal The problem of Adaptive Neyman Allocation is to construct an adaptive experimental design that achieves nearly the same variance as an optimal non-adaptive experimental design, chosen with knowledge of all potential outcomes. 
The optimal non-adaptive design, referred to as the Neyman Design, is infeasible to implement because it depends on all potential outcomes, which are unknown to the experimenter at the design stage. The goal is that an adaptive experimental design--which can select treatment assignment based on observed outcomes--can gather enough information to perform as well as the infeasible Neyman design. In order to define the optimal non-adaptive design, we begin by defining the class of Bernoulli designs. Informally, the class of Bernoulli designs consists of non-adaptive designs where each unit receives treatment \(Z_{t}=1\) with probability \(p\), independently of past treatment assignments and observations. Formally, this class is parameterized by a non-adaptive sampling probability \(p\in[0,1]\) such that for all \(t\in[T]\), the treatment policy \(\Pi_{t}\) is a constant function whose value is \(p\). Using Proposition 2.2, we can derive the variance of the Bernoulli design with parameter \(p\in[0,1]\) to be \[T\cdot V_{p}=S(1)^{2}\Big{(}\frac{1}{p}-1\Big{)}+S(0)^{2}\Big{(}\frac{1}{1-p} -1\Big{)}+2\rho S(1)S(0)\enspace.\] From the above, we can see that in order to minimize the variance of the Horvitz-Thompson estimator under the Bernoulli design, we should set the sampling probability \(p\) so as to balance the squared second moments of treatment and control outcomes. The Neyman Design is the Bernoulli design which minimizes the variance of the Horvitz-Thompson estimator. The corresponding optimal probability \(p^{*}\) and variance \(V_{\text{N}}\) are referred to as the Neyman probability and Neyman variance, respectively. The following proposition derives these quantities in terms of the potential outcomes. **Proposition 3.1**.: _The Neyman variance is \(T\cdot V_{\text{N}}=2(1+\rho)S(1)S(0)\), which is achieved by the Neyman probability \(p^{*}=(1+S(0)/S(1))^{-1}\)._ In order to quantify the reduction in variance achieved by the Neyman design, define the _relative Neyman efficiency with respect to \(p\in[0,1]\)_ to be \(V_{\text{N}}/V_{p}\). Intuitively, this ratio is a scale-free measure which captures the percent reduction in variance of the sequential Horvitz-Thompson estimator under the Neyman design. Formally, the equation for the relative Neyman efficiency is given below: \[\frac{V_{\text{N}}}{V_{p}}=2(1+\rho)\Bigg{[}\frac{S(1)}{S(0)}\cdot\frac{(1-p) }{p}+\frac{S(0)}{S(1)}\cdot\frac{p}{(1-p)}+2\rho\Bigg{]}^{-1}\enspace.\] Consider the setting where outcomes are uncorrelated, and treatment outcomes are larger than control outcomes, e.g. \(\rho=0\), \(S(1)=4\cdot S(0)\). In this case, the Neyman design is able to achieve less than half the variance of the uniform Bernoulli design (with \(p=1/2\)): plugging into the expression above shows that in this setting we have \(V_{\text{N}}/V_{p}\approx 0.47\). The improvement is larger if the experimenter makes erroneous assumptions about the relative magnitudes of the treatment and control outcomes and attempts to set \(p\) accordingly: for example, if the experimenter had set \(p=1/4\), incorrectly believing that \(S(1)\leqslant S(0)\), then the Neyman allocation results in a sixfold improvement in variance. Blackwell et al. (2022) provide a qualitatively similar analysis of Neyman efficiency for stratified designs. 
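To make the worked example concrete, the following sketch (plain Python/NumPy; the function names are ours, not part of any package) evaluates Proposition 3.1 and the relative Neyman efficiency for the setting \(\rho=0\), \(S(1)=4\cdot S(0)\) discussed above.

```python
import numpy as np

def neyman_quantities(S1, S0, rho):
    """Neyman probability and normalized Neyman variance T*V_N (Proposition 3.1)."""
    p_star = 1.0 / (1.0 + S0 / S1)            # p* = (1 + S(0)/S(1))^{-1}
    TV_N = 2.0 * (1.0 + rho) * S1 * S0        # T * V_N
    return p_star, TV_N

def bernoulli_variance(S1, S0, rho, p):
    """Normalized variance T*V_p of the Horvitz-Thompson estimator under Bernoulli(p)."""
    return S1**2 * (1.0 / p - 1.0) + S0**2 * (1.0 / (1.0 - p) - 1.0) + 2.0 * rho * S1 * S0

# Worked example from the text: rho = 0 and S(1) = 4 * S(0).
S0, S1, rho = 1.0, 4.0, 0.0
p_star, TV_N = neyman_quantities(S1, S0, rho)
print(p_star)                                              # 0.8
print(TV_N / bernoulli_variance(S1, S0, rho, p=0.5))       # ~0.47
print(TV_N / bernoulli_variance(S1, S0, rho, p=0.25))      # ~0.17, roughly a sixfold reduction
```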
While the relative Neyman efficiency is helpful in determining the variance reduction afforded by the (infeasible) optimal Bernoulli design, it does not address the main question: which adaptive experimental designs can guarantee similar variance reduction? In the next section, we propose a performance metric which better addresses this question. ## 4 Adaptive Neyman Allocation: An Online Optimization Approach ### Neyman Ratio and Neyman Regret: New Performance Measures Let \(V\) be the variance of the adaptive Horvitz-Thompson estimator under the sequential experimental design. We introduce our first performance measure of a sequential experimental design for Adaptive Neyman Allocation. **Definition 1**.: The _Neyman ratio_ of a sequential experimental design is \(\kappa_{T}=(V-V_{\mathrm{N}})/V_{\mathrm{N}}\). The subscript \(T\) in \(\kappa_{T}\) is included to reflect the dependence on the number of rounds \(T\). The Neyman ratio is motivated by the following relationship between the adaptive variance and the optimal Neyman variance: \[V=\left(\frac{V}{V_{\mathrm{N}}}\right)\cdot V_{\mathrm{N}}=\left(1+\kappa_{T} \right)\cdot V_{\mathrm{N}}\enspace. \tag{1}\] Equation (1) shows that the adaptive design can recover the Neyman variance if and only if the Neyman ratio \(\kappa_{T}\) can be made arbitrarily small. For this reason, we propose the Neyman ratio as a performance measure of a sequential experimental design. A natural question then becomes: how small can the Neyman ratio \(\kappa_{T}\) be made as the number of rounds \(T\) increases? To answer this question, we view the problem of minimizing the Neyman ratio through the lens of online optimization. To this end, we must re-express the variance of the sequential experimental design. For each round \(t\in[T]\), define the cost function \(f_{t}:[0,1]\to\mathbb{R}\) as \(f_{t}(p)=y_{t}(1)^{2}/p+y_{t}(0)^{2}/(1-p)\). Observe that by Proposition 2.2, the variance may be written as \(T\cdot\mathrm{Var}(\hat{\tau})=\mathbb{E}[\frac{1}{T}\sum_{t=1}^{T}f_{t}(P_{t})]-\frac{1}{T}\sum_{t=1}^{T}\tau_{t}^{2}\), where the second term does not depend on the experimental design. This reformulation of variance does not allow us to minimize variance directly, for the usual reason that the outcomes, and thus the cost functions \(f_{t}\), are not fully observed. On the other hand, our goal is only to show that the variance of the adaptive design is comparable to the Neyman variance. **Definition 2**.: The _Neyman regret_ of a sequential experimental design is \[\mathcal{R}_{T}=\sum_{t=1}^{T}f_{t}(P_{t})-\min_{p\in[0,1]}\sum_{t=1}^{T}f_{t }(p)\enspace.\] Recall that \(P_{t}\) is the random treatment probability at round \(t\). The Neyman regret compares the accumulated costs \(f_{t}(P_{t})\) incurred by the adaptive design to the accumulated costs incurred by the optimal Bernoulli design which has access to all potential outcomes. The Neyman regret is random because the sequence \(P_{1},\ldots P_{T}\) is random. The following theorem connects the expected Neyman regret to the Neyman ratio. **Theorem 4.1**.: _Under Assumption 1, the Neyman ratio is within a constant factor of the \(1/T\)-scaled expected Neyman regret: \(\kappa_{T}=\Theta(\frac{1}{T}\,\mathbb{E}[\mathcal{R}_{T}])\)._ Theorem 4.1 demonstrates that the Neyman ratio can be made small by minimizing the expected Neyman regret in an online fashion. In particular, any sublinear bound on the expected Neyman regret ensures that the Neyman ratio goes to zero so that, in large samples, the adaptive design achieves the variance reduction of the optimal Neyman design. 
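As an illustration of Definition 2, the sketch below (plain Python; the function name is ours) computes the realized Neyman regret for a given sequence of treatment probabilities, using the fact that the best fixed probability in hindsight is exactly the Neyman probability \(p^{*}\). It is purely diagnostic: it requires both potential outcomes for every unit, so it can only be evaluated in simulations, never from experimental data.

```python
import numpy as np

def neyman_regret(y1, y0, probs):
    """Realized Neyman regret (Definition 2) for the treatment probabilities actually used.
    Requires both potential outcomes for every unit (simulation-only diagnostic)."""
    y1, y0, probs = map(np.asarray, (y1, y0, probs))
    adaptive_cost = np.sum(y1**2 / probs + y0**2 / (1.0 - probs))   # sum_t f_t(P_t)
    # The best fixed probability in hindsight is the Neyman probability
    # p* = (1 + S(0)/S(1))^{-1}, which minimizes sum_t f_t(p) over p in [0, 1].
    S1 = np.sqrt(np.mean(y1**2))
    S0 = np.sqrt(np.mean(y0**2))
    p_star = 1.0 / (1.0 + S0 / S1)
    best_fixed_cost = np.sum(y1**2 / p_star + y0**2 / (1.0 - p_star))
    return adaptive_cost - best_fixed_cost
```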
Any adaptive design which aims to achieve Neyman variance must, to some extent, minimize expected Neyman regret. Fortunately, online optimization is a well-studied area with a rich source of techniques from which we may draw inspiration. However, to the best of our knowledge, existing regret minimization algorithms are not well-suited to minimizing the Neyman regret. For example, the multi-arm bandit literature typically defines regret in terms of a finite number of actions that can be taken (Lattimore and Szepesvari, 2020), while Adaptive Neyman Allocation consists of a continuum of actions as \(P_{t}\in[0,1]\). This means that algorithms like UCB (Auer et al., 2002) and EXP3 (Auer et al., 2002) are not appropriate for Adaptive Neyman Allocation. Our cost objectives \(f_{t}\) and action space \([0,1]\) are both convex, so the problem of Adaptive Neyman Allocation is an instance of Online Convex Optimization (OCO) (Hazan, 2016). Even so, the problem of minimizing Neyman regret is not immediately amenable to existing algorithms, which typically require assumptions on the cost functions such as bounded gradients or known Lipschitz parameters. In this setting, the cost functions have gradients which blow up at the boundary and Lipschitz parameters cannot be guaranteed as they rely on the unknown heterogeneous potential outcomes. For these reasons, we must design a new algorithm specifically tailored to Adaptive Neyman Allocation. ### Clip-OGD: A Variant of Online Stochastic Projected Gradient Descent We present Clip-OGD, which aims to minimize the Neyman regret and thus recover the Neyman variance in large samples. The algorithm is based on the online stochastic projected gradient descent principle, but with a twist: the projection set continuously grows over the rounds. At each round \(t\), a new treatment probability \(P_{t}\) is chosen by updating the previous sampling probability \(P_{t-1}\) in the negative (estimated) gradient direction of the previous cost, and then projecting to an interval \([\delta_{t},1-\delta_{t}]\). Initially, this projection interval contains only the point \(1/2\) and it grows as the rounds increase, allowing for larger amounts of exploitation in later rounds. The gradient estimator \(G_{t}\) is obtained via a Horvitz-Thompson principle. Clip-OGD is formally presented below as Algorithm 1, where the projection operator is defined as \(\mathcal{P}_{c}(x)=\max\{c,\min\{x,1-c\}\}\).
```
Input: Step size \(\eta\) and decay parameter \(\alpha\)
Initialize \(P_{0}\gets 1/2\) and \(G_{0}\gets 0\)
for \(t=1\dots T\) do
    Set projection parameter \(\delta_{t}=(1/2)\cdot t^{-1/\alpha}\)
    Compute new treatment probability \(P_{t}\leftarrow\mathcal{P}_{\delta_{t}}(P_{t-1}-\eta\cdot G_{t-1})\)
    Sample treatment assignment \(Z_{t}\) as \(1\) with probability \(P_{t}\) and \(0\) with probability \(1-P_{t}\)
    Observe outcome \(Y_{t}=\mathbf{1}[Z_{t}=1]y_{t}(1)+\mathbf{1}[Z_{t}=0]y_{t}(0)\)
    Construct gradient estimator \(G_{t}=Y_{t}^{2}\Big{(}-\frac{\mathbf{1}[Z_{t}=1]}{P_{t}^{3}}+\frac{\mathbf{1}[Z_{t}=0]}{(1-P_{t})^{3}}\Big{)}\)
end for
```
**Algorithm 1** Clip-OGD

Unlike the two-stage designs of Hahn et al. (2011) and Blackwell et al. (2022), Clip-OGD does not feature explicit explore-exploit stages, but rather performs both of these simultaneously. The trade-off is implicitly controlled through parameters \(\eta\) and \(\alpha\): smaller values of \(\eta\) limit the amount by which the sampling probabilities can change between rounds and, likewise, larger values of \(\alpha\) prevent extreme probabilities in earlier stages. 
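The following is a minimal transcription of Algorithm 1 in plain Python/NumPy, together with the resulting adaptive Horvitz-Thompson estimate; the interface that exposes both potential outcomes is for simulation only, and the function name is ours.

```python
import numpy as np

def clip_ogd(y1, y0, seed=0):
    """Sketch of Algorithm 1 (Clip-OGD) run against fixed potential-outcome arrays
    y1, y0 of length T; returns the adaptive Horvitz-Thompson estimate of the ATE."""
    rng = np.random.default_rng(seed)
    T = len(y1)
    eta = np.sqrt(1.0 / T)                 # step size from Theorem 4.2
    alpha = np.sqrt(5.0 * np.log(T))       # decay parameter from Theorem 4.2
    P_prev, G_prev = 0.5, 0.0
    ht_terms = []
    for t in range(1, T + 1):
        delta = 0.5 * t ** (-1.0 / alpha)                          # projection parameter delta_t
        P = min(max(P_prev - eta * G_prev, delta), 1.0 - delta)    # projection P_{delta_t}
        Z = rng.random() < P                                       # Z_t ~ Bernoulli(P_t)
        Y = y1[t - 1] if Z else y0[t - 1]                          # observed outcome Y_t
        # Horvitz-Thompson estimate of the gradient of f_t at P_t
        G = Y**2 * (-1.0 / P**3 if Z else 1.0 / (1.0 - P) ** 3)
        # adaptive Horvitz-Thompson term for the average treatment effect
        ht_terms.append(Y / P if Z else -Y / (1.0 - P))
        P_prev, G_prev = P, G
    return float(np.mean(ht_terms))
```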
Because the gradients of the cost functions are inversely proportional to the treatment probabilities, limiting the extremeness of the treatment probabilities in this way ensures that the gradient estimates do not increase at a fast rate. By appropriately setting input parameters, Clip-OGD achieves \(\widetilde{\mathcal{O}}(\sqrt{T})\) expected Neyman regret, where the \(\widetilde{\mathcal{O}}(\cdot)\) notation hides sub-polynomial factors. **Theorem 4.2**.: _Under Assumption 1, the parameter values \(\eta=\sqrt{1/T}\) and \(\alpha=\sqrt{5\log(T)}\) ensure the expected Neyman regret of Clip-OGD is asymptotically bounded: \(\mathbb{E}\big{[}\mathcal{R}_{T}\big{]}\leq\widetilde{\mathcal{O}}\big{(} \sqrt{T}\big{)}\)._ Theorem 4.2 answers in the affirmative: it is possible to construct an adaptive experimental design whose variance recovers the Neyman variance in large samples. Note that the amount of exploration (as given by the parameters \(\eta\) and \(\alpha\)) should be increasing with \(T\) in order to recover these regret bounds. In Appendix C, we show that Clip-OGD is somewhat robust to different values of the decay parameter, i.e. for any value \(\alpha>5\), the expected regret will be sublinear. We also show that if the experimenter presumes to have correctly specified bounds \(C\) and \(c\) appearing in Assumption 1, then the step size can be modified to improve the constant factors in the Neyman regret bound, which may lead to improved performance in moderate sample sizes. We conjecture that the minimax rate for expected Neyman regret is \(\mathcal{O}(\sqrt{T})\), but proving this is beyond the scope of the current paper--we only remark that we do not know it to immediately follow from any existing regret lower bounds for OCO. ## 5 Inference in Large Samples The proposed Clip-OGD was constructed to ensure that the variance of the adaptive Horvitz-Thompson estimator quickly approaches the Neyman variance. In this section, we provide confidence intervals for the average treatment effect which also enjoy reduced width compared to non-adaptive counterparts. A necessary condition for variance estimation is that the variance itself does not go to zero too quickly. In design-based inference, it is common to directly posit a so-called "non-superefficient" assumption that \(\text{Var}(\hat{\tau})=\Omega(1/T)\) (Aronow and Samii, 2017; Leung, 2022; Harshaw et al., 2022). The non-superefficiency assumption may be seen as an additional regularity assumption on the outcomes, e.g. preventing \(y_{t}(1)=y_{t}(0)=0\) for all \(t\in[T]\). In this work, a similar lower bound on the rate of the adaptive variance is obtained through a different, perhaps more transparent, assumption on the expected Neyman regret. **Assumption 2**.: The outcome sequence is not overly-fit to Clip-OGD: \(-\mathbb{E}[\mathcal{R}_{T}]=o(T)\). Effectively, Assumption 2 rules out settings where the outcomes were chosen with knowledge of Clip-OGD with the explicit purpose of minimizing Neyman regret. As shown in the appendix, Assumptions 1 and 2 imply that the adaptive variance achieves the parametric rate: \(\text{Var}(\hat{\tau})=\Theta(1/T)\). ### Variance Estimation In this section, we provide a variance estimator and show its stability in large samples. Rather than estimating the adaptive variance (which has no simple closed form), our approach is to estimate the Neyman variance directly. For an adaptive design achieving sublinear expected Neyman regret, these two quantities are asymptotically equivalent. 
In this way, our variance estimator will be appropriate not only for Clip-OGD, but for any adaptive design achieving sublinear expected Neyman regret. Recall that the Neyman variance is given by \(T\cdot V_{\text{N}}=2(1+\rho)S(1)S(0)\), where \(\rho\) is the outcome correlation, \(S(1)\) is the second moment of treatment outcomes and \(S(0)\) is the second moment of control outcomes. Unfortunately, the outcome correlation term is generally not estimable without strong assumptions in a design-based framework. Indeed, the difficulty is that terms like \(y_{t}(1)y_{t}(0)\) are unobservable due to the fundamental problem of causal inference (Imbens and Rubin, 2015). A common solution to the problem is to opt for a conservative estimate of the variance, which will ensure validity of resulting confidence intervals. We propose estimating the following upper bound on the variance: \(T\cdot\text{VB}=4S(1)S(0)\). This upper bound on the Neyman variance is tight (i.e. \(\text{VB}=V_{\text{N}}\)) when the outcome correlation satisfies \(\rho=1\). For example, this occurs when all individual treatment effects are zero, i.e. \(y_{t}(1)=y_{t}(0)\) for all \(t\in[T]\). Conversely, the upper bound will become looser for smaller values of the outcome correlation. In this sense, our bound resembles both the Neyman bound and the Aronow-Samii bound (Neyman, 1923; Aronow and Samii, 2013). It may be possible to use the recent insights of Harshaw et al. (2021) in order to construct variance bounds which are tight in other scenarios, but that is beyond the scope of the current paper. Our variance estimator is defined as \[T\cdot\widehat{\text{VB}}\triangleq 4\sqrt{\left(\frac{1}{T}\sum_{t=1}^{T}Y_{t}^{ 2}\frac{\mathbf{1}[Z_{t}=1]}{P_{t}}\right)\cdot\left(\frac{1}{T}\sum_{t=1}^{T }Y_{t}^{2}\frac{\mathbf{1}[Z_{t}=0]}{1-P_{t}}\right)}\,\] which is essentially a plug-in Horvitz-Thompson estimator for the second moments. Theorem 5.1 shows that the error of the normalized variance estimator converges at a parametric rate. **Theorem 5.1**.: _Under Assumptions 1 and 2, and the parameters stated in Theorem 4.2, the error of the normalized variance estimator under Clip-OGD is \(T\cdot\widehat{\text{VB}}-T\cdot\text{VB}=\widetilde{\mathcal{O}}_{p}(T^{-1 /2})\)._ ### Confidence Intervals The variance estimator may be used to construct confidence intervals for the average treatment effect. This offers experimenters standard uncertainty quantification techniques when running Clip-OGD. The following corollary shows that the resulting Chebyshev-type intervals are asymptotically valid. **Corollary 5.1**.: _Under Assumptions 1 and 2, and parameters stated in Theorem 4.2, Chebyshev-type intervals are asymptotically valid: for all \(\alpha\in(0,1]\), \(\liminf_{T\to\infty}\Pr(\tau\in\hat{\tau}\pm\alpha^{-1/2}\sqrt{\widehat{\text{VB }}})\geq 1-\alpha\)._ While these confidence intervals are asymptotically valid under our regularity assumptions, they may be overly conservative in general. In particular, they will over-cover when the Chebyshev tail bound is loose. We conjecture that the adaptive Horvitz-Thompson estimator under Clip-OGD satisfies a Central Limit Theorem, which would imply asymptotic validity of the narrower Wald-type intervals where the \(\alpha^{-1/2}\) scaling is replaced with the corresponding normal quantile, \(\Phi^{-1}(1-\alpha/2)\). As discussed in Section 7, the adaptive estimator appears approximately normal in simulations. 
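To make the interval construction concrete, the following sketch (plain Python; the function name is ours) computes the adaptive Horvitz-Thompson estimate, the conservative variance bound \(T\cdot\widehat{\text{VB}}\), and the Chebyshev-type interval of Corollary 5.1, assuming the realized assignments \(Z_{t}\) (coded 0/1), probabilities \(P_{t}\), and observed outcomes \(Y_{t}\) have been logged during the experiment.

```python
import numpy as np

def chebyshev_interval(Y, Z, P, alpha=0.05):
    """Adaptive HT estimate and Chebyshev-type interval (Corollary 5.1) from logged data.
    Y: observed outcomes, Z: 0/1 treatment assignments, P: treatment probabilities used."""
    Y, Z, P = map(np.asarray, (Y, Z, P))
    T = len(Y)
    tau_hat = np.mean(Y * (Z / P - (1 - Z) / (1 - P)))        # adaptive HT estimate
    m1 = np.mean(Y**2 * Z / P)                                 # plug-in estimate of S(1)^2
    m0 = np.mean(Y**2 * (1 - Z) / (1 - P))                     # plug-in estimate of S(0)^2
    T_VB_hat = 4.0 * np.sqrt(m1 * m0)                          # T * VB_hat
    half_width = np.sqrt(T_VB_hat / (T * alpha))               # alpha^{-1/2} * sqrt(VB_hat)
    return tau_hat - half_width, tau_hat + half_width
```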
Until this is formally shown, we recommend experimenters exhibit caution when using Wald-type confidence intervals for the adaptive Horvitz-Thompson estimator under Clip-OGD. ## 6 Considering Alternative Designs **Explore-Then-Commit.** Two-stage adaptive designs have been proposed for the purpose of variance reduction (Hahn et al., 2011; Blackwell et al., 2022). Due to their similarities to algorithms in the bandits literature, we call these types of designs Explore-Then-Commit (ETC) (Lattimore and Szepesvari, 2020). At a high level, an Explore-then-Commit design runs the Bernoulli design with \(p=1/2\) for \(T_{0}\leq T\) iterations, uses the collected data to estimate \(p^{*}\) by \(\widehat{p}^{*}\), and then runs the Bernoulli design with \(p=\widehat{p}^{*}\) for the remaining \(T_{1}=T-T_{0}\) iterations. These ETC designs are conceptually simpler than Clip-OGD, and may be reasonable to apply in more restricted settings where changing the treatment probabilities is difficult or costly. However, we provide the following negative result which shows that they can suffer linear Neyman regret. **Proposition 6.1** (Informal).: _For all explore phase lengths \(T_{0}\) satisfying \(T_{0}=\Omega(T^{\epsilon})\) for some \(\epsilon>0\), there exists a class of potential outcomes sequences satisfying Assumption 1 such that the Neyman regret under Explore-then-Commit is linear: \(\mathcal{R}_{T}=\Omega_{p}(T)\)._ A formal statement of Proposition 6.1 and its proof appear in Appendix E.1. ETC designs suffer larger variance when the estimated \(\widehat{p}^{*}\) is far from the true optimal probability \(p^{*}\). In a design-based setting, this happens when the units in the explore phase are not representative of the entire sequence. Formulating conditions under which Explore-then-Commit designs achieve low Neyman regret is beyond the scope of this paper, but the proof of Proposition 6.1 shows that regularity conditions on the order of the units will be required. **Multi Arm Bandit Algorithms.** Multi-arm bandit (MAB) algorithms are often used in adaptive decision-making settings, from online advertising to product development. The goal of MAB algorithms is to minimize the outcome regret, which measures the gap between the overall value obtained from the chosen actions and the value of the best fixed action. The outcome regret is conventionally defined as \(\mathcal{R}_{T}^{\text{outcome}}=\max_{k\in\{0,1\}}\sum_{t=1}^{T}y_{t}(k)- \sum_{t=1}^{T}Y_{t}\). In certain contexts, minimizing outcome regret may be a more desirable goal than estimating a treatment effect to high precision. However, the following proposition illustrates that these two objectives are generally incompatible. **Proposition 6.2**.: _Let \(\mathcal{A}\) be an adaptive treatment algorithm achieving sublinear outcome regret, i.e. there exists \(q\in(0,1)\) such that \(\mathbb{E}[\mathcal{R}_{T}^{\text{outcome}}]\leq O(T^{q})\) for all outcome sequences satisfying Assumption 1. Then, there exists a class of outcome sequences satisfying Assumption 1 on which \(\mathcal{A}\) suffers super-linear Neyman regret, i.e. \(\mathbb{E}[\mathcal{R}_{T}]\geq\Omega(T^{2-q})\)._ Proposition 6.2 demonstrates that the outcome regret and the Neyman regret cannot generally be simultaneously minimized. In particular, sublinear outcome regret implies that the variance of the estimator must converge slower than the \(\Theta(1/T)\) parametric rate. 
This result contributes to a growing body of work which highlights trade-offs between various possible objectives in sequential decision making (Burtini et al., 2015). It is beyond the scope of the current paper to determine how such trade-offs ought to be resolved. ## 7 Numerical Simulations We evaluate the performance of Clip-OGD and Explore-then-Commit (ETC) for the purpose of Adaptive Neyman Allocation on the field experiment of Groh and McKenzie (2016), which investigates the effect of macro-insurance on micro-enterprises in post-revolution Egypt. The experimental units are 2,961 clients of Egypt's largest microfinance organization and the treatment was a novel insurance product. Several outcomes were recorded including whether the clients took on loans, introduced a new product or service, and the amount invested in machinery or equipment following treatment. To allocate treatment, Groh and McKenzie (2016) use a non-adaptive matched pair experimental design. Our goal here is not to provide a new analysis of this study, but rather to construct a plausible experimental setting under which to evaluate adaptive experimental designs. In our simulations, we focus on the numerical outcome "invested in machinery or equipment". The experimental data contains only observed outcomes, so we must impute the missing potential outcomes in order to simulate the experiment. We use a constant treatment effects model \(y_{t}(1)-y_{t}(0)=\tau+\gamma_{t}\), where \(\tau=90,000\) and \(\gamma_{1}\ldots\gamma_{T}\sim\mathcal{N}(0,\sigma^{2})\) are independent with \(\sigma=5,000\). This randomness is invoked only to impute potential outcomes, i.e. not re-sampled during each run of the experiment. In order to increase the sample size, we create a larger population by repeating this process 5 times, which yields a total of \(14,445\) units after those with missing entries are removed. Units are shuffled to appear in an arbitrary order and outcomes are normalized to be in the range \([0,1]\). Figure 1 presents two plots illustrating how the variance of the adaptive HT estimator varies with different designs. The \(x\) axis contains the number of rounds \(T\) and the \(y\) axis contains the normalized variance \(T\cdot\text{Var}(\hat{\tau})\) under the designs. For each value of \(T\), we take the population to be the first \(T\) units in the sequence. Clip-OGD is run with the parameters recommended in Theorem 4.2 and ETC is run with \(T_{0}=T^{1/3}\) so that the exploration phase grows with \(T\). The variance under Clip-OGD and ETC is estimated empirically from 50,000 runs of the experiment, while the variance under the Bernoulli and Neyman designs is computed exactly. In Figure 1(a), we observe that Clip-OGD requires about \(T=4,000\) samples to achieve variance equal to that of the Bernoulli design, but eventually converges to the Neyman variance. As discussed in Section 4.2, it may be possible to improve the convergence rate by incorporating knowledge of the outcome moments in the design parameters. On the other hand, ETC remains comparable with Bernoulli even for small values of \(T\), but remains far away from the Neyman design for large samples. In Figure 1(b), a similar simulation is run, except that the potential outcomes of the first 100 units are flipped, so that the first units have negative individual treatment effects. While this produces little effect on the performance of Clip-OGD, it substantially worsens the performance of ETC, which relies on the early outcomes to estimate an optimal treatment probability. 
In particular, ETC performs worse than Bernoulli under this minor modification--even in large samples--corroborating Proposition 6.1. In the appendix, we evaluate the proposed confidence intervals, showing that Clip-OGD enjoys intervals of reduced width. We show that normal-based intervals cover at the nominal level and provide further evidence that the estimator is asymptotically normal under Clip-OGD. ## 8 Conclusion In this paper, we have proposed the Neyman ratio and Neyman regret as performance measures of experimental designs for the Adaptive Neyman Allocation problem. To this end, we proposed Clip-OGD which achieves \(\widetilde{\mathcal{O}}(\sqrt{T})\) expected Neyman regret under mild regularity conditions on the outcomes. This formally establishes--for the first time--the existence of adaptive experimental designs under which the variance of the effect estimator quickly approaches the Neyman variance. Finally, we have provided a variance estimator which equips experimenters with uncertainty quantification methods when using Clip-OGD. The main drawback of our analysis is that it is most relevant for moderate and large sample sizes; in particular, our work does not properly address whether adaptive designs are always beneficial in small samples. There are several research directions which can improve the relevance of this methodology to practice. First, establishing conditions under which a central limit theorem holds for Clip-OGD will yield smaller and thus more desirable Wald-type confidence intervals. Second, investigations into batched treatment allocations and delayed observations of outcomes would allow practitioners more flexibility in their designs. Finally, investigating variants of Adaptive Neyman Allocation in the presence of interference (Aronow and Samii, 2017; Harshaw et al., 2022) would allow for more realistic inference in complex settings, e.g. social network experiments and marketplace experiments.
Figure 1: Normalized Variance of Adaptive Estimator under Experimental Designs
2308.06076
Versatile Face Animator: Driving Arbitrary 3D Facial Avatar in RGBD Space
Creating realistic 3D facial animation is crucial for various applications in the movie production and gaming industry, especially with the burgeoning demand in the metaverse. However, prevalent methods such as blendshape-based approaches and facial rigging techniques are time-consuming, labor-intensive, and lack standardized configurations, making facial animation production challenging and costly. In this paper, we propose a novel self-supervised framework, Versatile Face Animator, which combines facial motion capture with motion retargeting in an end-to-end manner, eliminating the need for blendshapes or rigs. Our method has the following two main characteristics: 1) we propose an RGBD animation module to learn facial motion from raw RGBD videos by hierarchical motion dictionaries and animate RGBD images rendered from 3D facial mesh coarse-to-fine, enabling facial animation on arbitrary 3D characters regardless of their topology, textures, blendshapes, and rigs; and 2) we introduce a mesh retarget module to utilize RGBD animation to create 3D facial animation by manipulating facial mesh with controller transformations, which are estimated from dense optical flow fields and blended together with geodesic-distance-based weights. Comprehensive experiments demonstrate the effectiveness of our proposed framework in generating impressive 3D facial animation results, highlighting its potential as a promising solution for the cost-effective and efficient production of facial animation in the metaverse.
Haoyu Wang, Haozhe Wu, Junliang Xing, Jia Jia
2023-08-11T11:29:01Z
http://arxiv.org/abs/2308.06076v1
# Versatile Face Animator: Driving Arbitrary 3D Facial Avatar in RGBD Space ###### Abstract. Creating realistic 3D facial animation is crucial for various applications in the movie production and gaming industry, especially with the burgeoning demand in the metaverse. However, prevalent methods such as blendshape-based approaches and facial rigging techniques are time-consuming, labor-intensive, and lack standardized configurations, making facial animation production challenging and costly. In this paper, we propose a novel self-supervised framework, Versatile Face Animator, which combines facial motion capture with motion retargeting in an end-to-end manner, eliminating the need for blendshapes or rigs. Our method has the following two main characteristics: 1) we propose an RGBD animation module to learn facial motion from raw RGBD videos by hierarchical motion dictionaries and animate RGBD images rendered from 3D facial mesh coarse-to-fine, enabling facial animation on arbitrary 3D characters regardless of their topology, textures, blendshapes, and rigs; and 2) we introduce a mesh retarget module to utilize RGBD animation to create 3D facial animation by manipulating facial mesh with controller transformations, which are estimated from dense optical flow fields and blended together with geodesic-distance-based weights. Comprehensive experiments demonstrate the effectiveness of our proposed framework in generating impressive 3D facial animation results, highlighting its potential as a promising solution for the cost-effective and efficient production of facial animation in the metaverse. 3D facial animation; Motion capture; Motion retargeting.
For example, in _The Curious Case of Benjamin Button_, filmmakers captured 170 blendshapes of Brad Pitt (Pitt, 2018). Additionally, the absence of a standard configuration for creating blendshapes complicates cross-mapping between different expression spaces and hinders motion transfer across distinct avatars. Another prevalent approach for generating 3D facial animation is facial rigging, which involves manipulating motion controls to create the desired animation (Kumar et al., 2019). However, facial rigging is typically an iterative and laborious process since no consistent rig can be used for all possible motions. As a result, the rigging process often becomes a bottleneck in 3D animation production. Furthermore, varying standards across different software packages make transferring facial motion across characters with distinct rigs extremely challenging. Both blendshape-based methods and facial rigging, as discussed above, are facing similar challenges: (i) they are time-consuming and labor-intensive, which limits their accessibility to common users, and (ii) they lack standardization, making it challenging to transfer facial motion across different characters with varying rigs or blendshape configurations. These challenges hinder the development of the metaverse, where users expect to act on arbitrary characters in a short set-up time. 
Motivated by these issues, we aim to explore a new solution that directly drives the facial mesh with raw RGBD videos, eliminating reliance on blendshapes or rigs. To this end, we propose a novel framework, Versatile Face Animator (VFA), that combines facial motion capture with motion retargeting to drive the facial mesh with captured RGBD videos end-to-end. We aim to model the facial motion in color and depth fields and generate RGBD animation to drive the facial mesh. To achieve this goal, our framework consists of the RGBD animation module and the mesh retarget module. First, the RGBD animation module generates the animated RGBD frame with hierarchical motion dictionaries. It then estimates the correspondence between the source RGBD image and the animated frame with a distilled flow generator. More specifically, the RGBD animation module encodes arbitrary facial motion into a combination of basic transformations in the motion dictionary and generates the animated frame from coarse to fine. The flow generator is then trained to estimate a dense optical flow field for building correspondence between source images and animated frames. The flow generator is distilled from the RGBD generator under animated RGBD frames' supervision, eliminating the need for extra labels. The mesh retarget module then deforms the facial mesh with the dense optical flow. It first detects the controlling points of the mesh automatically and then calculates the geodesic-distance-based controlling weights of each vertex. Afterward, the mesh retarget module estimates controlling point transformations according to the dense optical flow. The transformations are then blended with calculated weights to deform the mesh. To summarize, this work makes three main contributions: * We propose VFA, a novel self-supervised framework that combines facial motion capture with facial motion retargeting in an end-to-end manner, providing a cost-effective solution for 3D facial animation production. * We introduce a new method to learn facial motion in both the color field and the depth field with hierarchical motion dictionaries and generate RGBD animation coarse-to-fine. * We present a new pipeline for transferring RGBD animation to create 3D animation by deforming the mesh with controller transformations, which are estimated from a dense optical flow field and blended with geodesic-distance-based controlling weights. Our approach presents two main advantages: 1) It employs self-supervised training using raw facial RGBD data, eliminating the need for annotation or additional configuration; and 2) it can animate arbitrary 3D characters, regardless of their topology, blendshapes, or rigs. A comprehensive set of experiments, encompassing both qualitative and quantitative analyses, showcases the outstanding performance of our method in generating 3D facial animations at a relatively low cost. This positions our approach as a promising solution for 3D facial animation production. ## 2. Related Work **Blendshapes and Facial Rigging.** Blendshapes, an approximate semantic parameterization of facial expression, have become the standard approach to generating realistic facial animation in the industry (Kumar et al., 2019). With little computation, a wide range of expressions can be produced by linearly combining blendshape targets. However, hundreds of blendshape targets are required to build an expression space with enough expressiveness. 
To reduce this unbearable cost, researchers have proposed methods to reduce the demands of training expressions (Beng et al., 2017) or to fine-tune the blendshape model based on a generic prior (Kumar et al., 2019). To deal with transferring blendshape weights across different expression spaces, Kim _et al._ proposed a method that animated rendered images in the 2D domain and then estimated blendshape weights from the retargeted images (Kumar et al., 2019), which is similar to our proposed framework but can only drive a particular set of avatars due to its reliance on blendshapes. Facial rigging is another widely used technique that seeks to build motion controls and animate the target character (Kumar et al., 2019). To some extent, blendshape weights can be seen as a kind of control rig. Several neural approaches have been proposed to estimate facial rigs from animation using neural networks (Beng et al., 2017; Wang et al., 2018; Wang et al., 2018), which enables motion transfer across characters. However, both blendshape-based methods and facial rigging techniques still suffer from their high configuration costs and lack of standard criteria in production. In our framework, we aim to model the facial motion of RGBD frames via 2D facial animation methods. This approach eliminates the need for blendshapes or facial rigs, reducing the configuration cost while maintaining satisfactory performance. **Data-driven 3D Facial Retargeting.** Facial retargeting techniques have significantly advanced with the development of neural networks. The Variational Auto Encoder (VAE) (Kumar et al., 2019) has been introduced to disentangle facial identity and expression in the latent space, allowing for the transfer of facial motion (Kumar et al., 2019; Wang et al., 2018). Recently, Zhang _et al._ (Zhang et al., 2018) proposed training character-specific VAE models to transfer characters' expressions across different domains. Most current studies on neural facial retargeting methods are based on the 3D morphable model (3DMM), which isolates identity and expressions (Wang et al., 2018; Wang et al., 2018). Moser _et al._ (Moser et al., 2019) inspired our work by proposing to treat 3D facial retargeting as 2D face swapping between the actor and the target character. They animated the rendered images using an unsupervised image-to-image translation model and then regressed the 3DMM parameters from the animated images. However, these methods typically require a large amount of paired high-accuracy 3D facial data, which is difficult to capture. Additionally, 3DMM-based methods suffer from a lack of expressiveness due to their linear nature. In our work, we propose to train our method with RGBD videos, which can be captured easily using a single Azure Kinect V2 camera. Our method eliminates the need for paired 3D facial data and allows arbitrary deformation by warping the facial mesh with an estimated optical flow field. **2D Facial Animation.** Generating 2D facial animation, also known as face reenactment, has seen rapid progress due to advancements in deep learning. To facilitate the image-to-image translation model for facial reenactment, researchers have introduced facial structure representations as prior knowledge, such as facial landmarks (Sang et al., 2017; Wang et al., 2018; Wang et al., 2019), semantic label maps (Wang et al., 2019; Wang et al., 2019) and optical flows (Wang et al., 2019; Wang et al., 2019). 
However, such semantic labels for supervised learning are usually difficult to access for training and inference. A self-supervised motion transfer approach, i.e., the first-order motion model, was introduced to learn facial keypoint transformations from raw videos and warp the source image with the estimated dense optical flow (Wang et al., 2019). Based on the first-order motion model, Hong _et al._ proposed to recover facial depth images in a self-supervised manner and leverage the depth information to generate 2D facial animation (Wang et al., 2019). Wang _et al._ proposed a novel method called LIA to drive still 2D images via latent space navigation, which eliminates the need for explicit structure representations like keypoints and can discover high-level motion transformations in latent space (Wang et al., 2019). However, there remains a gap between 2D face reenactment and 3D facial retargeting, since most methods treat 3D information as prior knowledge, and few focus on how to transfer facial motion in the 3D mesh. We bridge the gap by incorporating the depth information from RGBD videos and modeling facial motion in the depth field using the depth motion dictionary to generate animated RGBD frames and subsequently deform the facial mesh. ## 3. Methodology We aim to animate the source 3D facial mesh \(\mathcal{S}\) of a target avatar based on the facial motion from a raw RGBD video \(\mathcal{D}\) captured by Azure Kinect V2. To achieve this, our proposed end-to-end framework consists of the RGBD animation module and the mesh retarget module, as depicted in Fig. 1. **The RGBD animation module** is designed to model facial motion extracted from the driving frame \(\mathbf{D}\) of video \(\mathcal{D}\) and transfer it to the rendered image \(\mathbf{S}\) from mesh \(\mathcal{S}\). Additionally, the RGBD animation module estimates a dense optical flow field \(\Phi\), which can be used to establish correspondence between the source image and the animated frame. The estimated dense flow \(\Phi\) will be utilized in the mesh retarget module. **The mesh retarget module** deforms the source mesh \(\mathcal{S}\) with detected facial landmarks as controllers and geodesic-distance-based controlling weights. The transformations of controllers are estimated using dense flow field \(\Phi\) from the RGBD animation module and then are mapped to 3D world space with the animated depth frames. Finally, the transformations are blended to generate the desired 3D facial animation frame by frame. In the following, we will introduce the two modules in detail.

Figure 1. An overview of our proposed framework VFA. We generate 3D facial animation with the source mesh and captured RGBD video as input in an end-to-end manner. Our model consists of an RGBD animation module and a mesh retarget module. The RGBD animation module encodes source image \(\mathbf{S}\) to \(z_{S\to R}\) and encodes facial motion from driving frame \(\mathbf{D}\) to \(w_{R\to D}\) using the motion dictionary \(\mathcal{D}_{m}\). With the composed latent code \(z_{S\to D}\), the RGBD animation module generates the driven RGBD frame and estimates a dense optical flow field \(\Phi\), which can be used to warp the source image. The mesh retarget module then warps the source mesh \(\mathcal{S}\) utilizing information from the animated RGBD pair and dense flow \(\Phi\) to generate 3D facial animation.

### 3.1. RGBD Animation Module #### 3.1.1. Encoder The RGBD animation module is inspired by the LIA method proposed by Wang _et al._ (Wang et al., 2019). It utilizes an auto-encoder structure and consists of an encoder and two generators. The encoder encodes images to latent codes, and the RGBD generator decodes these codes and generates animated RGBD frames coarse-to-fine. Furthermore, the flow generator estimates dense optical flow fields which will be utilized in the mesh retarget module. In the following section, we proceed to discuss them comprehensively. The encoder is designed to learn a latent code \(z_{S\to D}\) to represent the motion transformation from \(\mathbf{S}\) to \(\mathbf{D}\). However, as Wang _et al._ (Wang et al., 2019) point out, it is challenging to learn \(z_{S\to D}\) directly from the input image pair, as the model needs to model the direction and norm of the vector \(z_{S\to D}\) simultaneously. To overcome this challenge, we assume there exists a reference frame \(\mathbf{R}\) so that the motion transformation from \(\mathbf{S}\) to \(\mathbf{D}\) can be decomposed as \(\mathbf{S}\rightarrow\mathbf{R}\rightarrow\mathbf{D}\). This allows us to learn the transformations \(\mathbf{S}\rightarrow\mathbf{R}\) and \(\mathbf{R}\rightarrow\mathbf{D}\) independently and then compose them to represent \(\mathbf{S}\rightarrow\mathbf{D}\). We model \(z_{S\to D}\) as the target point in the latent space, which can be reached from the source point \(z_{S\to R}\) along a path \(w_{R\to D}\) in the latent space. Mathematically, the latent code \(z_{S\to D}\) can be decomposed as \(z_{S\to D}=z_{S\to R}+w_{R\to D}\). To ensure that the learned latent codes are in the same latent space, we utilize a single encoder to encode the source image and the driving image. As depicted in Fig. 1, the encoder encodes the source image and the driving image as \(z_{S\to R}\) and \(z_{D\to R}\) respectively. To extract high-level motion information from \(z_{D\to R}\), we propose to encode motion via Linear Motion Decomposition (Sutton et al., 2017). Specifically, we introduce a learnable orthogonal basis called the motion dictionary \(D_{m}\). Each vector of the motion dictionary represents a direction \(\mathbf{d}_{i}\) of the motion space. \(z_{D\to R}\) is mapped to a magnitude vector \(A_{R\to D}\) by an MLP layer. Then, the latent path \(w_{R\to D}\) is obtained by linearly combining the magnitude vector \(A_{R\to D}\) with the basis vectors \(\mathbf{d}_{i}\) from the motion dictionary \(D_{m}\). With the latent code \(z_{S\to R}\) learned from the source image \(\mathbf{S}\) and the latent path \(w_{R\to D}\) extracted from the driving frame \(\mathbf{D}\), we can obtain \(z_{S\to D}\) which represents the transformation \(\mathbf{S}\rightarrow\mathbf{D}\). #### 3.1.2. Generator We proceed to introduce the RGBD generator and the flow generator respectively. The general architecture of the RGBD generator is depicted in Fig. 2, which consists of the flow and refinement networks. To learn multi-scale features, the generator employs a 6-level pyramid architecture and uses skip connections between layers. The _StyleConv_ (Sutton et al., 2017) layer is introduced to decode \(z_{S\to D}\) and estimate multiple levels of optical flow fields \(\{\phi_{i}\}_{1}^{6}\). These optical flow fields \(\{\phi_{i}\}_{1}^{6}\) are then used to warp the feature maps \(x_{i}^{enc}\) from the corresponding level of the source encoder. However, as Siarohin et al. (2017) pointed out, the occluded parts of the source image \(\mathbf{S}\) cannot be recovered by simply warping the image. 
Consequently, we propose to estimate the masks \(\{m_{i}\}_{1}^{6}\) along with \(\{\phi_{i}\}_{1}^{6}\). The masks are utilized to mark the occluded regions to be inpainted in the refinement network. In this way, the transformed feature map is formulated as: \[x_{i}^{\prime}=m_{i}\odot f_{w}(x_{i}^{enc},\phi_{i}),\] where \(f_{w}\) represents the backward warping function. The estimated optical flow \(\{\phi_{i}\}_{1}^{6}\) provides the pixel-wise correspondence between the source and warped images in the 2D domain, but it is not enough to model motion in the depth field. When an object moves relative to the camera's z-axis, it can alter the pixel values in the depth frame, which cannot be captured by the optical flow \(\{\phi_{i}\}_{1}^{6}\). To address this issue, we introduce the depth motion dictionaries \(\{D_{i}^{depth}\}_{1}^{6}\) to adequately learn motion in the depth field. The basis vectors of the depth motion dictionary represent the direction of the depth motion space. By linearly combining the basis vectors with the predicted magnitude vector \(\beta_{i}^{S\to D}\), we estimate the motion in the depth field. This allows us to obtain the feature map \(x_{i}^{depth}\) for generating accurate depth images. \(x_{i}^{depth}\) can be expressed as: \[x_{i}^{depth}=m_{i}\odot f_{w}(x_{i}^{enc},\phi_{i})+\sum_{j=1}^{M}\beta_{i,j }^{S\to D}\mathbf{d}_{i,j}^{depth},\] where \(M\) denotes the size of the depth motion dictionary \(D_{i}^{depth}\), and \(\mathbf{d}_{i,j}^{depth}\) represents the basis vectors of \(D_{i}^{depth}\). In the refinement network, we adopt a coarse-to-fine approach to generate precise RGBD results. At each layer of the refinement network, we combine the upsampled results from the previous layer with inpainted feature maps to generate images. This iterative process allows us to progressively refine the generated images in a hierarchical manner, capturing finer details and improving the overall visual quality of the outputs. We note that the warp network predicts optical flow fields \(\phi_{i}\) to warp the feature maps \(x_{i}^{enc}\) from the encoder. This poses a challenge for the mesh retarget module in analyzing the flow fields and accurately tracking the movement of controlling points during animation. To address this challenge, we introduce a dense flow generator to generate a dense optical flow field, denoted as \(\Phi\), which represents the pixel-wise correspondence between the input image \(\mathbf{S}\) and the animated image. The dense flow generator is trained through distillation from the original generator, utilizing the warped image from the refinement network and the source image as training data. This training scheme allows the dense flow generator to generate \(\Phi\) without conversion or extra training data. This approach enables the mesh retarget module to track the transformations of controllers and perform mesh retargeting.

Figure 2. An overview of the generator. The generator employs a 6-level pyramid architecture, comprising the warp network and the refinement network. The warp network utilizes _StyleConv_ (Sutton et al., 2017) to estimate optical flow fields \(\{\phi_{i}\}_{1}^{6}\) and masks \(\{m_{i}\}_{1}^{6}\), and warps feature maps \(x_{i}^{enc}\) from the encoder. The depth motion dictionaries are introduced to model motion in the depth field. The refinement network then utilizes convolution layers to generate the animated frames in a coarse-to-fine manner.
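As a rough illustration of the feature composition in Sec. 3.1.2, the following sketch (plain Python/NumPy, with a nearest-neighbour lookup standing in for the differentiable backward warping \(f_{w}\); all names and shapes are ours) composes a masked, warped feature map with the depth motion term.

```python
import numpy as np

def compose_depth_features(x_enc, flow, mask, beta, depth_dict):
    """Sketch of x_depth = m * warp(x_enc, flow) + sum_j beta_j * d_j^depth.
    Illustrative shapes: x_enc (C, H, W), flow (H, W, 2) in pixel offsets,
    mask (1, H, W), beta (M,), depth_dict (M, C)."""
    C, H, W = x_enc.shape
    # backward warping f_w, simplified to nearest-neighbour sampling
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    warped = x_enc[:, src_y, src_x]
    # depth motion term: linear combination of the depth dictionary basis vectors,
    # broadcast over all spatial positions
    depth_term = (beta @ depth_dict).reshape(C, 1, 1)
    return mask * warped + depth_term
```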
### Mesh Retarget Module The design of the mesh retarget module is inspired by linear blend skinning (LBS), the most popular shape deformation algorithm for real-time animation due to its efficiency and simplicity (Srivastava et al., 2015; Wang et al., 2016). Our method modifies the vertex positions while preserving mesh connectivity to achieve accurate and consistent animation results for different target meshes. Furthermore, we determine controlling weights using geodesic distances, which maintain the mesh topology and produce natural results. We use an open-source library, Mediapipe (Madiapipe, 2018), to detect facial landmarks as controllers. These detected landmarks provide rich semantic information facilitating reasonable and transferable blend transformations. Then we compute geodesic distances on the mesh surface between controlling points and mesh vertices. For each mesh vertex, we assign the 10 nearest controllers to determine the controlling weights based on the inverse square of the geodesic distances. We must note that we use geodesic distance instead of Euclidean distance to preserve the mesh topology. Specifically, using geodesic distance as the metric ensures that the upper and lower lip vertices are not mistakenly considered neighbors. Further details on the comparison are discussed in Section 4.5. When generating animation frame by frame, we analyze the flow generator's dense flow field \(\Phi\) to estimate controller transformations. However, these estimated transformations are in the 2D screen space, while the source mesh \(\mathcal{S}\) exists in 3D world space. The transformations cannot be directly aggregated to deform the mesh. Therefore, to map the transformations to 3D space, we estimate the position of controller \(v_{j}\) utilizing the depth of its corresponding pixel and unproject it to the 3D world space using the perspective matrix. We formulate this process as follows: \[\mathbf{v}_{j}=\mathbf{P}^{-1}(v_{j}.x,v_{j}.y,d(v_{j}),1)^{T},\] where \(\mathbf{P}\) denotes the perspective matrix, and \(d(v_{j})\) denotes the depth value of \(v_{j}\)'s corresponding pixel in the image. We then track the movement of the controllers with the dense optical flow field \(\Phi\) and estimate controller transformations in 3D world space. The deformed vertex positions can be calculated by linearly combining the transformations with the controlling weights. The mesh retarget module plays a critical role in our proposed framework, bridging the 2D image animation problem and the 3D facial retargeting problem. By utilizing geodesic-determined controlling weights and incorporating depth information from generated frames, this module enables direct warping of the source mesh \(\mathcal{S}\) without blendshapes or rigs. This integration simplifies retargeting facial motion to avatars, making our proposed framework a cost-efficient solution for creating realistic 3D facial animation. 
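As a rough illustration of the two ingredients just described, the sketch below computes inverse-square geodesic controlling weights restricted to the 10 nearest controllers and unprojects a controller pixel into 3D world space. It is a minimal NumPy sketch rather than the authors' code: the per-vertex weight normalization and the homogeneous divide after applying \(\mathbf{P}^{-1}\) are assumed conventions, and the geodesic distances are presumed to come from an external mesh geodesic solver.

```python
import numpy as np

def controlling_weights(geodesic_dist, k=10):
    """Inverse-square geodesic weights, keeping the k nearest controllers.

    geodesic_dist: (V, L) geodesic distances from each mesh vertex to each
    detected landmark (controller). Returns a (V, L) weight matrix that is
    zero outside the k nearest controllers of every vertex.
    """
    V, L = geodesic_dist.shape
    weights = np.zeros((V, L))
    nearest = np.argsort(geodesic_dist, axis=1)[:, :k]
    for v in range(V):
        d = geodesic_dist[v, nearest[v]]
        w = 1.0 / np.maximum(d, 1e-8) ** 2      # inverse square of geodesic distance
        weights[v, nearest[v]] = w / w.sum()    # per-vertex normalization (assumed)
    return weights

def unproject(px, py, depth, P_inv):
    """Lift a controller pixel (px, py) with depth d(v_j) to 3D via P^{-1}."""
    p = P_inv @ np.array([px, py, depth, 1.0])
    return p[:3] / p[3]                         # homogeneous divide (assumed)
```

In the spirit of LBS, each deformed vertex position is then the weight-blended combination of the unprojected controller transformations assigned to it.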
### Training Losses In the training stage, the RGBD animation module is trained in a self-supervised manner to reconstruct the driving frame \(\mathbf{D}\). To further enhance the robustness and performance of our model, we fine-tune the RGBD module based on the weights of LIA pretrained on the VoxCeleb (VoxCeleb, 2017) dataset. Four losses are used to train the RGBD module: a reconstruction loss \(\mathcal{L}_{rec}\), a perceptual loss \(\mathcal{L}_{vgg}\), a smooth loss \(\mathcal{L}_{sm}\) and a structure preserve loss \(\mathcal{L}_{sp}\). \(\mathcal{L}_{rec}\) is calculated using the \(\mathcal{L}_{1}\) distance, while the perceptual loss \(\mathcal{L}_{vgg}\), proposed by Johnson _et al._ (Johnson et al., 2017), is calculated on multi-scale feature maps extracted from the pre-trained VGG-19 network (Krizhevsky et al., 2017). To improve the quality of the generated depth images, we design two depth-related losses: the smooth loss \(\mathcal{L}_{sm}\) and the structure preserve loss \(\mathcal{L}_{sp}\). The smooth loss is designed based on the Laplacian operator to improve the smoothness of \(\hat{\mathbf{D}}\): \[\mathcal{L}_{sm}=\mathbb{E}\big{\|}\nabla^{2}\mathbf{D}-\nabla^{2}\hat{\mathbf{D}}\big{\|}_{2},\] where \(\nabla^{2}\mathbf{D}\) denotes the Laplacian operator: \(\nabla^{2}\mathbf{D}(x,y)=\mathbf{D}(x+1,y)+\mathbf{D}(x-1,y)+\mathbf{D}(x,y+1)+\mathbf{D}(x,y-1)-4\mathbf{D}(x,y)\). Furthermore, to preserve the original geometric structure and the depth discontinuity along the edges in depth frames, we introduce the structure preserve loss \(\mathcal{L}_{sp}\) proposed by Jeon _et al._ (Jeon et al., 2017): \[\mathcal{L}_{sp}=\mathbb{E}_{p}\Big{\|}\max_{q\in\Omega(p)}\big{|}\nabla\mathbf{D}(q)\big{|}-\max_{q\in\Omega(p)}\big{|}\nabla\hat{\mathbf{D}}(q)\big{|}\Big{\|}_{2},\] where \(\Omega(p)\) denotes a local region in the neighborhood of \(p\), and \(\nabla\mathbf{D}\) denotes the gradient calculated as \(\nabla_{x}\mathbf{D}(x,y)=\mathbf{D}(x+1,y)-\mathbf{D}(x-1,y)\), \(\nabla_{y}\mathbf{D}(x,y)=\mathbf{D}(x,y+1)-\mathbf{D}(x,y-1)\). In practice, we set \(\Omega(p)\) as a \(5\times 5\) window near the pixel \(p\). Our full loss function while training the RGBD animation module is the combination of the four losses discussed above: \[\mathcal{L}=\mathcal{L}_{vgg}+\lambda_{rec}\mathcal{L}_{rec}+\lambda_{sm}\mathcal{L}_{sm}+\lambda_{sp}\mathcal{L}_{sp},\] where we use three user-defined hyperparameters to balance the terms. In practice, these parameters are set as \(\lambda_{rec}=\lambda_{sm}=200,\lambda_{sp}=50\). It is important to note that our method is robust to different hyperparameter settings; consequently, we do not present an ablation study of the combined loss function. ## 4. Experiments ### Experiment Settings **Dataset** Our model is pre-trained on the VoxCeleb (VoxCeleb, 2017) dataset and fine-tuned on the **MMFace4D** dataset proposed by Wu _et al._ (Wu et al., 2019). The MMFace4D dataset is a large-scale facial RGBD video dataset captured by Azure Kinect V2. During training, we selected 191 identities, used 16,549 videos, cropped the facial region, removed the background, resized the frames to \(256\times 256\), and normalized the frames to the range of \([-1,1]\). For testing, we utilized the test dataset from VoxCeleb (VoxCeleb, 2017) and VoxCeleb2 (VoxCeleb, 2017) as well as RGBD videos of 41 unseen identities from MMFace4D. **Baselines** Our proposed method is the first neural approach attempting to create 3D facial animation driven by raw RGBD videos in an end-to-end manner, utilizing estimated optical flow fields to transform mesh vertices and deform the facial mesh. To provide a comprehensive evaluation of our method, we compare it with three state-of-the-art optical-flow-based 2D animation methods: FOMM (Wu et al., 2019), OSFV (Wu et al., 2019) and DaGAN (Dai et al., 2019). To animate RGBD images and drive the 3D mesh under our framework, we modify these methods and train them on the MMFace4D dataset using the loss function formulated in Sec. 3.3. 
These methods are initialized with pre-trained weights on VoxCeleb (VoxCeleb, 2017). ### Evaluation Metrics We evaluate the performance of our model based on: (i) reconstruction fidelity using \(\mathcal{L}_{1}\) and LPIPS metrics, (ii) generated video quality using video FID, (iii) semantic consistency using average keypoint distance (AKD), average Euclidean distance (AED) and cosine similarity (CSIM), and (iv) generated depth image quality using \(\mathcal{L}_{1}\) and \(\mathcal{L}_{sm}\) in Sec. 3.3. These metrics provide us with a comprehensive evaluation of our model. **Video FID** (Wang et al., 2018), derived from the Fréchet inception distance (FID), is a metric that assesses both the visual quality and temporal consistency of the generated videos. A lower video FID indicates a higher quality of the generated videos. In our experiments, we utilize a pre-trained ResNext101 (Hu et al., 2019) model to extract spatiotemporal features and compute video FID as an objective measure of video quality. **AKD** aims to measure the difference between the facial landmarks of the reconstructed frame \(\mathbf{\hat{D}}\) and the real frame \(\mathbf{D}\). We adopt the facial landmark detection method proposed by Bulat and Tzimiropoulos (Bulat and Tzimiropoulos, 2018) and compute the average distance between corresponding landmarks as AKD. **AED and CSIM** (Wang et al., 2018) both evaluate the ability to preserve identity while generating videos. We extracted identity embedding features with ArcFace (Beng et al., 2019), calculated the mean Euclidean distance between the identity embeddings as AED, and the cosine similarity between the embeddings as CSIM. ### Quantitative Analysis To provide a quantitative analysis, we conduct two experiments to evaluate our framework thoroughly: same-identity reconstruction in Sec. 4.3.1 to assess the quality of our reconstruction, and cross-identity motion retargeting in Sec. 4.3.2 to evaluate the motion transfer ability of our approach. \begin{table} \begin{tabular}{l|c c c c c c c|c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{7}{c|}{MMFace4D} & \multicolumn{5}{c}{VoxCeleb} \\ & \(\mathcal{L}_{1}\) & LPIPS & AKD & AED & CSIM & Depth \(\mathcal{L}_{1}\) & \(\mathcal{L}_{sm}\) & \(\mathcal{L}_{1}\) & LPIPS & AKD & AED & CSIM \\ \hline FOMM (Wu et al., 2019) & 10.61 & 0.123 & 2.822 & 0.537 & 0.839 & 0.040 & 0.019 & 12.27 & 0.128 & 2.398 & 0.574 & 0.814 \\ OSFV (Wu et al., 2019) & 9.32 & 0.121 & 2.612 & 0.528 & 0.842 & 0.037 & 0.017 & 11.87 & 0.121 & **2.385** & 0.562 & 0.822 \\ DaGAN (Dai et al., 2019) & 8.13 & 0.104 & 2.502 & 0.529 & **0.844** & 0.036 & 0.015 & 11.77 & 0.122 & 2.542 & 0.570 & 0.820 \\ Ours & **8.04** & **0.104** & **2.312** & **0.440** & **0.844** & **0.030** & **0.015** & **11.26** & **0.119** & 2.475 & **0.570** & **0.820** \\ \hline \hline \end{tabular} \end{table} Table 1. Results of Same-identity Reconstruction. We compared our method with three state-of-the-art methods on two datasets: MMFace4D (Wu et al., 2019) and VoxCeleb (VoxCeleb, 2017). For all metrics except CSIM, the lower, the better. Figure 3. User Study of Motion Retargeting. We asked 20 users to evaluate the generated videos' visual quality and semantic consistency with the driving video. The score is in the range of 1-5, and a higher score denotes better quality. #### 4.3.1. Same-identity Reconstruction In this experiment, we aim to evaluate the reconstructing ability of our method. 
For simplicity, we focus on evaluating the quality of RGBD animation, as it directly affects the quality of mesh retargeting in our framework. We used the first frame as the source image (**S**) and the remaining frames as driving frames (**D**) to reconstruct the video. We conducted this experiment on the MMFace4D dataset and the VoxCeleb test set and reported the results in Tab. 1. As Tab. 1 shows, our method achieves the best performance across all the metrics. Compared with the three baseline methods, our method achieves the highest reconstruction fidelity on both datasets, particularly for depth frames. This result further validates the effectiveness of the depth motion dictionaries proposed in this paper. While FOMM (Krizhevsky et al., 2015) and OSFV (Zhu et al., 2017) treat the depth information as a simple image channel, and DaGAN (Dai et al., 2017) fails to model motion in the depth field, our method excels in depth reconstruction due to the depth motion dictionaries. Furthermore, our method achieves the highest scores in AKD, AED, and CSIM, indicating its ability to transfer motion while preserving the identity of the source character. These results highlight the strength of our multi-level flow-based generator. #### 4.3.2. Cross-identity Motion Retargeting In this experiment, we aim to assess the motion transfer ability of our method. Specifically, we used source images (**S**) and driving frames (**D**) from different video sequences, which differs from Sec. 4.3.1. We designed three tasks: driving source images from VoxCeleb2 with videos from VoxCeleb2 (Vox2\(\rightarrow\)Vox2), driving source images from MMFace4D with videos from VoxCeleb2 (MM\(\rightarrow\)Vox2), and driving source images from MMFace4D with videos from MMFace4D (MM\(\rightarrow\)MM). It is important to note that all the images and videos used here were unseen during the training of our models, ensuring a fair evaluation. As the ground truth animation videos were unavailable, we used video FID (Zhu et al., 2017) to assess our generated videos' visual quality and temporal consistency. We randomly selected 2200 source images and driving video clips for each task to generate retargeted videos. These videos were then downsampled to the resolution of \(112\times 112\) and randomly cut to 32 frames. We computed video FID by calculating the distance between the generated data and the real data distributions sampled from the source dataset. The results are presented in Tab. 2. Our method consistently outperforms the other methods regarding video FID for all tested tasks, demonstrating superior motion transfer ability. To provide an intuitive demonstration of the performance of the four methods, we show some transferred RGBD results in Fig. 4. FOMM produced some artifacts, such as a puffy face, while OSFV generated noisy results in color and depth frames. Although DaGAN transferred the facial motion better, it did not preserve the identity well. In contrast, our method generated the most natural and clearest color frames and the cleanest and smoothest depth frames, achieving the best performance in transferring the facial motion of RGBD frames. Figure 4. RGBD results of cross-identity motion retargeting. The first column shows the source images, while the second column shows the driving frames. The following columns show the transferred results of FOMM (Krizhevsky et al., 2015), OSFV (Zhu et al., 2017), DaGAN (Dai et al., 2017), and our method, respectively. To further compare the effectiveness of our method with the baseline methods, we conducted a user study. Each participant was asked to evaluate and rate the videos generated by the methods. Specifically, we randomly generated groups of videos. Each video group contained a video generated by our method and three videos generated by the three baselines. These video groups and their corresponding driving videos were presented to 20 human raters. The raters were asked to evaluate the videos' visual quality and semantic consistency. As reported in Fig. 3, our method obtained the highest scores, which means that our method generates the most realistic videos while transferring facial motion from the driving videos. Notably, the three baselines rely on facial keypoint transformations, so their performance may be affected by the accuracy of the keypoint detector. However, our method captures facial motion by hierarchical motion dictionaries and generates RGBD frames coarse-to-fine, which facilitates realistic motion retargeting. Figure 5. Qualitative results from our method. The leftmost column displays the driving frames. The subsequent columns exhibit three target characters: a woman, a child, and an alien. The top row shows the source meshes. More results are presented in the supplementary material. ### Qualitative Analysis In Figure 5, we present qualitative examples of our proposed method. Specifically, we recorded an RGBD video using the Azure Kinect V2 to drive the facial expressions of three target characters: a woman, a child, and an alien. Despite the dissimilarity between the actor and the target characters, our method generated impressive results and accurately retargeted facial motion, particularly the motion of the mouth, and transferred micro-expressions, such as eye-widening, squinting, and mouth stretching. Furthermore, our method demonstrated impressive ability in animating the alien avatar, which had a significantly different appearance from the actor and was not seen during the training phase. However, expressions such as rolling eyes, gazing, and sticking out the tongue could not be transferred to the target character, as the target mesh did not model eyes and tongue separately. Overall, our results demonstrate the potential of our method as a novel solution for generating 3D facial animation. ### Ablation Study #### 4.5.1. Controlling Weights Calculation As discussed in Section 3.2, using geodesic distances to calculate controlling weights is crucial for producing accurate retargeted results. To further verify this assertion, we present a case study. When controlling weights are calculated using Euclidean distance, artifacts such as the wave-lip artifact can occur when the mouth is open, as illustrated in Fig. 6(a). This is because the movement of controlling points from the lower lip heavily influences the vertices of the upper lip. However, calculating blend weights using geodesic distance can avoid such artifacts, as the controlling points of the lower lip will not be considered neighbors of the vertices in the upper lip. Therefore, our method generates more natural results, as shown in Fig. 6(b). #### 4.5.2. Depth Motion Dictionary We provide an in-depth analysis of our design of the depth motion dictionaries \(\{D_{i}^{depth}\}_{1}^{6}\) in the generator, as discussed in Sec. 3.1.2. We focus on whether introducing \(D_{i}^{depth}\) benefits the generation of RGBD frames and determine the optimal number of basis vectors that \(D_{i}^{depth}\) requires. Here we performed the same task as discussed in Sec. 
4.3.1, and reported the reconstruction fidelity metrics, _i.e._, \(\mathcal{L}_{1}\) and LPIPS. As shown in Tab. 3, the depth motion dictionary \(D_{i}^{depth}\) indeed benefits the reconstruction ability of our method, especially in terms of depth image generation. Notably, when the size of \(D_{i}^{depth}\) is set to 5, our model achieves the best reconstruction results, which indicates that a few basis transformations can represent the depth motion space. Thus, a small depth motion dictionary is sufficient to model facial motion in the depth field. ## 5. Conclusion In this paper, we propose a novel self-supervised framework, Versatile Face Animator, for transferring facial motion from captured RGBD videos to 3D facial meshes to create 3D facial animation. Our framework comprises two modules: a flow-based RGBD animation module that animates RGBD frames with hierarchical motion dictionaries and a mesh retarget module that performs 3D facial retargeting using blend transformations. Our end-to-end approach eliminates the need for labor-intensive and time-consuming blendshape-based methods or facial rigging techniques. Extensive experiments demonstrate that our framework is a promising and cost-efficient solution for generating 3D facial animation compared with the existing literature. However, there are still some limitations to our method. The RGBD animation module may not perform well in some occluded cases, and more training data may be required to improve retargeting performance for unseen avatars. Additionally, the estimation of the controller transformations and the accuracy of the generated depth frames significantly influence the realism of the retargeted mesh. In future work, we plan to focus on improving the quality of generated RGBD frames and the versatility of our framework for 3D facial animation production. We believe that the simplicity, efficiency, and versatility of our framework are crucial steps toward the future of the metaverse. ## 6. Acknowledgements This work is supported by the National Key R&D Program of China under Grant No. 2021QY1500, the State Key Program of the National Natural Science Foundation of China (NSFC) (No.61831022). \begin{table} \begin{tabular}{l|c c c|c c} \hline \hline \multirow{2}{*}{Size of \(D_{i}^{depth}\)} & \multicolumn{3}{c|}{MMFace4D} & \multicolumn{2}{c}{VoxCeleb} \\ & \(\mathcal{L}_{1}\) & LPIPS & Depth \(\mathcal{L}_{1}\) & \(\mathcal{L}_{1}\) & LPIPS \\ \hline 0 & 8.64 & 0.118 & 0.043 & 11.42 & 0.129 \\ 5 & **8.04** & **0.104** & **0.030** & **11.26** & **0.119** \\ 10 & 8.71 & 0.116 & 0.036 & 11.69 & 0.126 \\ 20 & 8.73 & 0.119 & 0.035 & 11.35 & 0.127 \\ \hline \hline \end{tabular} \end{table} Table 3. Ablation Study on Depth Motion Dictionary. Figure 6. Comparison of mesh retargeting results. (a) Wave-lip artifacts are caused by using Euclidean distance to calculate blend weights \(w_{i,j}\). (b) More natural results are obtained with our method using geodesic distance to determine \(w_{i,j}\).
2307.00282
A nontopological soliton in an $\mathcal{N} = 1$ supersymmetric gauge Abelian model
A version of $\mathcal{N} = 1$ supersymmetric scalar electrodynamics is considered here, and it is shown that an electrically charged nontopological soliton exists in this model. In addition to the long-range electric field, the soliton also possesses a long-range scalar field, which leads to a modification of the intersoliton interaction potential at large distances. The supersymmetry of the model makes it possible to express fermionic zero modes of the soliton in terms of bosonic fields. The properties of the nontopological soliton are investigated using analytical and numerical methods.
A. Yu. Loginov
2023-07-01T09:32:29Z
http://arxiv.org/abs/2307.00282v1
# A nontopological soliton in an \(\mathcal{N}=1\) supersymmetric gauge Abelian model ###### Abstract A version of \(\mathcal{N}=1\) supersymmetric scalar electrodynamics is considered here, and it is shown that an electrically charged nontopological soliton exists in this model. In addition to the long-range electric field, the soliton also possesses a long-range scalar field, which leads to a modification of the intersoliton interaction potential at large distances. The supersymmetry of the model makes it possible to express fermionic zero modes of the soliton in terms of bosonic fields. The properties of the nontopological soliton are investigated using analytical and numerical methods. keywords: nontopological soliton, electric charge, supersymmetry, fermionic zero modes + Footnote †: journal: Physics Letters B ## 1 Introduction Many models of field theory have solutions that describe spatially localised and nonspreading field configurations with a finite energy [1; 2]. Nontopological solitons [3] represent one of these field configurations. A necessary condition for the existence of a nontopological soliton is the symmetry of the corresponding field model, which may be both global and local. In addition, the interaction potentials of the model must meet a certain condition [4; 5]. The symmetry of the model results in the existence of a conserved Noether charge. The field configuration of a nontopological soliton is an extremum (minimum or saddle point) of the energy functional at a fixed value of the Noether charge, and this basic property largely determines the other properties of a nontopological soliton; in particular, it leads to the characteristic time dependence \(\exp\left(-i\omega t\right)\) of a soliton field. Nontopological solitons may be formed during a primordial phase transition, thus making a contribution to various scenarios of the evolution of the early Universe [6]. Furthermore, they may play an essential role in baryogenesis via the Affleck-Dine mechanism [7], and are considered to be places where dark matter may be concentrated [8]. Some field models with local Abelian symmetry admit the existence of electrically charged nontopological solitons. First described in Refs. [9; 10], they have since been investigated in many other works (see, e.g., Refs. [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]). The properties of electrically charged solitons differ significantly from those of solitons without an electric charge; in particular, the electric charge and the energy of a nontopological soliton cannot be arbitrarily large in the general case [10; 20]. In addition, an electrically charged nontopological soliton can exist only if the gauge coupling constant does not exceed some maximum value [10]. The main goal of this work is to study a nontopological soliton in a version of \(\mathcal{N}=1\) supersymmetric scalar electrodynamics. The interaction potential of this model is expressed in terms of a superpotential, which leads to relations between the nonlinear interaction constants. In addition, the superpotential largely determines the form of the scalar-fermion interaction. The requirements of renormalisability and gauge invariance impose severe restrictions on the form of the superpotential, all of which significantly reduces the number of model parameters compared to the nonsupersymmetric case. Throughout this paper, we use the natural units \(c=1\), \(\hbar=1\). The metric tensor and the Dirac matrices are defined according to Ref. [24]. 
## 2 Lagrangian and field equations of the model The \(\mathcal{N}=1\) supersymmetric gauge model under consideration includes three left-chiral matter superfields \(\Phi_{-1}\), \(\Phi_{0}\), and \(\Phi_{+1}\), and one Abelian gauge superfield \(V\). The left-chiral superfield \(\Phi_{n}\) contains two components: the complex scalar field \(\phi_{n}\) and the left-hand Dirac spinor field \(\psi_{nL}\). Written in the Wess-Zumino gauge, the gauge superfield \(V\) also contains two components: the Abelian gauge field \(A_{\mu}\) and the Majorana spinor field \(\lambda\). The superfields \(\Phi_{n}\) and \(V\) also contain auxiliary fields, but these can be expressed in terms of the above mentioned physical fields. The Lagrangian of the model takes the form \[\mathcal{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\sum_{n}\left(D_{\mu} \phi_{n}\right)^{*}D^{\mu}\phi_{n}-V\left(\phi\right)\] \[-\frac{1}{2}\bar{\lambda}\gamma^{\mu}\partial_{\mu}\lambda-\sum_ {n}\overline{\psi_{nL}}\gamma^{\mu}D_{\mu}\psi_{nL}\] \[-\frac{1}{2}\sum_{nm}\left\{f_{nm}\left(\psi_{nL}^{\mathrm{T}} \epsilon\psi_{mL}\right)+f_{nm}^{*}\left(\psi_{nL}^{\mathrm{T}}\epsilon\psi_ {mL}\right)^{*}\right\}\] \[+i\sqrt{2}\sum_{n}q_{n}\left\{\phi_{n}\left(\overline{\psi_{nL}} \lambda\right)-\phi_{n}^{*}\left(\bar{\lambda}\psi_{nL}\right)\right\}. \tag{1}\] In Eq. (1), the matrix \(\epsilon=-i\gamma_{0}\gamma_{2}\gamma_{5}\), the Latin indices \(n\) and \(m\) run over the set \([-1,0,1]\), and the covariant derivatives \[D_{\mu}\phi_{n} =\partial_{\mu}\phi_{n}-iq_{n}A_{\mu}\phi_{n}, \tag{2a}\] \[D_{\mu}\psi_{n} =\partial_{\mu}\psi_{n}-iq_{n}A_{\mu}\psi_{n}, \tag{2b}\] where \(q_{n}=ne\) are the Abelian charges of the left-chiral superfield \(\Phi_{n}\). To avoid \(U(1)\)-\(U(1)\)-\(U(1)\) and \(U(1)\)-graviton-graviton anomalies, the sum of the \(U(1)\) quantum numbers of all left-chiral superfields and the sum of their cubes should vanish, which is obviously true in our case. The field-dependent coefficients \(f_{nm}\) and the interaction potential \(V\left(\phi\right)\) are expressed in terms of the superpotential \[f\left(\phi\right)=m\phi_{-1}\phi_{+1}+g\phi_{-1}\phi_{0}\phi_{+1}, \tag{3}\] where \(m\) is a mass parameter and \(g\) is a coupling constant. The coefficients \(f_{nm}=\partial^{2}f/\partial\phi_{n}\partial\phi_{m}\), and the interaction potential \[V\left(\phi\right) =\sum_{n}\left|\partial f/\partial\phi_{n}\right|^{2}+\frac{1}{2} \Bigl{(}\sum_{n}q_{n}\phi_{n}^{*}\phi_{n}\Bigr{)}^{2}\] \[=\left|m+g\phi_{0}\right|^{2}\left(\left|\phi_{-1}\right|^{2}+ \left|\phi_{+1}\right|^{2}\right)\] \[+g^{2}\left|\phi_{-1}\right|^{2}\left|\phi_{+1}\right|^{2}\] \[+\frac{e^{2}}{2}\left(\left|\phi_{+1}\right|^{2}-\left|\phi_{-1} \right|^{2}\right)^{2}. 
\tag{4}\] The field equations of model (1) have the form \[\partial_{\mu}F^{\mu\nu}=j^{\nu}, \tag{5}\] \[D_{\mu}D^{\mu}\phi_{n}-\frac{\partial V}{\partial\phi_{n}^{*}}- \frac{1}{2}\sum_{k^{\prime}m^{\prime}}f_{k^{\prime}m^{\prime}n}^{*}\left(\psi _{k^{\prime}L}^{\mathrm{T}}\epsilon\psi_{m^{\prime}L}\right)^{*}\] \[-i\sqrt{2}q_{n}\left(\overline{\lambda}\psi_{nL}\right)=0, \tag{6}\] \[D\!\!\!/\psi_{nL}-\sum_{m^{\prime}}f_{nm^{\prime}}^{*}\epsilon \left(\overline{\psi_{m^{\prime}L}}\right)^{\mathrm{T}}-i\sqrt{2}q_{n}\phi_{n} \lambda_{R}=0,\] (7) \[\partial\!\!\!/\lambda+i\sqrt{2}\sum_{m^{\prime}}q_{m^{\prime}} \left\{\phi_{m^{\prime}}\epsilon\left(\overline{\psi_{m^{\prime}L}}\right)^{ \mathrm{T}}+\phi_{m^{\prime}}^{*}\psi_{m^{\prime}L}\right\}=0, \tag{8}\] where the coefficients \(f_{kmn}=\partial^{3}f/\partial\phi_{k}\partial\phi_{m}\partial\phi_{n}\) and the electromagnetic current \[j^{\nu}=i\sum_{n}q_{n}\phi_{n}^{*}\overleftrightarrow{D^{\nu}}\phi_{n}-i\sum_ {n}q_{n}\overline{\psi_{nL}}\gamma^{\nu}\psi_{nL}. \tag{9}\] Later on, we shall also need the expression for the energy density of an electrically charged bosonic field configuration of the model \[\mathcal{E} =\frac{1}{2}E_{i}E_{i}+\sum_{n}\left\{\left(D_{t}\phi_{n}\right)^{ *}D_{t}\phi_{n}\right.\] \[+\left(D_{i}\phi_{n}\right)^{*}D_{i}\phi_{n}\right\}+V\left(\phi \right), \tag{10}\] where \(E_{i}=F_{i0}\) are the components of the electric field strength. ## 3 Ansatz and some properties of the nontopological soliton The model (1) can be viewed as the Abelian gauge version of a model of the Wess-Zumino type [25]. In Ref. [26], it was shown that for superpotentials of the type in Eq. (3), these models admit the existence of nontopological solitons. It follows from continuity considerations that nontopological solitons can also exist in gauge model (1), at least for sufficiently small values of the gauge coupling constant \(e\). Let us define the shifted field \(\varphi_{0}\left(\mathbf{x},t\right)=mg^{-1}+\phi_{0}\left(\mathbf{x},t\right)\). To find a nontopological soliton solution, we shall use the spherically symmetrical ansatz: \[\phi_{+1}\left(\mathbf{x},t\right) =2^{-\frac{1}{2}}\exp\left(-i\omega t\right)f_{+1}\left(r\right), \tag{11a}\] \[\phi_{-1}\left(\mathbf{x},t\right) =2^{-\frac{1}{2}}\exp\left(i\omega t\right)f_{-1}\left(r\right),\] (11b) \[\varphi_{0}\left(\mathbf{x},t\right) =2^{-\frac{1}{2}}\left(\chi_{1}\left(r\right)+i\chi_{2}\left(r \right)\right)\] (11c) \[A^{\mu}\left(\mathbf{x},t\right) =\left(\Phi\left(r\right),\,0\right). \tag{11d}\] The energy density (10), written in terms of the ansatz functions (11), takes the form \[\mathcal{E} =\frac{1}{2}\Omega^{2}\left(f_{-1}^{2}+f_{+1}^{2}\right)+\frac{1 }{2}\Phi^{\prime 2}\] \[+\frac{1}{2}\left(f_{-1}^{\prime 2}+f_{+1}^{\prime 2}+\chi_{1}^{ \prime 2}+\chi_{2}^{\prime 2}\right)+V, \tag{12}\] where the interaction potential \[V =\frac{g^{2}}{4}\left(f_{-1}^{2}+f_{+1}^{2}\right)\left(\chi_{1}^ {2}+\chi_{2}^{2}\right)\] \[+\frac{g^{2}}{4}f_{-1}^{2}f_{+1}^{2}+\frac{e^{2}}{8}\left(f_{+1}^ {2}-f_{-1}^{2}\right)^{2}, \tag{13}\] the function \(\Omega\left(r\right)=\omega-e\Phi\left(r\right)\), and the prime indicates the derivative with respect to \(r\). The Lagrangian density \(\mathcal{L}\) differs from the energy density \(\mathcal{E}\) only in regard to the sign of the terms in the second line of Eq. (12). The electromagnetic current of spherically symmetrical field configuration (11) is \[j^{\nu}=\left(e\Omega\left(f_{-1}^{2}+f_{+1}^{2}\right),0,0,0\right). 
\tag{14}\] Substituting ansatz (11) into the bosonic parts of field equations (5) and (6), we obtain a system of nonlinear differential equations for the ansatz functions: \[\Omega^{\prime\prime}+\frac{2}{r}\Omega^{\prime}-e^{2}\left(f_{-1}^{2}+f_{+1} ^{2}\right)\Omega=0, \tag{15}\] \[f_{\pm 1}^{\prime\prime}+\frac{2}{r}f_{\pm 1}^{\prime}+\frac{\partial U}{ \partial f_{\pm 1}}=0, \tag{16}\] \[\chi_{1,2}^{\prime\prime}+\frac{2}{r}\chi_{1,2}^{\prime}+\frac{\partial U}{ \partial\chi_{1,2}}=0, \tag{17}\] where the effective potential \[U=\frac{1}{2}\Omega^{2}\left(f_{-1}^{2}+f_{+1}^{2}\right)-V. \tag{18}\] The regularity of the soliton field configuration and the finiteness of the soliton energy lead to the following boundary conditions: \[f_{\pm 1}^{\prime}\left(0\right) =0,\qquad f_{\pm 1}\left(r\right)\underset{r\rightarrow\infty}{ \longrightarrow}0, \tag{19a}\] \[\chi_{1,2}^{\prime}\left(0\right) =0,\qquad\chi_{1,2}\left(r\right)\underset{r\rightarrow\infty}{ \longrightarrow}\chi_{1,2\,\mathrm{vac}},\] (19b) \[\Omega^{\prime}\left(0\right) =0,\qquad\quad\Omega\left(r\right)\underset{r\rightarrow\infty}{ \longrightarrow}\omega. \tag{19c}\] The boundary conditions in Eqs. (19b) and (19c) need some explanation. From Eqs. (1) and (4), it follows that the classical vacuum of model (1) is \[F_{\mu\nu}=0,\quad\phi_{\pm 1}=0,\quad\phi_{0}=\phi_{0\,\mathrm{vac}}, \tag{20}\] where \(\phi_{0\,\mathrm{vac}}\) is an arbitrary complex constant. From Eq. (20), it follows that model (1) has an infinite number of vacua at the classical level, as reflected in the boundary condition in Eq. (19b). All of these vacua are invariant under both the \(U(1)\) gauge and \(\mathcal{N}=1\) supersymmetry transformations. According to the non-renormalisation theorems [27, 28], this will also be true when perturbative quantum corrections are taken into account. Eqs. (13), (17), and (18) tell us that \(\chi_{1}\) and \(\chi_{2}\) satisfy the same linear homogeneous differential equation, while Eq. (19b) tells us that \(\chi_{1}\) and \(\chi_{2}\) satisfy the same homogeneous boundary condition at \(r=0\). It follows that the ratio \(\chi_{2}(r)/\chi_{1}(r)\) does not depend on \(r\), and is equal to \(\chi_{2\,\mathrm{vac}}/\chi_{1\,\mathrm{vac}}\). The phase of the ansatz function \(\varphi_{0}(r)=2^{-1/2}(\chi_{1}(r)+i\chi_{2}(r))\) is therefore a constant. However, from Eqs. (12) and (13), it follows that in this case, the energy density and the Lagrangian density do not depend on the phase of \(\varphi_{0}\left(r\right)\). Without loss of generality, we can set this phase (and hence \(\chi_{2}(r)\)) equal to zero. The field configurations of model (1) are determined up to gauge transformations. In particular, the choice of ansatz (11) is equivalent to the choice of the radial gauge. However, this gauge does not fix the soliton field configuration completely; to do this, we need to impose an additional condition \(\Phi(\infty)=0\), which is equivalent to Eq. (19c). The basic property of any non-topological soliton is that it is an extremum of the energy functional \(E\) at a fixed value of some Noether charge \(Q_{N}\) (in our case \(E=4\pi\int_{0}^{\infty}\mathcal{E}(r)r^{2}dr\) and \(Q_{N}=4\pi e^{-1}\int_{0}^{\infty}j^{0}(r)r^{2}dr\)). This property results in the differential relation \[dE/dQ_{N}=\Omega_{\infty}, \tag{21}\] where \(\Omega_{\infty}\equiv\Omega(\infty)=\omega-e\Phi(\infty)=\omega\). Note that a similar relation also holds for the electrically charged magnetic monopoles [29]. Eqs. 
(13) and (18) tell us that the potentials \(V\) and \(U\) are invariant under the permutation \(f_{-1}\leftrightarrow f_{+1}\). It follows that if \(f_{-1}(r)\), \(f_{+1}(r)\), \(\chi_{1}(r)\), and \(\Omega(r)\) is a solution of system (15) - (17), then \(f_{+1}(r)\), \(f_{-1}(r)\), \(\chi_{1}(r)\), and \(\Omega(r)\) is also a solution. Using qualitative research methods for differential equations, it can be shown that the solutions \(f_{-1}(r)\) and \(f_{+1}(r)\) coincide when the gauge coupling constant \(e=0\). In the following, we define the function \(\delta\left(r,e^{2}\right)=f_{+1}\left(r,e^{2}\right)-f_{-1}\left(r,e^{2}\right)\), where the dependence on the gauge coupling constant is explicitly indicated and we use the fact that the potential \(V\) depends on \(e\) only through \(e^{2}\). The function \(\delta\left(r,e^{2}\right)\) satisfies the nonlinear differential equation \[\delta^{\prime\prime}+\frac{2}{r}\delta^{\prime}+\left[\Omega^{2 }+2^{-1}g^{2}\left(f_{-1}^{2}-\chi_{1}^{2}\right)-2e^{2}f_{-1}^{2}\right]\delta\] \[+2^{-1}\left(g^{2}-4e^{2}\right)f_{-1}\delta^{2}-2^{-1}e^{2} \delta^{3}=0, \tag{22}\] where the dependence of \(\delta\), \(\Omega\), \(f_{-1}\), and \(\chi_{1}\) on \(r\) and \(e^{2}\) is omitted. From Eq. (19a), it follows that \(\delta\left(r,e^{2}\right)\) satisfies the boundary conditions \[\delta^{\prime}\left(0,e\right)=0,\quad\delta\left(\infty,e\right)=0. \tag{23}\] Our goal is to find the derivatives \(\delta^{(n)}\equiv\partial^{n}\delta/\partial e^{n}\) at \(e=0\). To do this, we differentiate Eq. (22) with respect to \(e\), and then set \(e=0\). As a result, we obtain the trivial linear equation \(\delta^{(1)\prime\prime}+2r^{-1}\delta^{(1)\prime}=0\). Its solution must satisfy the boundary conditions in Eq. (23) differentiated with respect to \(e\), and it is therefore easy to see that the solution is \(\delta^{(1)}(r,0)=0\). Thus, we have established that \(\delta(r,0)=0\) and \(\delta^{(1)}(r,0)=0\). By continuing to differentiate Eq. (22) with respect to \(e\), setting \(e=0\), and taking into account the previous results at each step, it can be shown that \(\delta^{(n)}(r,0)=0\) for any \(n\geq 0\). It follows that \(\delta\left(r,e^{2}\right)\) vanishes, and hence \(f_{+1}\left(r,e^{2}\right)=f_{-1}\left(r,e^{2}\right)\equiv f\left(r,e^{2}\right)\). We now examine the asymptotics of the soliton fields for large \(r\). Suppose that \(f(r)\) tends to zero exponentially as \(r\to\infty\). In this case, we can neglect the nonlinear terms in Eqs. (15) and (17), and obtain the asymptotic forms of \(\Omega(r)\) and \(\chi_{1}(r)\) as \(r\to\infty\): \[\Omega\sim\omega-\frac{e}{4\pi}\frac{Q}{r}, \tag{24}\] \[\chi_{1}\sim\chi_{1\,\text{vac}}-\frac{1}{4\pi}\frac{Q_{\text{s}}}{r}, \tag{25}\] where \(Q=4\pi\int_{0}^{\infty}j^{0}(r)r^{2}dr\) is the electric charge of the soliton, and \(Q_{s}\) is the scalar charge defined by analogy with the large-distance asymptotics \(\Phi\sim Q/(4\pi r)\) for the electric potential. We see that both \(\Omega=\omega-e\Phi\) and \(\chi_{1}\) tend rather slowly (\(\propto r^{-1}\)) to their limiting values as \(r\to\infty\). It should be noted that nontopological solitons with a long-range scalar field were studied in Refs. [30; 26; 31]. Furthermore, electrically charged nontopological solitons with a long-range scalar field were studied in Refs. [21; 23; 32]. By substituting Eqs. (24) and (25) into Eq. 
(16), retaining the terms linear in \(f(r)\), and solving the resulting differential equation, we obtain the large-distance asymptotics of \(f(r)\) as \[f\left(r\right) \sim f_{\infty}e^{-\Delta r}\left(\Delta r\right)^{\beta}\] \[\times\left(1-\frac{a^{2}}{32\pi^{2}\Delta^{3}r}-\frac{b}{8\pi \Delta^{2}r}\right), \tag{26}\] where \(f_{\infty}\) is a constant, \[\Delta =\left(\omega_{\text{max}}^{2}-\omega^{2}\right)^{1/2}, \tag{27a}\] \[a =e\omega_{\text{max}}\left|Q\right|-g\left|\omega\right|\left|Q_{ \text{s}}\right|,\] (27b) \[b =e\left|\omega\right|\left|Q\right|-g\omega_{\text{max}}\left|Q_{ \text{s}}\right|,\] (27c) \[\beta =-1-b/\left(4\pi\Delta\right), \tag{27d}\] and the parameter \(\omega_{\text{max}}=2^{-1/2}g\left|\chi_{1\,\text{vac}}\right|\). We see that our assumption about the exponential asymptotics of \(f(r)\) turned out to be correct; we also see that the long-range terms in the asymptotics of \(\Omega(r)\) and \(\chi_{1}(r)\) modify the pre-exponential factor in the asymptotics of \(f(r)\). Furthermore, we can conclude that the nontopological soliton cannot exist when \(\left|\omega\right|>\omega_{\text{max}}\), since in this case asymptotics (26) shows oscillating behavior, leading to an infinite energy and charge for the corresponding field configuration. The presence of two long-range fields in Eqs. (24) and (25) leads to a modification of the intersoliton interaction potential at large distances. It can be shown that in the case of large distances and low velocities, the leading term of the intersoliton interaction potential is \[V_{12}=\frac{Q^{(1)}Q^{(2)}-Q_{\text{s}}^{(1)}Q_{\text{s}}^{(2)}}{4\pi r_{12}}, \tag{28}\] where \(Q^{(i)}\) (\(Q_{\text{s}}^{(i)}\)) is the electric (scalar) charge of the \(i\)-th soliton. Eq. (28) tells us that the energy of the intersoliton interaction is the sum of the energies of the Coulomb and scalar interactions. Depending on the signs of \(Q^{(1)}\) and \(Q^{(2)}\), the Coulomb energy may be both positive (repulsion) and negative (attraction). At the same time, it follows from the inhomogeneity of the boundary condition in Eq. (19b) that for the fixed vacuum in Eq. (20), the scalar charges \(Q_{\mathrm{s}}^{(i)}\) of the solitons must have the same sign. Hence, unlike the Coulomb field, the long-range scalar field always leads to attraction between solitons. ## 4 Fermionic zero modes The Lagrangian density (1) is written in the Wess-Zumino gauge, meaning that the corresponding action \(S=\int\mathcal{L}d^{4}x\) is not invariant under the usual \(\mathcal{N}=1\) supersymmetry transformations. However, it will be invariant under the modified supersymmetry transformations [33]: \[\delta\phi_{n} =\sqrt{2}\overline{\alpha_{R}}\psi_{nL}, \tag{29a}\] \[\delta\psi_{nL} =\sqrt{2}\gamma^{\mu}\left(D_{\mu}\phi_{n}\right)\alpha_{R}+ \sqrt{2}\mathcal{F}_{n}\alpha_{L},\] (29b) \[\delta A_{\mu} =\bar{\alpha}\gamma_{\mu}\lambda,\] (29c) \[\delta\lambda =i\mathcal{D}\gamma_{5}\alpha-\frac{1}{4}F_{\mu\nu}\left[\gamma^ {\mu},\gamma^{\nu}\right]\alpha, \tag{29d}\] where \[\alpha=-i\begin{pmatrix}\epsilon_{a}\\ \sum_{b}e_{ab}e_{b}^{*}\end{pmatrix}, \tag{30}\] \[\begin{pmatrix}\epsilon_{1}\\ \epsilon_{2}\end{pmatrix}=\begin{pmatrix}\epsilon_{11}+i\epsilon_{12}\\ \epsilon_{21}+i\epsilon_{22}\end{pmatrix}, \tag{31}\] \[\mathcal{F}_{n}=-\left(\partial f/\partial\phi_{n}\right)^{*}, \tag{32}\] and \[\mathcal{D}=e\left(\phi_{+1}^{*}\phi_{+1}-\phi_{-1}^{*}\phi_{-1}\right). \tag{33}\] In Eq. 
(31), \(\epsilon_{ij}\) are real infinitesimal anticommuting transformation parameters and \(e_{ab}\) is an antisymmetric \(2\times 2\) matrix with \(e_{12}=+1\), from which it follows that \(\alpha\) in Eq. (30) is the Majorana spinor. In Eq. (32), the auxiliary fields \(\mathcal{F}_{n}\) are expressed in terms of superpotential (3), and it is assumed that all the fields in Eqs. (29a)-(29d) satisfy field equations (5)-(8). Fermionic zero modes are generated by the action of transformations (29b) and (29d) on purely bosonic field configuration (11). To represent these in a compact form, we introduce a column \(\Psi\) consisting of four fermionic fields included in the Lagrangian (1). The transposed form of \(\Psi\) is \[\Psi^{\mathrm{T}}=N\left(\psi_{+1L}^{\mathrm{T}},\psi_{0L}^{\mathrm{T}},\psi_ {-1L}^{\mathrm{T}},\lambda^{\mathrm{T}}\right), \tag{34}\] where \[\psi_{\pm 1L}=\begin{pmatrix}A_{\pm 1}f+Cf^{\prime}\\ B_{\pm 1}f+Df^{\prime}\\ 0\\ 0\end{pmatrix}e^{\mp i\omega t}, \tag{35}\] \[\psi_{0L}=\begin{pmatrix}i\epsilon_{1}2^{-\frac{1}{2}}gf^{2}+C\chi_{1}^{ \prime}\\ i\epsilon_{2}2^{-\frac{1}{2}}gf^{2}+D\chi_{1}^{\prime}\\ 0\\ 0\end{pmatrix}, \tag{36}\] \[\lambda=i\Phi^{\prime}\begin{pmatrix}\epsilon_{1}c+\epsilon_{2}e^{-i\varphi} s\\ -\epsilon_{2}c+\epsilon_{1}e^{i\varphi}s\\ -\epsilon_{2}^{*}c+\epsilon_{1}^{*}e^{-i\varphi}s\\ -\epsilon_{1}^{*}c-\epsilon_{2}^{*}e^{i\varphi}s\end{pmatrix}, \tag{37}\] and \(N\) is a normalisation factor. For brevity, in Eqs. (35)-(37), we use the notation \[A_{\pm 1} =\pm ie_{2}^{*}\Omega+i2^{-\frac{1}{2}}\epsilon_{1}g\chi_{1}, \tag{38a}\] \[B_{\pm 1} =\mp i\epsilon_{1}^{*}\Omega+i2^{-\frac{1}{2}}\epsilon_{2}g\chi_{1},\] (38b) \[C =-\epsilon_{2}^{*}c+\epsilon_{1}^{*}e^{-i\varphi}s,\] (38c) \[D =-\epsilon_{1}^{*}c-\epsilon_{2}^{*}e^{i\varphi}s, \tag{38d}\] where \(c=\cos(\theta)\), \(s=\sin(\theta)\), \(\epsilon_{1}=\epsilon_{11}+i\epsilon_{12}\), and \(\epsilon_{2}=\epsilon_{21}+i\epsilon_{22}\). Eqs. (35)-(37) depend linearly on the four anticommuting parameters \(\epsilon_{ij}\), and hence Eq. (34) can be written as \(\Psi=\sum_{ij}\epsilon_{ij}\Psi_{ij}\). It follows that there are four (according to the number of the \(\mathcal{N}=1\) supersymmetry generators) independent fermionic zero modes \(\Psi_{ij}\) expressed in terms of ansatz functions (11). It can be shown that the components of the fermionic zero modes \(\Psi_{ij}\) satisfy field equations (7) and (8), provided that the ansatz functions \(\Omega\), \(f\), and \(\chi_{1}\) satisfy Eqs. (15)-(17). The fermionic zero modes satisfy the orthonormality condition \[\int\Psi_{ij}^{\dagger}\Psi_{i^{\prime}j^{\prime}}d^{3}x=\delta_{ii^{\prime}} \delta_{jj^{\prime}}, \tag{39}\] provided that the normalisation factor \[N =\left[2\pi\int_{0}^{\infty}\left[4\left(\Phi^{\prime 2}+f^{ \prime 2}\right)+2\chi_{1}^{\prime 2}\right.\right.\] \[\left.\left.+g^{2}f^{4}+2f^{2}\left(2\Omega^{2}+g^{2}\chi_{1}^{2 }\right)\right]r^{2}dr\right]^{-\frac{1}{2}}. \tag{40}\] From Eq. (37), it follows that the gaugino component \(\lambda\) of the fermionic zero mode \(\Psi_{ij}\) is proportional to the electric field strength \(E_{r}=-\Phi^{\prime}\) of the soliton, and therefore decreases rather slowly (\(\propto r^{-2}\)) at large distances. Furthermore, Eqs. (25) and (36) tell us that at large distances, the component \(\psi_{0L}\propto\chi_{1}^{\prime}\sim Q_{\rm s}/(4\pi r^{2})\). 
We see that similarly to the \(\lambda\) component, the \(\psi_{0L}\) component of \(\Psi_{ij}\) decreases slowly (\(\propto r^{-2}\)) at large distances. In contrast, Eqs. (26) and (35) tell us that the two remaining components \(\psi_{\pm 1L}\) of \(\Psi_{ij}\), which correspond to the short-range scalar fields \(\phi_{\pm 1}\), decrease exponentially away from the soliton. Written in terms of the left-handed fermion fields (including the massless "neutrino" \(\psi_{0L}\)), the Lagrangian (1) is not invariant under the \(P\) and \(C\) transformations; it is, however, invariant under the combined \(CP\) transformation. Under the latter transformation, the original soliton solution (\(f(r)\exp(\mp i\omega t)\), \(\chi_{1}(r)\), \(\Phi(r)\), \(\Omega(r)\)) of the energy \(E\) and electric charge \(Q\) is transformed into an antisoliton solution (\(f(r)\exp(\pm i\omega t)\), \(\chi_{1}(r)\), \(-\Phi(r)\), \(-\Omega(r)\)) of the energy \(E\) and electric charge \(-Q\). It can be shown that under the \(CP\) transformation, the fermionic zero modes \(\Psi_{ij}\) of the soliton turn into those \(\tilde{\Psi}_{ij}\) of the antisoliton: \[\left[\Psi_{11}(x)\right]^{CP} = -\tilde{\Psi}_{22}(x),\] \[\left[\Psi_{12}(x)\right]^{CP} = -\tilde{\Psi}_{21}(x),\] \[\left[\Psi_{21}(x)\right]^{CP} = \tilde{\Psi}_{12}(x),\] \[\left[\Psi_{22}(x)\right]^{CP} = \tilde{\Psi}_{11}(x). \tag{41}\] This is because the \(CP\) transformation is a discrete symmetry of the Lagrangian (1), and hence must convert one fermion-soliton solution into another. ## 5 Numerical results The system of differential equations (15) - (17) with boundary conditions (19) represents a mixed boundary value problem on the semi-infinite interval \(r\in[0,\infty)\). To solve this system, we use the numerical methods provided in the Maple package [34]. Formally, the boundary value problem (15) - (19) depends on five parameters: \(\omega\), \(m\), \(g\), \(e\), and \(\chi_{1\,{\rm vac}}\). However, it is easily shown that the energy and Noether charge of the soliton depends nontrivially on only three dimensionless parameters: \[E\left(\omega,m,g,e,\chi_{1\,{\rm vac}}\right)=mg^{-2}\tilde{E} \left(\tilde{\omega},\tilde{e},\tilde{\chi}_{1\,{\rm vac}}\right), \tag{42}\] \[Q_{N}\left(\omega,m,g,e,\chi_{1\,{\rm vac}}\right)=g^{-2}\tilde {Q}_{N}\left(\tilde{\omega},\tilde{e},\tilde{\chi}_{1\,{\rm vac}}\right), \tag{43}\] where \(\tilde{\omega}=\omega/m\), \(\tilde{e}=e/g\), and \(\tilde{\chi}_{1\,{\rm vac}}=\chi_{1\,{\rm vac}}/m\). Hence, without loss of generality, we can set the parameters \(m\) and \(g\) equal to unity. In addition, we set the dimensionless parameter \(\tilde{\chi}_{1\,{\rm vac}}=2\sqrt{2}\) in these numerical calculations. Figure 1 shows the dependence of the soliton energy \(\tilde{E}\) on the phase frequency \(\tilde{\omega}\) for several values of the gauge coupling constant \(\tilde{e}\). We see that for each \(\tilde{e}\), the phase frequency \(\tilde{\omega}\in(\tilde{\omega}_{\rm min}(\tilde{e}),\tilde{\omega}_{\rm max}]\), where \(\tilde{\omega}_{\rm max}=2^{-1/2}\tilde{\chi}_{1\,{\rm vac}}=2\). As \(\tilde{e}\) decreases, the minimum allowable frequency \(\tilde{\omega}_{\rm min}(\tilde{e})\) falls monotonically, reaching the limiting value \(\tilde{\omega}_{\rm min}(0)=0\). 
Using numerical methods, we can show that as \(\tilde{\omega}\rightarrow\tilde{\omega}_{\rm min}(\tilde{e})\), the soliton energy \[\tilde{E}\left(\tilde{\omega},\tilde{e}\right)\sim a(\tilde{e})(\tilde{\omega}-\tilde{\omega}_{\rm min}(\tilde{e}))^{-2}, \tag{44}\] where \(a(\tilde{e})\) is a function of \(\tilde{e}\). It follows that the soliton energy increases indefinitely as \(\tilde{\omega}\rightarrow\tilde{\omega}_{\rm min}(\tilde{e})\). On the other hand, \(\tilde{\omega}_{\rm min}(\tilde{e})\) monotonically increases with \(\tilde{e}\), meaning that there is a limiting value \(\tilde{e}_{\rm max}\) for which \(\tilde{\omega}_{\rm min}(\tilde{e}_{\rm max})=\tilde{\omega}_{\rm max}\). It follows that the nontopological soliton can exist only when \(\tilde{e}\in[0,\tilde{e}_{\rm max})\). In the subplot in Fig. 1, we can see the curves \(\tilde{E}(\tilde{\omega},\tilde{e})\) in the vicinity of the maximum allowable phase frequency \(\tilde{\omega}_{\rm max}\). All the curves \(\tilde{E}(\tilde{\omega},\tilde{e})\) in the subplot tend to zero as \(\tilde{\omega}\rightarrow\tilde{\omega}_{\rm max}\). It has been found numerically that as \(\tilde{\omega}\rightarrow\tilde{\omega}_{\rm max}\), the soliton energy \[\tilde{E}\left(\tilde{\omega},\tilde{e}\right)\approx b(\tilde{e})\left(\tilde{\omega}_{\rm max}-\tilde{\omega}\right)^{1/2}, \tag{45}\] where \(b(\tilde{e})\) is an increasing function of \(\tilde{e}\). Figure 1: Dependence of the soliton energy \(\tilde{E}\) on the phase frequency \(\tilde{\omega}\) for several values of the gauge coupling constant \(\tilde{e}\).
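For readers who prefer open-source tools, the mixed boundary value problem (15)-(19) can also be attacked with a standard collocation solver. The sketch below uses SciPy's `solve_bvp` rather than the Maple routines employed in the paper; the reduction to the three profile functions \(f\), \(\chi_{1}\), and \(\Omega\) (using \(f_{+1}=f_{-1}\equiv f\) and \(\chi_{2}=0\), as established above), the chosen parameter values, and the finite cutoff radius are assumptions made for illustration, and convergence to the soliton branch generally requires a good initial guess or parameter continuation.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Dimensionless units of Sec. 5 (m = g = 1); e and omega are assumed values.
g, e, omega = 1.0, 0.1, 1.0
chi_vac = 2.0 * np.sqrt(2.0)

# Unknowns y = (f, f', chi1, chi1', Omega, Omega'). The 2/r term is singular
# at r = 0, so the grid starts at a small r_min instead of the origin.
def rhs(r, y):
    f, fp, c, cp, W, Wp = y
    d2f = -2.0 / r * fp - (W**2 * f - 0.5 * g**2 * f * (c**2 + f**2))
    d2c = -2.0 / r * cp + g**2 * f**2 * c
    d2W = -2.0 / r * Wp + 2.0 * e**2 * f**2 * W
    return np.vstack([fp, d2f, cp, d2c, Wp, d2W])

# Boundary conditions (19): approximate regularity at r_min and the required
# limits imposed at a finite cutoff r_max.
def bc(ya, yb):
    return np.array([ya[1], ya[3], ya[5],       # f' = chi1' = Omega' = 0 at r_min
                     yb[0],                     # f -> 0
                     yb[2] - chi_vac,           # chi1 -> chi1_vac
                     yb[4] - omega])            # Omega -> omega

r = np.linspace(1e-3, 40.0, 2000)
y0 = np.zeros((6, r.size))
y0[0] = np.exp(-r)                              # crude initial guess for f
y0[2] = chi_vac * np.tanh(r)                    # crude initial guess for chi1
y0[4] = omega
sol = solve_bvp(rhs, bc, r, y0, tol=1e-6, max_nodes=200000)
print(sol.status, sol.message)
```

The soliton energy and Noether charge can then be obtained by quadrature of the profile functions returned by the solver.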
To do this, we define the dimensionless profile functions \(\tilde{f}(\tilde{r})=m^{-1}gf(r)\), \(\tilde{\chi}_{1}(\tilde{r})=m^{-1}g\chi_{1}(r)\), and \(\tilde{\Phi}(\tilde{r})=m^{-1}g\Phi(r)\), where \(\tilde{r}=m^{-1}r\). We also define the dimensionless energy density \(\tilde{\mathcal{E}}(\tilde{r})=m^{-4}g^{2}\mathcal{E}(r)\) and the dimensionless Noether charge density \(\tilde{j}_{N}^{0}(\tilde{r})=m^{-3}g^{2}j_{N}^{0}(r)\). Figure 3 shows these dimensionless functions for parameter values \(\tilde{e}=0.1\) and \(\tilde{\omega}=0.32214\). Note that \(\tilde{\omega}=0.32214\) is the minimum value of the phase frequency, which we were able to achieve by numerical methods for \(\tilde{e}=0.1\). We see that only \(\tilde{f}(\tilde{r})\) and \(\tilde{j}_{N}^{0}(\tilde{r})\) are localised, whereas \(\tilde{\Phi}(\tilde{r})\), \(\tilde{\chi}_{1}(\tilde{r})\), and \(\tilde{\mathcal{E}}(\tilde{r})\) are long-range, which is consistent with the asymptotic forms in Eqs. (24), (25), and (26). We also see that \(\tilde{\chi}_{1}(\tilde{r})\approx 0\) in the interior of the soliton. The long-range character (\(\propto r^{-4}\)) of the energy density \(\tilde{\mathcal{E}}\) arises from the gradient of the long-range electric potential \(\tilde{\Phi}\) and the gradient of the long-range neutral scalar field \(\tilde{\chi}_{1}\). According to Eq. (14), the local character of the charge density \(\tilde{j}_{N}^{0}\) is due to the local character of the function \(\tilde{f}\). Note that the electrostatic repulsion causes the electric charge density to increase near the surface of the soliton. Eq. (25) tells us that the asymptotics of \(\tilde{\chi}_{1}\) is characterised by the scalar charge \(\tilde{Q}_{\rm s}=gQ_{\rm s}\). Using numerical methods, we find that similarly to the energy \(\tilde{E}\) and the Noether charge \(\tilde{Q}_{N}\), the scalar charge \[\tilde{Q}_{\rm s}\left(\tilde{\omega},\tilde{e}\right)\propto\left(\tilde{ \omega}-\tilde{\omega}_{\min}\left(\tilde{e}\right)\right)^{-2} \tag{46}\] as \(\tilde{\omega}\rightarrow\tilde{\omega}_{\min}\). However, unlike the Noether (electric) charge \(Q_{N}\) (\(Q=eQ_{N}\)), the scalar charge \(Q_{\rm s}\) is simply a definition and is not related to any symmetry of model (1). ## 6 Conclusion In the present paper, we show that an electrically charged nontopological soliton exists in a version of \({\cal N}=1\) supersymmetric scalar electrodynamics. A characteristic feature of this soliton is the presence of two long-range fields, which slowly (\(\propto r^{-1}\)) tend to limiting values: these are the electrostatic Coulomb field, and the electrically neutral massless scalar field. The presence of these two long-range fields leads to a modification of the intersoliton interaction in comparison with the purely Coulomb case. Another feature of the soliton is that its energy and electric charge take arbitrarily large values when the modulus of the phase frequency tends to the minimum possible value. In contrast, the energy and electric charge of the soliton vanish when the modulus of the phase frequency tends to the maximum possible value. We note that in the general case, the energy and electric charge of a nontopological soliton cannot be arbitrarily large due to Coulomb repulsion [10; 20]. We avoid this restriction because the attraction due to the massless scalar field compensates for the Coulomb repulsion. A similar situation also arises in the massless limit of the gauged Fridberg-Lee-Sirlin model [21; 23]. 
It is also worth noting that the electric charge and energy of the dyon (electrically charged magnetic monopole) also cannot be arbitrarily large in the general non-BPS case [35]. Only in the BPS limit, when the scalar field of the dyon becomes massless, can the energy and electric charge take arbitrarily large values. The \({\cal N}=1\) supersymmetry of the model makes it possible to obtain expressions for the fermionic zero modes in terms of bosonic fields of the soliton. The fermionic zero modes are bound states of the fermion-soliton system, and their components that correspond to the long-range bosonic fields are also long-range. In accordance with the number of \({\cal N}=1\) supersymmetry generators, the number of independent fermionic zero modes of the soliton is four. The fermionic zero modes of two solitons with opposite electric charges are related by the \(CP\) transformation. In this work, we have investigated a nontopological soliton of an \({\cal N}=1\) supersymmetric Abelian gauge model. It is known [36; 37], however, that nontopological solitons can also exist in non-Abelian gauge models. In particular, it was shown in Ref. [38] that an electrically charged nontopological soliton exists in the Weinberg-Salam model of electroweak interactions. This model allows for \({\cal N}=1\) supersymmetric extension, and its fermionic sector contains both massive (\(e\), \(\mu\), \(\tau\)) and massless (\(\nu_{e}\), \(\nu_{\mu}\), \(\nu_{\tau}\)) fermions. The bosonic superpartners of the neutrinos (sneutrinos) also have zero masses. We can assume that, similarly to the nonsupersymmetric case [38], an electrically charged nontopological soliton also exists in this model, meaning that some properties of this soliton will be similar to those studied in this work. In particular, in addition to the long-range Coulomb field, this soliton will have long-range fields of massless sneutrinos. Furthermore, it will be possible to express the fermionic zero modes of this soliton in terms of its bosonic fields. ## Acknowledgements This work was supported by the Russian Science Foundation, grant No 23-11-00002.
2308.10614
Active crystallization from power functional theory
We address the gas, liquid, and crystal phase behaviour of active Brownian particles in three dimensions. The nonequilibrium force balance at coexistence leads to equality of state functions for which we use power functional approximations. Motility-induced phase separation starts at a critical point and quickly becomes metastable against active freezing for P\'eclet numbers above a nonequilibrium triple point. The mean swim speed acts as a state variable, similar to the density of depletion agents in colloidal demixing. We obtain agreement with recent simulation results and correctly predict the strength of particle number fluctuations in active fluids.
Sophie Hermann, Matthias Schmidt
2023-08-21T10:23:36Z
http://arxiv.org/abs/2308.10614v2
# Active crystallization from power functional theory ###### Abstract We address the gas, liquid, and crystal phase behaviour of active Brownian particles in three dimensions. The nonequilibrium force balance at coexistence leads to equality of state functions for which we use power functional approximations. Motility-induced phase separation starts at a critical point and quickly becomes metastable against active freezing for Peclet numbers above a nonequilibrium triple point. The mean swim speed acts as a state variable, similar to the density of depletion agents in colloidal demixing. We obtain quantitative agreement with recent simulation results and correctly predict the strength of particle number fluctuations in active fluids. + Footnote †: preprint: APS/123-QED The occurrence of freezing in a many-body system is often due to the presence of strong, short-ranged repulsion between the constituent particles [1; 2]. Conditions of high enough density are required for crystallization as a global ordering phenomenon to occur and these can be induced by external constraints, such as confinement by walls, or via interparticle attraction [3; 4]. In colloidal systems, attraction between the particles can be generated by adding depletion agents, such as polymers, colloidal rods, or smaller-sized colloidal spheres. The depletants create an effective attraction between the primary particles and the resulting effective interaction potential is accessible via formally integrating out (averaging over) the depletant degrees of freedom [5] and machine learning [6; 7]. In general the resulting interaction potential has a strong many-body character, although notable exceptions exist, such as the Asakura-Oosawa model [8; 9; 10; 11], where for sufficiently small polymer-to-colloid size ratio a description based on an effective pair potential is exact [10]. In a striking analogy, Turci and Wilding [12] have recently related the phase behaviour of three-dimensional active Brownian particles (ABPs) [12; 13] to such depletion-driven binary mixtures. ABPs form a central model system for active matter and their phase behaviour has received much prior attention [14; 15; 16; 17; 18; 19; 20], including the two-dimensional version of the model [14; 19; 20]. The particles undergo overdamped Brownian motion and they self-propel (swim) along a built-in direction, which diffuses freely. The system displays motility-induced phase separation (MIPS) into dense and dilute coexisting nonequilibrium steady states, despite of the absence of explicit interparticle attraction. The phenomenon was addressed on the basis of a wide variety of theoretical techniques [17; 18; 19; 20; 21], including very recent work by Omar et al. [22] based on forces. However, none of these approaches has yet been applied to active freezing. Despite the significant number of theoretical efforts [17; 18; 19; 20; 21; 22; 23], no consensus has been reached on a common framework which would act as an uncontested platform for the description of active systems, such as the theory of simple liquids for spatially inhomogeneous and phase-separated systems does in equilibrium [1; 24; 25; 26]. It is a rather common point of view that "the link between experiment and theory in active matter is often rather qualitative" [27]. Having a predictive theory is highly valuable though, given that much relevant experimental work is being carried out, e.g. 
based on light-controlled systems [28], as also used in studies of active polarization [29; 30], cluster formation [31], the self-propulsion mechanism of Quincke rollers [27], the experimental study of active sedimentation [32], capillary rise [33], and poly-crystallinity [34]. Equally so, simulations studies of wetting [35], vortex crystal formation [36], inertial effects in nematic turbulence [37], interfacial properties [38] and of dynamical features [39] of active particles could benefit from having a predictive theory. In this Letter we use power functional theory [40], which is a general framework for the description of the dynamics of many-body systems, including ABPs [40; 41; 42; 43; 44; 45]. We base our treatment of freezing on the active force balance, as used in studies of active drag forces [41; 42], motility-induced phase separation [43; 44] and the interfacial tension between phase-separated states [45] in two-dimensional ABPs. The theory satisfies exact sum rules which result from Noether's theorem for correlation functions [46; 47] as well as from the continuity equation for the global polarization [48]. We demonstrate that the framework gives a physically sound and semi-quantitative account of the full phase behaviour of ABPs in three dimensions. In their analogy, Turci and Wilding [12] suggest that the Peclet number, which measures the strength of the self-propulsion in the active system relative to diffusive motion, is akin to the depletants' fugacity (or polymer reservoir density) in an equilibrium mixture. We confirm and extend this point of view, as in our theoretical approach the mean swim speed plays a role akin to the actual polymer density in the system. We work on the level of one-body correlation functions, which depend on position \(\mathbf{r}\) and on particle orientation, as represented by a unit vector \(\boldsymbol{\omega}\). The continuity equation relates the divergence of the translational current \(\mathbf{J}(\mathbf{r},\boldsymbol{\omega},t)\) and of the rotational current \(\mathbf{J}^{\omega}(\mathbf{r},\boldsymbol{\omega},t)\) to temporal changes of the one-body density distribution according to: \[\frac{\partial\rho(\mathbf{r},\mathbf{\omega},t)}{\partial t}=-\nabla\cdot\mathbf{J}( \mathbf{r},\mathbf{\omega},t)-\nabla^{\omega}\cdot\mathbf{J}^{\omega}(\mathbf{r}, \mathbf{\omega},t). \tag{1}\] Here \(\nabla\) and \(\nabla^{\omega}\) indicate the derivatives with respect to \(\mathbf{r}\) and \(\mathbf{\omega}\), respectively, and the density profile \(\rho(\mathbf{r},\mathbf{\omega},t)\) is position- and orientation-resolved. We consider steady states such that the left hand side of Eq. (1) vanishes and we drop the time argument \(t\) from here on. As no explicit torques act in the system, the orientational current stems solely from the free rotational diffusion of the active spheres: \(\mathbf{J}^{\omega}(\mathbf{r},\mathbf{\omega})=-D_{\mathrm{rot}}\nabla^{\omega} \rho(\mathbf{r},\mathbf{\omega})\), where \(D_{\mathrm{rot}}\) indicates the rotational diffusion constant. For the present case of overdamped active motion, the exact force balance is given by: \[\gamma\mathbf{v}(\mathbf{r},\mathbf{\omega})=\mathbf{f}_{\mathrm{id}}(\mathbf{r}, \mathbf{\omega})+\mathbf{f}_{\mathrm{int}}(\mathbf{r},\mathbf{\omega})+\gamma s\mathbf{ \omega}. 
\tag{2}\] The left hand side represents the negative friction force with friction constant \(\gamma\) and the velocity field is the ratio of current and density, \(\mathbf{v}(\mathbf{r},\mathbf{\omega})=\mathbf{J}(\mathbf{r},\mathbf{\omega})/\rho( \mathbf{r},\mathbf{\omega})\). The three driving contributions on the right hand side of Eq. (2) are the ideal diffusive force field \(\mathbf{f}_{\mathrm{id}}(\mathbf{r},\mathbf{\omega})=-k_{B}T\nabla\ln\rho(\mathbf{ r},\mathbf{\omega})\), the internal force field \(\mathbf{f}_{\mathrm{int}}(\mathbf{r},\mathbf{\omega})\), which arises from the Weeks-Chandler-Anderson (WCA) interparticle interactions, and the swim force \(\gamma s\mathbf{\omega}\) with \(s\) indicating the speed of free swimming. The one-body interparticle interaction force field \(\mathbf{f}_{\mathrm{int}}(\mathbf{r},\mathbf{\omega})\) is accessible via sampling in simulations [41, 42, 43, 44] and via machine-learning, as recently demonstrated in passive flow [49] and in equilibrium [50]. We split the interparticle forces according to [51, 45]: \[\mathbf{f}_{\mathrm{int}}(\mathbf{r},\mathbf{\omega})=\mathbf{f}_{\mathrm{ad}}( \mathbf{r})+\mathbf{f}_{\mathrm{flow}}(\mathbf{r},\mathbf{\omega})+\mathbf{f}_{ \mathrm{struc}}(\mathbf{r},\mathbf{\omega}), \tag{3}\] where the right hand side consists of the adiabatic force field \(\mathbf{f}_{\mathrm{ad}}(\mathbf{r})\), the superadiabatic flow force field \(\mathbf{f}_{\mathrm{flow}}(\mathbf{r},\mathbf{\omega})\) and the superadiabatic structural force field \(\mathbf{f}_{\mathrm{struc}}(\mathbf{r},\mathbf{\omega})\). Here the adiabatic force field \(\mathbf{f}_{\mathrm{ad}}(\mathbf{r})\) is defined as acting in an equilibrium system of passive WCA particles that do not swim. The WCA particles are spheres and hence there is no nontrivial dependence on \(\mathbf{\omega}\) in the adiabatic system. Its density distribution \(\bar{\rho}(\mathbf{r})\) is identical to the orientation-integrated density distribution in the active system. In the adiabatic system \(\bar{\rho}(\mathbf{r})\) is stabilized by an external potential. If one wishes to think in terms of functional dependencies, then \(\mathbf{f}_{\mathrm{ad}}(\mathbf{r})\) is an instantaneous density functional, in the sense of functional dependencies as they form the core of classical density functional theory of inhomogeneous liquids and solids [24, 25, 26, 1, 40]. Both the flow and the structural force contributions in Eq. (2) are of superadiabatic nature, i.e. they are genuine nonequilibrium force fields which arise from the interparticle interactions [40]. In equilibrium, as well as in passive uniaxial flow, the three force contributions were shown to be amenable to supervised machine learning [49, 50], which we take as confirmation of the general force splitting concept (3), as is here applied to the active system. Figure 1: Phase diagram for three-dimensional ABPs. (a) Theoretical result as a function of the scaled bulk density \(\rho_{b}\sigma^{3}\) and the scaled average swim speed \(v_{b}\gamma\sigma/\epsilon\). Shown are stable (solid lines) and metastable (dashed lines) binodals; slanted tielines connect coexisting nonequilibrium states. The orange dotted line indicates the line of maximal compressibility \(\chi_{\mathrm{max}}\). Note the similarity to a phase diagram of a colloid-polymer mixture (inset, adapted from Ref. [11]) as a function of the colloid (polymer) packing fraction \(\eta^{C}(\eta^{P})\). 
(b) Same as (a) but shown as a function of \(\rho_{b}\sigma^{3}\) and the Péclet number Pe. The tie lines are horizontal in this representation. (c) Same as (b) but obtained from computer simulations in Ref. [12]. Shown are active gas-active fluid (circles), active gas-crystal (triangles) coexistence densities as well as \(\chi_{\mathrm{max}}\) (diamonds). The inset is a schematic phase diagram for a polymer-colloid mixture with size ratio \(q=0.6\), taken from Ref. [10]. The flow force \(\mathbf{f}_{\mathrm{flow}}(\mathbf{r},\mathbf{\omega})\), as it is part of Eq. (3), is defined to compensate the friction and the active force in the force balance relationship (2) such that equality is achieved: \[\gamma\mathbf{v}(\mathbf{r},\mathbf{\omega})=\mathbf{f}_{\mathrm{flow}}(\mathbf{r}, \mathbf{\omega})+\gamma s\mathbf{\omega}. \tag{4}\] The flow equation (4) is invariant under motion reversal [40; 49; 51] and it affects the spatial structure formation as represented by the density profile, only indirectly, as we detail below. As an approximation we resort to the superadiabatic drag force of Ref. [41] with the simple form \(\mathbf{f}_{\mathrm{flow}}(\mathbf{\omega})=-\gamma v_{b}\mathbf{\omega}\rho_{b}/( \rho_{j}-\rho_{b})\), where \(v_{b}=\mathbf{v}\cdot\mathbf{\omega}\) is the mean forward swim speed. This assumption yields the common linear relationship of the mean swim velocity and the average density, \(v_{b}/s=1-\rho_{b}/\rho_{j}\). The parameter \(\rho_{j}=\mathrm{const}\) determines the slope of the decay of \(v_{b}\) with bulk density and we adjust its value to \(\rho_{j}=1.436\sigma^{-3}\), where \(\sigma\) is the lengthscale of the WCA pair potential. The superadiabatic structural force field \(\mathbf{f}_{\mathrm{sup}}(\mathbf{r},\mathbf{\omega})\) balances the remaining adiabatic and ideal terms in Eq. (2), which implies the following force cancellation: \[0=\mathbf{f}_{\mathrm{id}}(\mathbf{r})+\mathbf{f}_{\mathrm{ad}}(\mathbf{r})+ \mathbf{f}_{\mathrm{struc}}(\mathbf{r}). \tag{5}\] As a consistency check, the sum of Eqs. (4) and (5) recovers the full force balance relationship (2). The ideal term is generally numerically small, and we hence approximate the exact ideal force \(-k_{B}T\nabla\ln\rho(\mathbf{r},\mathbf{\omega})\approx-k_{B}T\nabla\ln\tilde{\rho }(\mathbf{r})\equiv\mathbf{f}_{\mathrm{id}}(\mathbf{r})\), where as before \(\tilde{\rho}(\mathbf{r})\) is the position-dependent and orientation-averaged one-body density profile. Equation (5) balances the repulsion that acts in the adiabatic system with the nonequilibrium force contributions. We recall that the adiabatic system consists of steeply repulsive spheres without orientations. Hence the structural nonequilibrium forces necessarily need also be independent of orientation, \(\mathbf{f}_{\mathrm{struc}}(\mathbf{r})\), in order to satisfy Eq. (5). As all force fields in Eq. (5) are of gradient nature [the non-gradient forces are contained in Eq. (4)], we can integrate in position and obtain the following chemical potential balance: \[\mu_{\mathrm{id}}(\mathbf{r})+\mu_{\mathrm{ad}}(\mathbf{r})+\mu_{\mathrm{struc }}(\mathbf{r})=\mu. \tag{6}\] Here \(\mu=\mathrm{const}\) arises from the spatial integration. All terms on the left hand side of Eq. (6) are solely defined by generating via spatial differentiation the (negative) force contributions that occur in the structural force balance (5). 
Explicitly, we have \(\mathbf{f}_{\mathrm{id}}(\mathbf{r})=-\nabla\mu_{\mathrm{id}}(\mathbf{r})\), with the standard ideal gas chemical potential expression: \(\mu_{\mathrm{id}}(\mathbf{r})=k_{B}T\ln\bar{\rho}(\mathbf{r})\); the adiabatic force field: \(\mathbf{f}_{\mathrm{ad}}(\mathbf{r})=-\nabla\mu_{\mathrm{ad}}(\mathbf{r})\); and the superadiabatic structural force field: \(\mathbf{f}_{\mathrm{struc}}(\mathbf{r})=-\nabla\mu_{\mathrm{struc}}(\mathbf{r})\). Apart from the neglected orientation dependence of the ideal gas contribution and the assumed simple form of \(\mathbf{f}_{\mathrm{flow}}(\mathbf{\omega})\), the framework developed thus far is exact, and we have to resort to approximations to make further progress. We first turn to the adiabatic contribution. The adiabatic state is simply the equilibrium WCA model, which per se has no gas-liquid coexistence due to its lack of interparticle attraction. Treating fluid states of repulsive spheres is straightforward. We approximate the system by hard spheres and use a modified Carnahan-Starling equation of state [53], which correctly accounts for the behaviour at very high densities, as is relevant for ABPs in the parameter regime considered here. The corresponding bulk excess free energy \(A_{\mathrm{ad}}\) is given by \[\frac{A_{\mathrm{ad}}}{Nk_{B}T}=\frac{3\eta}{1-\eta}+\eta\left\{(1-\eta)\left[1-\eta\left(1+\frac{1-\eta_{j}}{\eta_{j}}\mathrm{e}^{b(\eta-\eta_{j})}\right)\right]\right\}^{-1}, \tag{7}\] where \(\eta=\pi\sigma^{3}\rho_{b}/6\) is the packing fraction, \(\eta_{j}=0.655\) is the densest possible packing fraction in this approximation, and setting \(b=50\) is an empirical choice [53]. An analytical expression for the bulk chemical potential in the adiabatic system then follows from the standard identity \(\mu_{\mathrm{ad}}^{b}(\rho_{b})=[A_{\mathrm{ad}}+\eta\partial A_{\mathrm{ad}}/\partial\eta]/N\). We use a local density approximation where we evaluate the bulk expression at the value of the local density profile, i.e. \(\mu_{\mathrm{ad}}(\mathbf{r})=\mu_{\mathrm{ad}}^{b}(\bar{\rho}(\mathbf{r}))\). In order to approximate the equation of state of the adiabatic crystal we resort to the cell theory [54; 55; 56; 57]. This yields the chemical potential of the fcc crystal as \(\beta\mu_{\mathrm{cell}}=\ln(\sqrt{2})+3\ln(\Lambda/\sigma)-3\ln(\xi-1)+\xi/(\xi-1)\), where \(\xi=[\pi\sqrt{2}/(6\eta)]^{1/3}\) and \(\Lambda\) is the thermal de Broglie wavelength, which we set to \(\Lambda=\sigma\). We only take account of the mean crystal density, and set \(\mu_{\mathrm{ad}}+\mu_{\mathrm{id}}=\mu_{\mathrm{cell}}\) for the treatment of the crystalline phase. The remaining task is to approximate the superadiabatic structural chemical potential contribution, \(\mu_{\mathrm{struc}}(\mathbf{r})\). Here we resort to the "quiet life" approximation, which was successfully used to describe active gas-liquid phase separation in two dimensions, along with the force balance across the free interface between the active bulk states [43; 44]. This approximation takes into account, in arguably the simplest correct way, the dependence on both the local density and the local velocity. As the force is structural, it is necessarily even in powers of the velocity. 
A simple choice which is linear in density and quadratic in velocity [43; 44] reads as: \[\mu_{\mathrm{struc}}(\mathbf{r})=\frac{e_{1}\gamma}{6D_{\mathrm{rot}}}v^{2} \frac{\bar{\rho}(\mathbf{r})}{\rho_{j}}, \tag{8}\] where \(e_{1}=0.285\) is a constant that determines the overall strength. Crucially, we use the same approximation (8) for \(\mu_{\mathrm{struc}}\) in all three phases. Nonequilibrium phase coexistence is obtained via the mechanical balance of the total force, which in our framework implies equality of the values of the chemical potential, see Eq. (6), in the coexisting phases, as well as the equality of the pressure. The pressure is obtained from integrating the standard relation \(\rho_{b}\partial\mu(\rho_{b})/\partial\rho_{b}=\partial P(\rho_{b})/\partial\rho_ {b}\). The resulting phase diagram is shown in Fig. 1 as a function of the bulk density \(\rho_{b}\) and either the mean swim speed \(v_{b}\) (a) or the free swim speed \(s\) (b), as expressed in scaled form by the Peclet number \(\mathrm{Pe}=s\sigma\gamma/(k_{B}T)\). The topology of the phase diagram matches that obtained in simulation work [12, 13] and the quantitative agreement with the simulation results by Turci and Wilding [12] is very satisfactory; their results for the phase diagram are displayed in Fig 1(c). Our theory reproduces the marginal stability [13] of active gas-liquid coexistence with respect to freezing into a dense fcc crystal. The coexisting gas has relatively high density, in stark contrast to the strong dilution of the coexisting gas that occurs quickly in equilibrium phase separation when moving away from the triple point. On the basis of the similarity of their simulation results for the active system to depletion-induced phase behaviour in equilibrium (comparing the main plot and inset of Fig. 1(c)), Turci and Wilding [12] draw conclusions about the presence and relevance of effective many-body interactions that govern the active system. While it is well-established that in active systems the Peclet number plays a role similar to that of temperature in equilibrium, the proposal by Turci and Wilding leaves open whether one should think of the activity as only generating many-body effects that are akin to those of depletants or whether the active system contains actual degrees of freedom that have not been properly appreciated. Based on the success of our theory, we argue that the latter is the case and that besides the density, the velocity field is an intrinsic degree of freedom that the nonequilibrium system can regulate freely and self-consistently. To demonstrate the validity of this concept, we use the actual mean velocity \(v_{b}\) instead of \(\mathrm{Pe}\) as a state variable in Fig. 1(a). The swim speed \(v_{b}\) is high in the coexisting gas, low in the coexisting liquid, and even lower in the coexisting crystal. The latter property is consistent with Caprini _et al._ reporting very low swim speeds in solid clusters of the two-dimensional ABP system, see the Supplemental Material of Ref. [52]. This behaviour is analogous to what is found in depletion-driven phase separation, when going from the reservoir density of the depletant to the actual depletant density in the system [10, 11]. The observed similarity in the form of the phase diagram is striking, compare the main plot and the inset of Fig. 1(a). We next investigate whether our proposed theory is predictive beyond the phase diagram. 
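To make the structure of this calculation concrete, the following minimal sketch assembles the fluid-branch chemical potential from the pieces given above: the ideal term, the adiabatic term from the modified Carnahan-Starling form of Eq. (7) (with cell theory available for the crystal branch), and the superadiabatic "quiet life" term of Eq. (8) with \(v_b/s=1-\rho_b/\rho_j\). Reduced units, the value of \(D_{\mathrm{rot}}\), the grid resolution, and the illustrative Péclet number are assumptions not specified in this excerpt; the sketch shows how coexistence would be located by equating chemical potential and pressure, and is not a reproduction of Fig. 1.

```python
import numpy as np

# Reduced units: k_B*T = gamma = sigma = 1. D_rot is not quoted in this excerpt;
# the value below assumes the standard ABP relation D_rot = 3*k_B*T/(gamma*sigma^2).
kT, gamma, sigma, D_rot = 1.0, 1.0, 1.0, 3.0
rho_j, eta_j, b, e1 = 1.436, 0.655, 50.0, 0.285

def beta_A_ad(eta):
    """Excess free energy per particle of the adiabatic fluid, Eq. (7)."""
    tail = 1.0 - eta * (1.0 + (1.0 - eta_j) / eta_j * np.exp(b * (eta - eta_j)))
    return 3.0 * eta / (1.0 - eta) + eta / ((1.0 - eta) * tail)

def mu_ad(rho, h=1e-6):
    """Adiabatic bulk chemical potential via mu = a + eta * da/deta (numerical derivative)."""
    eta = np.pi * sigma**3 * rho / 6.0
    da = (beta_A_ad(eta + h) - beta_A_ad(eta - h)) / (2.0 * h)
    return kT * (beta_A_ad(eta) + eta * da)

def mu_cell(rho, Lambda=1.0):
    """Cell-theory chemical potential of the fcc crystal (includes the ideal part)."""
    eta = np.pi * sigma**3 * rho / 6.0
    xi = (np.pi * np.sqrt(2.0) / (6.0 * eta)) ** (1.0 / 3.0)
    return kT * (np.log(np.sqrt(2.0)) + 3.0 * np.log(Lambda / sigma)
                 - 3.0 * np.log(xi - 1.0) + xi / (xi - 1.0))

def mu_fluid(rho, Pe):
    """Total fluid chemical potential mu_id + mu_ad + mu_struc, Eqs. (6)-(8)."""
    s = Pe * kT / (gamma * sigma)        # free swim speed from Pe = s*sigma*gamma/(k_B*T)
    v = s * (1.0 - rho / rho_j)          # mean swim speed, v_b/s = 1 - rho_b/rho_j
    mu_struc = e1 * gamma / (6.0 * D_rot) * v**2 * rho / rho_j   # Eq. (8)
    return kT * np.log(rho) + mu_ad(rho) + mu_struc

def pressure_fluid(rho, Pe, n=2000):
    """P(rho) from integrating rho' * dmu/drho' on a density grid."""
    grid = np.linspace(1e-6, rho, n)
    mu = mu_fluid(grid, Pe)
    return np.trapz(grid * np.gradient(mu, grid), grid)

# Coexisting densities follow from equating mu_fluid (or mu_cell) and the pressure
# between the candidate phases, e.g. with scipy.optimize.fsolve; whether a MIPS
# loop appears at a given Pe depends on the assumed parameter values above.
rho = np.linspace(0.05, 1.2, 5)
print(mu_fluid(rho, Pe=40.0))
print([pressure_fluid(r, Pe=40.0) for r in rho])
```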
In their simulation work [12], Turci and Wilding have investigated the statistics of particle number fluctuations, as they occur in small virtual subboxes of the global system. The strength of fluctuations \(\chi(\rho_{b})\) is taken to be a proxy for the compressibility, which in equilibrium can be obtained from the thermodynamical derivative \(\partial\rho_{b}(\mu)/\partial\mu\), carried out in the grand ensemble where global particle number fluctuations occur. These fluctuations are absent in the present system, as the particle number is conserved in time [we recall the validity of even the locally resolved continuity equation (1)]. Within our nonequilibrium framework the partial derivative \(\chi(\rho_{b})=\partial\rho_{b}(\mu)/\partial\mu\) is well-defined. Here \(\mu\) is the total chemical potential and we recall its splitting (6) into adiabatic and superadiabatic contributions. We invert via \(\chi(\rho_{b})=[\partial\mu(\rho_{b})/\partial\rho_{b}]^{-1}\) with the derivative taken at \(T,\mathrm{Pe}=\mathrm{const}\). We normalize with respect to the low density behaviour, \(\chi(\rho_{b})/\chi(0)\), as has also been done in the simulations [12]. In order to create further common ground we scale the density axis by the respective value of the critical density \(\rho_{c}\). From the setup of the theory, we expect \(\chi(\rho_{b})/\chi(0)\) to be a measure of particle fluctuations and we show numerical results in Fig. 2(a) as a function of \(\rho_{b}/\rho_{c}\) for a range of different values of \(\mathrm{Pe}/\mathrm{Pe}_{c}<1\), where \(\mathrm{Pe}_{c}\) indicates the critical value of the Peclet number. Figure 2: Compressibility \(\chi(\rho_{b})\), scaled by its value \(\chi(0)\) in the active ideal gas, as a function of the scaled bulk density \(\rho_{b}/\rho_{c}\), where \(\rho_{c}\) is the density at the MIPS critical point. The theoretical results in panel (a) are obtained from \(\chi(\rho_{b})=\partial\rho(\mu)/\partial\mu|_{T,\mathrm{Pe}}\), where \(\mu\) is the (total) nonequilibrium chemical potential. Results are shown for a sequence of Péclet numbers (as indicated), scaled by the value at the critical point. Panel (b) shows the corresponding simulation results of Ref. [12]; for the purpose of this comparison, we take \(\mathrm{Pe}_{c}=36\), \(\rho_{c}=0.94\) in simulation [12] and \(\mathrm{Pe}_{c}=37.6\), \(\rho_{c}=0.71\) for the theory. The lines in (b) connect the data points to guide the eye. We find that the theory produces the same bell-shaped variation upon increasing density at fixed Pe, as is apparent in the simulation results reproduced in Fig. 2(b). The maximum becomes much more pronounced upon increasing Pe/Pe\({}_{c}\) and the theoretical prediction consistently diverges at the nonequilibrium critical point. The position of the maximum of \(\chi(\rho_{b})/\chi(0)\) traces a line in the phase diagram. The result is shown in Fig. 1, which again agrees very well with the simulation data (compare the orange line in Fig. 1(b) with the orange symbols in (c)). In conclusion we have investigated the nonequilibrium phase behaviour of ABPs in three dimensions, based on power functional concepts. The central assumption is that the formally exact nonequilibrium force balance relationship contains a nonequilibrium structural force contribution, as obtained by the negative spatial gradient of a corresponding superadiabatic chemical potential, Eq. (8). 
We have shown that the theory predicts the phase diagram correctly and that nonequilibrium particle number fluctuations are described in agreement with the observations in simulations. We envisage that going beyond the simple cell theory for the description of the crystal is possible with classical density functional theory [58] based on fundamental measure theory [26, 2] as used to study the direct correlation function in crystals [59]. Given the recent progress in measurement of intercolloidal forces in gel states [60], it seems not inconceivable that experiments can shed further light on active forces. ###### Acknowledgements. We thank Francesco Turci for sending us the simulation data of Ref. [12] and him, Nigel Wilding, and Daniel de las Heras for useful and inspiring discussions.
2304.01143
Use Your Head: Improving Long-Tail Video Recognition
This paper presents an investigation into long-tail video recognition. We demonstrate that, unlike naturally-collected video datasets and existing long-tail image benchmarks, current video benchmarks fall short on multiple long-tailed properties. Most critically, they lack few-shot classes in their tails. In response, we propose new video benchmarks that better assess long-tail recognition, by sampling subsets from two datasets: SSv2 and VideoLT. We then propose a method, Long-Tail Mixed Reconstruction, which reduces overfitting to instances from few-shot classes by reconstructing them as weighted combinations of samples from head classes. LMR then employs label mixing to learn robust decision boundaries. It achieves state-of-the-art average class accuracy on EPIC-KITCHENS and the proposed SSv2-LT and VideoLT-LT. Benchmarks and code at: tobyperrett.github.io/lmr
Toby Perrett, Saptarshi Sinha, Tilo Burghardt, Majid Mirmehdi, Dima Damen
2023-04-03T17:09:47Z
http://arxiv.org/abs/2304.01143v1
# Use Your Head: Improving Long-Tail Video Recognition ###### Abstract This paper presents an investigation into long-tail video recognition. We demonstrate that, unlike naturally-collected video datasets and existing long-tail image benchmarks, current video benchmarks fall short on multiple long-tailed properties. Most critically, they lack few-shot classes in their tails. In response, we propose new video benchmarks that better assess long-tail recognition, by sampling subsets from two datasets: SSV2 and VideoLT. We then propose a method, Long-Tail Mixed Reconstruction (LMR), which reduces overfitting to instances from few-shot classes by reconstructing them as weighted combinations of samples from head classes. LMR then employs label mixing to learn robust decision boundaries. It achieves state-of-the-art average class accuracy on EPIC-KITCHENS and the proposed SSV2-LT and VideoLT-LT. Benchmarks and code at: _tobyperrett.github.io/lmr_ ## 1 Introduction Advances in deep learning have been driven by increasing quantities of data to train larger and more sophisticated models. Landmark recognition datasets such as ImageNet [17] and Kinetics [10], amongst others, have fulfilled this need for data by first defining a taxonomy, and then scraping or crowd-sourcing until a sufficient number of examples are obtained for each class. They typically aim for balanced, or nearly balanced, class distributions. However, in practice, collecting enough examples for every object or action, including rare ones, remains challenging. Naturally occurring data is known to come from long-tail distributions, where it is often not possible to obtain a sufficient number of samples from classes in the tail. In order to encourage methods to train effectively on long-tail data, image-recognition benchmarks include multiple naturally-collected1[24] as well as curated long-tail datasets [64, 6, 15, 37, 6]. In contrast, long-tail video recognition has been a less explored field. In Fig. 1, we compare image and video benchmarks, showcasing that none of the curated video datasets to date contain any few-shot classes [1, 21, 71]. This is a critical oversight, as seminal research has highlighted that long-tail methods must "_learn accurate few-shot models for classes in the tail of the class distribution_"[64] and "_deal with imbalanced classification, few-shot learning"[37]. In this paper, we follow the approach from [37] and re-sample videos to introduce long-tail versions of two video datasets. Footnote 1: We use the term ‘naturally’ to focus on the data collection. It does not imply footage of nature. We hope this footnote prevents any confusion. We evaluate current long-tail recognition methods on our re-sampled long-tail video datasets and the naturally-collected EPIC-KITCHENS-100 dataset [16]. Unsurprisingly, when confronted with few-shot classes, current methods perform poorly due to a lack of sample diversity in the few-shot classes. We thus propose a new method that focuses on improving the performance on few-shot classes. Long-Tail Mixed Reconstruction (LMR) reconstructs few-shot samples as weighted combinations of head samples within the batch. A residual connection, weighted by the class size, is used to combine instances with their reconstructions. We use pairwise label mixing on these reconstructed samples to help learn robust class decision boundaries. 
Our key contributions are as follows: * We compare image and video long-tail datasets, by providing a consistent definition of properties for long-tail Figure 1: Long-tail image recognition datasets (top) [9, 37] aimed to curate similar distributions to the naturally-collected iNaturalist class distributions. * We curate new long-tail video benchmarks (-LT) which better test long-tail recognition performance. * We propose a method, LMR, which increases the diversity in few-shot classes. It achieves highest average class accuracy across 3 benchmarks: naturally-collected EPIC-KITCHENS-100 and the two proposed curated benchmarks SSv2-LT and VideoLT-LT. Sec. 2 reviews works which investigate long-tail characteristics, leading to the introduction of a set of properties and the comparison of existing long-tail benchmarks. Sec. 3 introduces new benchmarks and demonstrates experimentally the value of these long-tail properties. Sec. 4 summarises prior long-tail and few-shot video recognition approaches. Sec. 5 introduces LMR, our method for long-tail video recognition. Comparative analysis is given in Sec. 6. Finally, ablations on LMR are performed in Sec. 7. ## 2 Properties of Long-Tail Benchmarks Established benchmarks for long-tail image recognition [37] have shaped the progress of long-tail methods. These followed earlier efforts that investigated the desired data distribution characteristics for long-tail benchmarks. In [6], experiments were performed with class counts that decay linearly or decay with a step-function. They noted that a larger imbalance between majority (now known as 'head') and minority (i.e. 'tail') classes increases difficulty and that a longer tail negatively affects classifier performance for both linear and step class count decays. Interestingly, imbalance was shown to affect higher complexity tasks (_e.g_. CIFAR) significantly more than lower complexity tasks (_e.g_. MNIST). Step and exponential class count decays were also investigated in [9], with similar conclusions. In [15], multiple long-tail versions of CIFAR [29] were curated by changing the minimum class size. Distribution characteristics were not explored numerically, but a drop in performance was reported as the number of samples per class decreased. Despite the richness of these early findings, imbalance (_i.e_. the ratio between the largest and smallest class sizes) has become the primary metric for characterising long-tail benchmarks. However, imbalance ignores other critical characteristics such as the number of few-shot classes. To reflect this, we define three properties which together allow a more informed comparison of long-tail benchmarks. These are visualised in Fig. 2: * **Head Length (H%):** The percentage of classes that formulates the majority of samples in the dataset. When classes are ranked by their size in the training set, these are the largest classes that together contribute \(x\)% of the training samples. While different values can be used for \(x\), we follow prior work that used 50% of the data to represent head classes [4, 52]. We consider the head length as the ratio of head classes to all classes. * **Few-Shot Length (F%):** The percentage of few-shot classes in the dataset, where a few shot class contains \(\leq x\) training samples. Prior works use values between 5 and 50 for \(x\)[2, 8, 41, 46, 48, 59, 61, 69, 75]. We follow long-tail image works and use 20 as the threshold for few-shot classes [37, 72]. 
* **Imbalance (I):** Previously used in [15], imbalance is the ratio between the size of the largest and smallest classes. Note that this metric alone does not provide a measure of how long-tailed a dataset is. These three properties are distribution agnostic, _i.e_. they can describe the properties of any benchmark whether the data is naturally-collected, or when it is sampled, no matter what distribution function is used. Using these three properties (H%, F%, I), we now quantitatively compare long-tail datasets across images and videos. ### Long-Tail Image Datasets The definitive example of a naturally-collected long-tail image recognition dataset is iNaturalist 2018 [24]. It is constructed from image and label contributions of plants and animals in the wild. As some species are rare, it would be very difficult to acquire more examples of these few-shot classes. As shown in Tab. 1, the iNaturalist image dataset has a head length of 7% (_i.e_. the 7% largest classes contribute 50% of the data), a few-shot length of 40% (_i.e_. 40% of the classes have 20 or fewer training examples) and an imbalance of 500. Thus, for methods to perform well on naturally-collected data, they must be good at learning a large number of few-shot classes. Methods also evaluate on curated long-tail versions of large-scale datasets to avoid over-specialisation on iNaturalist. The widely used ImageNet-LT [37], Places-LT [37] and CIFAR-LT [15] re-sample from the original datasets and have comparable properties to the naturally-collected iNaturalist, making them suitable for evaluating methods that target long-tail recognition. As shown in Tab. 1, these Figure 2: Visualisation of long-tail distribution properties: head length (H%), few-shot length (F%) and imbalance (I). Previous works have relied solely on imbalance, or used the terms “head”, “mid” and “tail” to describe different parts of the distribution with arbitrarily chosen sizes. In this paper, we use consistent properties to compare long-tail benchmarks across images and videos. have few-shot lengths of \(14\%,19\%\) and \(30\%\) respectively and head lengths of \(\leq\) 16\(\%\). ### Long-Tail Video Datasets By analogy, one naturally-collected large-scale and long-tail video dataset is EPIC-KITCHENS-100 [16]. Collection was unscripted recording of several days of kitchen activities. The number of samples of an action class roughly correlates to the how frequently the action occurs in daily activities. Table 1 shows EPIC-KITCHENS-100 has a head length of 3% and a few-shot length of 19%. There have been two attempts at curating video datasets to specifically test long-tail methods, Youtube8M [1] and VideoLT [71]. While these are appreciated efforts, they are far from ideal as long-tail benchmarks. Table 1 shows neither of these contain any few-shot classes (F% = 0), and VideoLT has a significantly smaller imbalance of \(43\) compared to \(100-996\) for long-tail image datasets. We build on this effort to propose long-tail benchmarks that satisfy all the desired properties. ## 3 Proposed Long-Tail Video Benchmarks Having identified weaknesses in current benchmarks used for long-tail video recognition, we first propose to use EPIC-KITCHENS-100 as it is naturally-collected and satisfies the long-tail properties (as defined in Sec. 2). We also propose to resample public video datasets, so their properties are in line with curated long-tail image datasets. 
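Since the following sections report H%, F%, and I for each benchmark, a small self-contained sketch of how these three properties can be computed from per-class training counts may be helpful. The function name and the toy counts below are illustrative only; the thresholds follow the definitions above (head classes jointly holding 50% of the training samples, few-shot classes with at most 20 samples).

```python
from typing import Dict, List

def long_tail_properties(class_counts: List[int],
                         head_mass: float = 0.5,
                         few_shot_threshold: int = 20) -> Dict[str, float]:
    """Head length H%, few-shot length F%, and imbalance I of a class-count list."""
    counts = sorted(class_counts, reverse=True)
    total = sum(counts)

    # Head length: smallest number of largest classes holding >= 50% of samples.
    running, n_head = 0, 0
    for c in counts:
        running += c
        n_head += 1
        if running >= head_mass * total:
            break

    n_few = sum(1 for c in counts if c <= few_shot_threshold)

    return {
        "H%": 100.0 * n_head / len(counts),
        "F%": 100.0 * n_few / len(counts),
        "I": counts[0] / counts[-1],
    }

# Example with a toy Pareto-like class distribution (illustrative values only).
toy_counts = [2500, 1200, 600, 300, 150, 80, 40, 20, 10, 5]
print(long_tail_properties(toy_counts))   # {'H%': 10.0, 'F%': 30.0, 'I': 500.0}
```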
SSv2 [21] is chosen as it is widely considered to be a good test of temporal understanding and has previously been re-purposed for evaluating few-shot works [8, 77]. Similarly, VideoLT [71] targets fine-grained classes. We call these curated versions SSv2-LT and VideoLT-LT, and resample these following the recipe used in [37] for ImageNet-LT and Places-LT (sampling from a Pareto distribution with \(\alpha=6\)). Table 1 demonstrates these curated versions match the desired properties as visualised in Fig. 1. For additional details including sampled number of instances per class, see Appendix A. Before proceeding to the method, ablations are first performed at a dataset level, where different curated versions of SSv2-LT are compared to demonstrate the impact on long-tail properties and the effect of few-shot classes. Full implementation details of models and metrics will be given in Sec. 6, but for these ablations it suffices to say that Motionformer [39] is trained with cross-entropy, reporting average class accuracy over the test set, as well as over few-shot, tail and head classes. ### Importance of Long-Tail Properties In Sec. 2, we noted that prior works use Imbalance (I) to identify a dataset as being long-tailed [15, 50]. We quantitatively showcase that imbalance alone is insufficient by constructing four variants of SSv2-LT (A, B, C, D), with a fixed training set size = 50.4k and a fixed imbalance I = 500. We vary the head length H% and the few-shot length F% as shown in Fig. 3. Variant C (which uses an identical decay to ImageNet-LT and Places-LT [37]), highlighted in blue, is the version used throughout this paper and proposed as the long-tail benchmark SSv2-LT. As H% decreases and F% increases (A \(\rightarrow\) D), there are significant drops in few-shot, tail and overall accuracy (up to 9%), whereas head performance improves. This is indicative of the distribution becoming more long-tailed. Because this behaviour occurs with fixed I, it can be concluded that H% and F% are indeed necessary for comparison of long-tail distributions. ### Effect of Few-Shot Classes To showcase the importance of few shot classes, i.e. classes with \(\leq 20\) samples in training, we increment all classes in SSv2-LT with a fixed number of additional samples \(+x\). We evaluate the performance over few-shot/tail/head classes2 as we add \(\{10,20,30,40,50\}\) samples per class. Fig. 
4 shows that the accuracy on few-shot \begin{table} \begin{tabular}{l l l l c c c c c c c c} \hline \hline & & & & \multicolumn{3}{c}{Proposed Properties} & \multicolumn{3}{c}{Class size} & \multicolumn{1}{c}{Num} & \multicolumn{1}{c}{Balanced} & \\ \cline{4-10} & Source & Dataset & Year & H\% & F\% & I & Max & Min & classes & test & Content \\ \hline \multirow{6}{*}{\begin{tabular}{l} \end{tabular} } & Natural & iNaturalist [24] & 2018 & 7 & 40 & 500 & 1000 & 2 & 8142 & ✓ & Photos of species \\ & Resampled & ImageNet-LT [37] & 2019 & 16 & 14 & 256 & 1280 & 5 & 1000 & ✓ & Image recognition \\ & Resampled & Places-LT [37] & 2019 & 8 & 19 & 996 & 4980 & 5 & 365 & ✓ & Photos of scenes \\ & Resampled & Cifar-LT-100 [15] & 2019 & 15 & 30 & 100 & 500 & 5 & 100 & ✓ & Image recognition \\ \hline \multirow{6}{*}{ \begin{tabular}{l} \end{tabular} } & Natural & EPIC-KITCHENS-100 Verbs [16] & 2020 & 3 & 19 & 14848 & 14848 & 1 & 97 & ✗ & Egocentric actions \\ & Collected & Youtube-8M [1] & 2016 & 2 & 0 & 6409 & 788288 & 123 & 3862 & ✗ & Youtube \\ & Collected & Something-Something V2 [21] & 2017 & 26 & 0 & 79 & 3234 & 41 & 174 & ✗ & Temporal reasoning \\ & Collected & VideoLT [71] & 2021 & 23 & 0 & 43 & 1912 & 44 & 1004 & ✗ & Youtube (fine-grained) \\ \cline{1-1} & Resampled & SSv2-LT (proposed) & 2022 & 9 & 32 & 500 & 2500 & 5 & 174 & ✗ & Temporal reasoning \\ \cline{1-1} & Resampled & VideoLT-LT (proposed) & 2022 & 12 & 38 & 110 & 550 & 5 & 772 & ✓ & Youtube (fine-grained) \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of datasets against long-tail properties: Head Length (H%), Few-Shot Length (F%) and Imbalance (I). Red highlighted rows contain naturally-collected datasets. The bottom two rows (blue) contain our proposed VideoLT-LT and SSv2-LT, which are curated to better match naturally-collected data than other video benchmarks. classes significantly increases when adding a small number of samples per class. The effect is smaller for tail classes and marginal for head. The maximum improvement of few-shot classes occurs around +20 samples/class, when no few-shot classes remain in training. To address this challenge, it is thus important to have a sufficient number of few-shot classes in long-tail benchmarks. Recall that naturally-collected datasets contain significant few-shot length (40% for iNaturalist and 19% for EPIC-KITCHENS-100). ## 4 Related Methods Having justified our proposed benchmarks and before introducing our method, we first review long-tail and few-shot video recognition methods. ### Long-Tail Methods There are two main approaches to tackle long-tail recognition: re-weighting and re-balancing. **Re-weighting** approaches impose a higher penalty when misclassifying samples from tail classes. This can be done by directly adjusting logits [40, 47, 57, 66, 32] or weighting the loss by class size [15, 53] or individual sample difficulty [18, 25, 31, 35, 43, 50]. Alternative approaches include label smoothing [73] and enforcing separation between class embeddings [49, 27, 34]. Re-weighting can also be achieved by enabling more experts to specialise on tail classes and combining predictions [7, 13, 30, 62, 72, 76]. **Re-balancing** approaches instead adjust the frequency at which examples from different classes are used in training, without adjusting the loss function. This can be achieved using a class-equalising feature bank [38] or more commonly by equal sampling from each class [22] or by instance difficulty [51, 67]. 
It has become standard practice to first train the representation with instance-balanced sampling [55] followed by class-balanced sampling [3, 26, 70]. Augmentations are known to introduce diversity into tail samples [33]. Such approaches combine the sample with a nearby class prototype [11], or create feature _clouds_ to expand tail classes [36]. Further augmentation approaches include combining class-specific and class-generic features [12], using a separate classifier to identify head samples that can be adjusted and re-labeled as tail classes [28], or pasting tail foreground objects onto backgrounds from head classes [42]. Contrastive learning has also been used to improve representations [14, 74]. For video, Framestack [71] proposes temporally mixing up samples, frame-wise, based on average-precision during training. Our proposed method, LMR, belongs to the re-balancing category. It is related to approaches for augmentation but differs in that it uses samples from _multiple_ other classes, weighting the reconstruction by the class count and jointly reconstructing all samples in the batch. ### Few-Shot Video Recognition Despite the infancy of the long-tail video recognition field, the related field of few-shot video recognition has been more widely studied [5, 8, 20, 45, 63, 65, 69, 75, 77, 78]. Instead of learning a long-tailed class distribution, few-shot methods learn to distinguish between a limited number of balanced few-shot classes (_e.g_. 5-way 5-shot). Few-shot video methods rely on attention between frames of the query video and all samples in the support set of each class [45, 63, 65, 78]. This requires the support set to be held in memory, which makes few-shot methods unsuitable for direct application to long-tail learning. Further, due to their design around balanced benchmarks, these methods cannot handle imbalance. Our method takes inspiration from few-shot works in designing an approach for long-tail video recognition. In particular, image [19] and video [45] few-shot methods use a reconstruction technique to measure the similarity between a query and a class. A similar technique is used in [44] as input to a text captioning module. Each video is reconstructed from similar videos in the batch, using a cross-modal embedding space. In contrast to these works, we apply reconstruction _across classes_ using multiple head samples to benefit those in the tail or those which are few shot. Figure 4: Effect of adding \(+x\) samples per class on SSv2-LT. Average class accuracy is reported overall and for head, tail and few-shot classes. Per-case improvement reported next to arrow. Figure 3: We compare four variants of SSv2-LT (A, B, C, D) with different H% and F% properties, while fixing I = 500, and the training dataset size = 50.4k. Top: percentage of head, tail and few-shot classes in each variant. Bottom: average class accuracy over the long-tail distribution. Variant C, highlighted in blue, is the proposed version used throughout the rest of this paper. We also
We propose Long-Tail Mixed Reconstruction (LMR), which aims to recover this diversity by computing a linear combination of the sample itself and weighted combinations of similar samples in the batch, weighted by the class size and followed by pairwise label mixing. In contrast to standard augmentation techniques, reconstructions are more representative of examples likely to be seen at test time, since they make use of visually similar samples from within the training set. We first describe how classes are treated differently based on their count. We then proceed to describe our reconstruction and pairwise label mixing. ### Long-Tailed Class Contribution We consider the long-tailed class distribution of samples in the training set, and take \(C_{y}\) as the count of the class with label \(y\). We define a contribution function \(\mathbf{c}(y)\), per class, which we use later for reconstructing instances. We first calculate \(\tilde{C}_{y}\) as the weight of class \(y\): \[\tilde{C}_{y}=\frac{1}{\log\left(C_{y}d+\epsilon\right)}, \tag{1}\] where \(d\) controls the decay, and \(\epsilon\) is a constant which ensures a positive denominator. These class weights can then be used to calculate the contribution function (low for head classes, high for tail): \[\mathbf{c}(y)=\frac{\tilde{C}_{y}-\min(\tilde{C}_{y})}{\max(\tilde{C}_{y})- \min(\tilde{C}_{y})}l. \tag{2}\] Here, \(0\leq l\leq 1\) is a hyperparameter controlling the contribution for the lowest class count. Note that these class contributions are established for the classes based on the training set, and not changed during training. ### Long-Tail Mixed Reconstruction **Setup.** Recognition methods combine a feature encoder \(\mathbf{f}(\cdot)\) and a classifier \(\mathbf{g}(\cdot)\). Data is fed to the model for training in the form of batches, where a batch \(X\) contains \(B\) videos \(X=\{x_{i}:i=1...B\}\) with associated labels \(Y=\{y_{i}:i=1...B\}\). Given the class contribution function from Eq. 2, we look up \(\mathbf{c}(Y)\) for the samples in the batch, given their class labels. To start, features for the batch are computed in the forward pass as \(Z=\mathbf{f}(X)\). We propose a mixed reconstructor \(\mathbf{mr}(\cdot,\cdot)\), which acts on features \(Z\) and labels \(Y\), and returns a new reconstructed representation with an updated label for every video in the batch. **Sample reconstruction.** We calculate cosine similarity \(\mathbf{s}\) between all features within the batch, \(S_{ij}=\mathbf{s}\left(Z_{i},Z_{j}\right)\). Note that here, \(i\) denotes the feature to be reconstructed, and \(j\) denotes the feature being used for the reconstruction. We then calculate an exclusion mask \(E\), avoiding self-weighting, i.e. samples should not contribute to their own reconstructions, and samples from few-shot classes are also avoided as these are already oversampled. The exclusion mask \(E\) is visualised in Fig. 4(a), and calculated as: \[E_{ij}=\begin{cases}0&\text{if }(i=j)\text{ or }(C_{y_{i}}\leq\omega)\\ 1&\text{otherwise}\end{cases} \tag{3}\] where \(\omega=20\) is the few-shot threshold. Next, we apply a softmax operation over non-masked elements per row (_i.e_. one softmax per \(i\)), which calculates reconstruction weights \(W\): \[W_{ij}=\frac{\exp(S_{ij})E_{ij}}{\sum_{k=1}^{B}\exp(S_{ik})E_{ik}}. \tag{4}\] We use a residual connection weighted by the class contribution - the smaller the class, the more the weighted features \(WZ\) contribute to the reconstruction of samples from that class. 
Specifically: \[R=\mathbf{c}(Y)WZ+(1-\mathbf{c}(Y))Z\, \tag{5}\] where \(\mathbf{c}(Y)\) (Eq. 2) are the contribution functions of the class labels in the batch and \(R\) are the reconstructed features. For few shot classes, the reconstruction is mostly formed from the weighted combination of other _similar_ samples in the batch. Note that these reconstructions have the same class labels \(Y\) as the features \(Z\) they replace. **Pairwise label mixing.** Once the reconstructions \(R\) are obtained, we take a step further by performing stochastic pairwise mixing (Fig. 4(b)). We use a mixing mask \(M\) such that: Figure 4: LMR overview: reconstruction (a) and label-mixing (b). \[M_{ij}=\begin{cases}\alpha_{i}&\text{if }(i=j)\\ 1-\alpha_{i}&\text{if }(j=\beta_{i})\\ 0&\text{otherwise}\end{cases} \tag{6}\] where \(\alpha\) is a \(B\)-dimensional set of mixing weights, one for each sample. Following standard mixing, \(\alpha_{i}=1\) with probability 0.5, and randomly \(0\leq\alpha_{i}\leq 1\) otherwise. \(\beta\) is a \(B\) dimensional sample selector, that selects a different sample from the batch. \(1\leq\beta_{i}\leq B,\beta_{i}\neq i\) and \(\beta_{i}\in\mathbb{N}\). We apply the mixing mask \(M\) to our reconstructions \(R\) and their labels \(Y\) such that \[\mathbf{mr}(Z,Y)=(MR,MY). \tag{7}\] We then pass these reconstructed and mixed features with the corresponding mixed labels to the classifier \(\mathbf{g}\) to train. ### Training and Inference As customary [26], the classifier \(\mathbf{g}\), acting on the backbone \(\mathbf{f}\), is first pre-trained with instance-based sampling and cross-entropy. Afterwards, \(\mathbf{g}\) is reset. LMR is then trained with class-balanced sampling and cross-entropy on \(\mathbf{g}\). This is backpropagated through the mixed reconstructor \(\mathbf{mr}\) and feature extractor \(\mathbf{f}\). At inference, \(\mathbf{mr}\) is discarded, as a suitable feature extractor \(\mathbf{f}\) and classifier \(\mathbf{g}\) have been learned for long-tail recognition. Each test sample/video is processed independently, _i.e_. there is no reconstruction, and labels and class counts are not used. ## 6 Experiments We first perform comparative analysis on EPIC-KITCHENS-100, SSv2-LT and VideoLT-LT. **Metrics.** The primary metric for long-tail video recognition is average class accuracy (Avg C/A), as it provides a fair evaluation when the test set is unbalanced. When the test set is balanced, as in the case of SSv2-LT and VideoLT-LT, Avg C/A and overall accuracy (Acc) are identical metrics. EPIC-KITCHENS-100 has an unbalanced test set so overall accuracy is also provided for reference. We also report average class accuracy for few-shot (marked "few" in tables), tail and head classes, as defined by the properties in Sec. 2. **Baselines.** We compare against the following methods, also identified in [71] as suitable for long-tail video recognition: * **CE:** Standard cross entropy trained with instance-balanced sampling. * **EQL:** As in **CE**, but using an Equalization Loss [54], which reduces the penalty for misclassifying a head class as a tail class. This baseline is currently used by video transformer works to address class imbalance [65]. * **cRT:** Classifier Retraining [26]. This is now the standard practice of instance-balanced sampling, followed by a classifier reset and class-balanced sampling. * **Mixup [68]:** Pairs of samples and their labels are mixed. * **Framestack [71]:** Mixes up video frames based on a running total of class average precision. 
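The following NumPy sketch summarises the LMR forward computation of Eqs. (1)-(7) above. It is not the authors' released code: the function names, the value of the constant \(\epsilon\), and the toy batch are assumptions, and only \(d\), \(l\), and \(\omega\) follow the values given in the text. Note also that Eq. (3) as printed conditions the few-shot exclusion on the reconstructed index \(i\); for the softmax in Eq. (4) to stay well defined, the exclusion presumably applies to the contributing index \(j\), which is what the mask below implements.

```python
import numpy as np

rng = np.random.default_rng(0)

def class_contribution(train_counts, d=0.25, l=0.6, eps=1.0):
    """Eqs. (1)-(2): per-class contribution c(y), low for head, high for tail."""
    counts = np.asarray(train_counts, dtype=float)
    w = 1.0 / np.log(counts * d + eps)              # Eq. (1); the value of eps is assumed
    return (w - w.min()) / (w.max() - w.min()) * l  # Eq. (2)

def lmr_batch(Z, y, train_counts, omega=20):
    """One LMR step on batch features Z (B x D) with integer labels y."""
    B = Z.shape[0]
    c = class_contribution(train_counts)[y]          # contribution per sample

    # Cosine similarities between all batch features.
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    S = Zn @ Zn.T

    # Exclusion mask E, Eq. (3): no self-weighting, no few-shot contributors.
    E = np.ones((B, B))
    np.fill_diagonal(E, 0.0)
    E[:, np.asarray(train_counts)[y] <= omega] = 0.0

    # Masked softmax reconstruction weights W, Eq. (4).
    expS = np.exp(S) * E
    W = expS / (expS.sum(axis=1, keepdims=True) + 1e-12)

    # Residual combination, Eq. (5): small classes lean more on W @ Z.
    R = c[:, None] * (W @ Z) + (1.0 - c[:, None]) * Z

    # Stochastic pairwise label mixing, Eqs. (6)-(7), with one-hot labels.
    Y = np.eye(len(train_counts))[y]
    alpha = np.where(rng.random(B) < 0.5, 1.0, rng.random(B))
    beta = np.array([rng.choice([j for j in range(B) if j != i]) for i in range(B)])
    M = np.zeros((B, B))
    M[np.arange(B), np.arange(B)] = alpha
    M[np.arange(B), beta] += 1.0 - alpha
    return M @ R, M @ Y                              # mixed features and mixed labels

# Toy usage: 8 samples, 4 classes with skewed training counts (illustrative only).
train_counts = [900, 120, 25, 8]
Z = rng.normal(size=(8, 16))
y = np.array([0, 0, 0, 1, 1, 2, 3, 3])
R_mix, Y_mix = lmr_batch(Z, y, train_counts)
```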
**Implementation Details.** For all experiments on EPIC-KITCHENS-100 and SSv2-LT, we use Motionformer [39], a spatio-temporal transformer with attention guided by trajectories which achieves strong results on EPIC-KITCHENS-100 and SSv2. We use the default configuration of 16 frame input and 224\(\times\)224 resolution with 16\(\times\)16 patches. We train on 8\(\times\)V100 GPUs, with a distributed batch of 56 samples. To enable processing on multiple GPUs, we maintain a feature bank of previous iterations per GPU. Other details (architecture, optimisation _etc._) follow the default code of Motionformer and are noted in Appendix B. For all methods apart from CE and EQL, we follow the cRT disentanglement approach [26]. We first train end-to-end using instance-balanced sampling with a cross-entropy loss. We then reset the classifier and switch to class-balanced sampling for a full training run. For VideoLT-LT experiments, we use the codebase provided with the original dataset and accompanying method Framestack [71] to be directly comparable to prior works. It uses pre-extracted ResNet-50 [23] frame features with a non-linear classifier and score aggregation. We use the default batch size of 128 samples trained on 1\(\times\)P100 GPU. For LMR, the few-shot threshold is \(\omega=20\). Decay and scaling parameters for the contribution function are \(d=0.25\) and \(l=0.6\) for SSv2-LT and VideoLT-LT, and \(d=0.15\) and \(l=1.0\) for EPIC-KITCHENS-100 as it has a smaller minimum class size. **Results.** Table 2 shows the results for EPIC-KITCHENS-100, SSv2-LT and VideoLT-LT. LMR performs best on all datasets for average class accuracy. Note that prior results were reported on datasets that did not contain any few shot classes (see Sec 2.1). By evaluating on EPIC-KITCHENS-100, and proposing benchmarks with few-shot classes, we can expose the limitations of these methods previously deemed competitive for long-tail video recognition. LMR also obtains the best results on few-shot classes (highlighted in green) on all datasets. For tail classes, LMR performs comparably or outperforms prior baselines. For head classes, LMR performs comparably to long-tail baselines on EPIC-KITCHENS-100 and SSv2-LT, but takes a bigger hit on VideoLT-LT. We do not change any of the hyperparameters across datasets for fairer comparison, but consider results can be further improved if optimised per dataset. Figure 6 shows class improvements of LMR compared to CE on EPIC-KITCHENS-100. Significant improvements are seen on smaller classes (few-shot and end of tail). Some head classes drop in performance, particularly the largest. Similar trends were found on SSv2-LT and VideoLT-LT. Figure 7 shows selected examples from all datasets. CE tends to predict few-shot classes as visually similar head classes. For example, on EPIC-KITCHENS-100, CE misclassifies the few-shot "carry" as the head class "put" due to visual similarity of holding the cup. Consistently, LMR predicts the few-shot class correctly. A failure case is shown for SSv2-LT, where LMR predicts the head class "throwing something" as the tail class "throwing something in the air and letting it fall." ## 7 Ablations We perform all ablations on EPIC-KITCHENS-100 and SSv2-LT using the Motionformer backbone. **LMR Ablation.** Table 3 ablates the design choices of LMR against the full version (first row). First, class contributions are replaced by a constant (0.5 in A and 1 in B). 
When reconstructions are used solely, without the residual connection (B), performance decreases dramatically. Using label mixing without reconstructions is shown in (C) as well as reconstructions without label mixing (D). Interestingly, label mixing has a bigger impact on performance for SSv2-LT than EPIC-KITCHENS-100. **Contribution parameters.** Reconstructions are combined with original representations according to the contribution function \(\mathbf{c}(\mathbf{Y})\) in Eq. 5, which maps class count to a contribution between 0 and 1. It is parameterised by the decay \(d\) and the contribution \(l\) for the lowest class count. First, \(d\) is fixed at \(0.25\) and \(l\) is varied between 0.0 and 1.0. Results are shown in Tab. 4, where 0.6 performs best on the few-shot classes and overall. Next, \(l\) is fixed at 0.6 and \(d\) is varied, with results shown in Tab. 5. In both cases, results have a region of stability, with the best combination being \(l=0.6\) and \(d=0.25\). **Number of Samples Used for Reconstruction.** We assess the impact of the number of samples in the batch used in the reconstruction process (\(B\)). Table 6 shows how varying the number of samples affects overall performance on SSv2-LT. Best performance is reported at our default of 56 samples. **Threshold for Masked Classes in Reconstruction.** The threshold \(\omega\), used for masking in Eq. 3, is by default set to 20, which is the threshold for few-shot classes. The masking is used to prevent few-shot samples contributing to \begin{table} \begin{tabular}{l||c c c c|c c c|c c c|c c c} \hline \hline & \multicolumn{4}{c|}{**EPIC-KITCHENS-100**} & \multicolumn{4}{c|}{**SSv2-LT**} & \multicolumn{4}{c}{**VideoI-LT**} \\ Method & Few & Tail & Head & Avg C/A & Acc & Few & Tail & Head & Avg C/A = Acc & Few & Tail & Head & Avg C/A = Acc \\ \hline CE & 0.0 & 12.3 & **55.2** & 21.2 & 63.5 & 2.0 & 38.9 & **75.2** & 29.7 & 17.4 & 51.1 & **75.9** & 41.0 \\ EQL [54] & 0.0 & 12.4 & 55.0 & 21.1 & 63.3 & 3.1 & 39.0 & **75.2** & 30.1 & 17.4 & 51.0 & 75.4 & 40.9 \\ cRT [26] & 21.4 & 35.0 & 51.1 & 36.9 & 50.1 & **14.9** & **45.6** & 58.6 & 36.5 & 30.5 & **56.9** & 64.0 & 47.5 \\ Mixup [68] & 25.8 & 33.8 & 51.7 & 36.8 & 51.7 & 17.4 & **46.6** & 57.1 & 37.8 & 15.8 & 48.9 & 72.5 & 38.9 \\ Framestack [71] & 23.0 & 33.6 & 52.1 & 36.5 & 52.5 & 15.5 & 46.1 & 61.9 & 37.2 & 18.2 & 51.8 & 74.5 & 41.5 \\ LMR & **35.7** & **36.8** & 51.1 & **39.7** & 51.3 & **17.9** & 46.5 & 61.0 & **38.3** & **34.8** & 56.8 & 62.1 & **48.9** \\ \hline \hline \end{tabular} \end{table} Table 2: Long-tail results on EPIC-KITCHENS-100 Verbs Val set [16], SSv2-LT and VideoLT-LT. Note that average class accuracy (Avg C/A) is the same as overall accuracy (Acc) for balanced test sets (SSv2-LT and VideoLT-LT). EPIC-KITCHENS-100 has an unbalanced test set, so overall accuracy, which favours over-prediction of head classes, is provided for reference. LMR obtains the highest average class accuracy on all datasets, as well as the highest average class accuracy over few-shot classes. Figure 6: Improvements of LMR over CE on EPIC-KITCHENS-100. Classes are ordered by size and marked as head/tail/few-shot. Figure 7: Qualitative examples from all benchmarks comparing CE, cRT and the proposed LMR. Blue, pink and green indicate whether the prediction is from a head, tail or few-shot class. the reconstruction of other samples. Table 7 shows the effect of varying \(\omega\). Best performance is obtained at \(\omega=20\). **Visualising LMR.** Fig. 
8 shows t-SNE [60] projections of representations without LMR (_i.e_. cRT) and with. cRT pushes the few shot classes (green) to the periphery. LMR results in larger, _i.e_. more diverse, few-shot clusters towards the centre of the projection. This indicates a higher proximity to head and tail classes which creates robust class boundaries and better generality to unseen test samples. ## 8 Conclusion In this paper, we defined a set of properties, enabling quantitative comparison of long-tail distributions. We showcased that curated long-tail image datasets are comparable to naturally-collected ones, while previously proposed video datasets fall short. Based on these findings, we proposed new benchmarks, SSv2-LT and VideoLT-LT, and suggested their use, alongside EPIC-KITCHENS-100, for evaluating long-tail video recognition. We proposed LMR, a method for long-tail video recognition, which reconstructs few-shot samples as weighted combinations of other samples in the batch. A residual connection, weighted by the class size, combines instances with their reconstructions, followed by pairwise label mixing. LMR reduces overfitting to instances from few-shot classes, and outperforms prior methods on the three benchmarks. We hope our proposed benchmarks and method will provide a foundation for long-tail video recognition, and encourage further contributions applicable to naturally-collected data. **Acknowledgments.** We use publicly available datasets and publish our proposed benchmarks. Research is funded by EPSRC UMPIRE (EP/T004991/1), EPSRC SPHERE Next Steps (EP/R005273/1), EPSRC DTP and EPSRC PG Visual AI (EP/T028572/1). We acknowledge the use of HPC Tier 2 Facility Jade 2 and Bristol's Blue Crystal 4 facility.
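To make the role of the contribution function \(\mathbf{c}(\mathbf{Y})\) and the residual combination discussed in Secs. 6-7 above concrete, here is a minimal PyTorch sketch. The exact expression of Eq. 5 is not reproduced in this text, so the exponential-decay form, the `min_count` reference point, and the function names are illustrative assumptions; only the meanings of the decay \(d\), the lowest-count contribution \(l\), and the direction of the residual mixing (a contribution of 1 means the reconstruction is used alone) follow the paper.

```python
import torch

def contribution(class_counts: torch.Tensor, d: float = 0.25,
                 l: float = 0.6, min_count: int = 1) -> torch.Tensor:
    # Hypothetical form: the smallest class receives contribution `l`,
    # which decays (rate `d`) towards 0 as the class count grows.
    return (l * torch.exp(-d * (class_counts.float() - min_count))).clamp(0.0, 1.0)

def mix_with_reconstruction(x: torch.Tensor, x_recon: torch.Tensor,
                            counts: torch.Tensor) -> torch.Tensor:
    # x, x_recon: (B, D) batch features and their reconstructions.
    # counts: (B,) training-set count of each sample's class.
    # Few-shot samples lean on the reconstruction; head classes keep
    # (almost all of) their original representation.
    c = contribution(counts).unsqueeze(1)           # (B, 1)
    return (1.0 - c) * x + c * x_recon
```

Under this assumed form and the reported defaults (\(d=0.25\), \(l=0.6\)), a class near the minimum count would draw roughly 60% of its representation from the reconstruction, while large head classes would be left essentially unchanged.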
2310.03813
Cold-start Bundle Recommendation via Popularity-based Coalescence and Curriculum Heating
How can we recommend cold-start bundles to users? The cold-start problem in bundle recommendation is crucial because new bundles are continuously created on the Web for various marketing purposes. Despite its importance, existing methods for cold-start item recommendation are not readily applicable to bundles. They depend overly on historical information, even for less popular bundles, failing to address the primary challenge of the highly skewed distribution of bundle interactions. In this work, we propose CoHeat (Popularity-based Coalescence and Curriculum Heating), an accurate approach for cold-start bundle recommendation. CoHeat first represents users and bundles through graph-based views, capturing collaborative information effectively. To estimate the user-bundle relationship more accurately, CoHeat addresses the highly skewed distribution of bundle interactions through a popularity-based coalescence approach, which incorporates historical and affiliation information based on the bundle's popularity. Furthermore, it effectively learns latent representations by exploiting curriculum learning and contrastive learning. CoHeat demonstrates superior performance in cold-start bundle recommendation, achieving up to 193% higher nDCG@20 compared to the best competitor.
Hyunsik Jeon, Jong-eun Lee, Jeongin Yun, U Kang
2023-10-05T18:02:03Z
http://arxiv.org/abs/2310.03813v3
# Accurate Cold-start Bundle Recommendation via Popularity-based Coalescence and Curriculum Heating ###### Abstract. How can we accurately recommend cold-start bundles to users? The cold-start problem in bundle recommendation is critical in practical scenarios since new bundles are continuously created for various marketing purposes. Despite its importance, no previous studies have addressed cold-start bundle recommendation. Moreover, existing methods for cold-start item recommendation overly rely on historical information, even for unpopular bundles, failing to tackle the primary challenge of the highly skewed distribution of bundle interactions. In this work, we propose CoHeat (Popularity-based Coalescence and Curriculum Heating), an accurate approach for the cold-start bundle recommendation. CoHeat tackles the highly skewed distribution of bundle interactions by incorporating both historical and affiliation information based on the bundle's popularity when estimating the user-bundle relationship. Furthermore, CoHeat effectively learns latent representations by exploiting curriculum learning and contrastive learning. CoHeat demonstrates superior performance in cold-start bundle recommendation, achieving up to 193% higher nDCG@20 compared to the best competitor. cold-start bundle recommendation; curriculum learning; contrastive learning + Footnote †: journal: Information systems Recommender systems + Footnote †: journal: Information systems Recommender systems ## 1. Introduction _How can we accurately recommend cold-start bundles to users?_ Bundle recommendation has garnered significant attention in both academia and industry since it enables providers to offer items to users with one-stop convenience (Kang et al., 2018). In particular, recommending new bundles to users (i.e. cold-start bundle recommendation) is important in practical scenarios because the new bundles are constantly created for various marketing purposes (Kang et al., 2018). In recent years, bundle recommendation has seen advancements through matrix factorization-based approaches (Chen et al., 2018; Li et al., 2018; Li et al., 2018) and graph learning-based approaches (Chen et al., 2018; Li et al., 2018; Li et al., 2018). However, they have been developed for a warm-start setting, where all bundles possess historical interactions with users. Consequently, they fail to effectively perform in a cold-start setting, where certain bundles are devoid of historical interactions. This is because warm-start methods rely highly on historical information to learn bundle representations. On the other hand, the cold-start problem in item recommendation has been extensively studied, with a focus on aligning behavior representations with content representations. For instance, generative methods have aimed to model the generation of item behavior representations using mean squared error (Krizhevsky et al., 2015) and adversarial loss (Chen et al., 2018). Dropout-based methods (Krizhevsky et al., 2015; Li et al., 2018) have aimed to bolster robustness to behavior information by randomly dropping the behavior embedding in the training phase. More recently, contrastive learning-based methods (Zhu et al., 2018; Li et al., 2018) have shown superior performance by reducing the discrepancy between the distributions of behavior and content information of items. However, none of the existing works have explicitly considered the skewed distribution of interactions which is a pivotal aspect in bundle recommendation as shown in Figure 0(a). 
For unpopular bundles, aligning behavior representations from insufficient historical information with content representations amplifies inherent biases and makes it difficult to learn meaningful representations; this results in sacrificing the performance on a warm-start setting to improve the performance on a cold-start setting (see Figure 2). In this paper, we propose CoHeat (Popularity-based Coalescence and Curriculum Heating), an accurate method for cold-start bundle recommendation. CoHeat constructs representations of users and bundles using two distinct graph-based views: history-view and affiliation-view. The history-view graph is grounded in historical interactions between users and bundles, whereas the affiliation-view graph captures information rooted in bundle affiliations. To handle the extremely skewed distribution, CoHeat strategically leverages both views in its predictions, emphasizing affiliation-view for less popular bundles since they provide richer information than Figure 1. (a) Extremely skewed distribution of bundle interactions in real-world datasets (data statistics are summarized in Table 1). (b-c) For an unpopular bundle, history-view provides insufficient information while affiliation-view provides sufficient information. the sparse history-view, as shown in Figures 0(b) and 0(c). In addition, to effectively learn the affiliation-view representations which are fully used for cold-start bundles, CoHeat exploits a curriculum learning approach that gradually shifts the training focus from the history-view to the affiliation-view. CoHeat further exploits a contrastive learning approach to align the representations of the two views effectively. Our contributions are summarized as follows: * **Problem.** To our knowledge, this is the first work that tackles the cold-start problem in bundle recommendation, a challenging problem of significant impact in real-world scenarios. * **Method.** We propose CoHeat, an accurate method for cold-start bundle recommendation. CoHeat effectively treats the extremely skewed distribution of interactions in order to accurately recommend cold-start bundles based on their affiliations. * **Experiments.** We experimentally show that CoHeat provides the state-of-the-art performance achieving up to 193% higher nDCG@20 compared to the best competitor in cold-start bundle recommendation (see Figure 2). ## 2. Preliminaries ### Problem Definition The problem of cold-start bundle recommendation is defined as follows. Let \(\mathcal{U}\), \(\mathcal{B}\), and \(\mathcal{I}\) be the sets of users, bundles, and items, respectively. Among the bundles, \(\mathcal{B}_{w}\subset\mathcal{B}\) refers to the warm-start bundles that have at least one historical interaction with users, while \(\mathcal{B}_{c}=\mathcal{B}\setminus\mathcal{B}_{w}\) represents the cold-start bundles that lack any historical interaction with users. The observed user-bundle interactions, user-item interactions, and bundle-item affiliations are respectively defined as \(\mathcal{X}=\{(u,b)|u\in\mathcal{U},b\in\mathcal{B}_{w}\}\), \(\mathcal{Y}=\{(u,i)|u\in\mathcal{U},i\in I\}\), and \(\mathcal{Z}=\{(b,i)|b\in\mathcal{B},i\in I\}\). Given \(\{\mathcal{X},\mathcal{Y},\mathcal{Z}\}\), our goal is to recommend \(k\) bundles from \(\mathcal{B}\) to each user \(u\in\mathcal{U}\). Note that the given interactions are observed only for warm bundles but the objective includes recommending also cold bundles to users. 
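To fix the notation just introduced in code form, the observed sets \(\mathcal{X}\), \(\mathcal{Y}\), and \(\mathcal{Z}\) can be held as binary bipartite adjacency matrices, which is the form consumed by the graph-based views of Section 3.2. The helper name, variable names, and the use of SciPy sparse matrices below are illustrative assumptions, not part of the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix

def bipartite(pairs, n_rows, n_cols):
    # 0/1 adjacency matrix from observed (row, col) index pairs.
    pairs = np.asarray(pairs, dtype=np.int64)
    data = np.ones(len(pairs), dtype=np.float32)
    return csr_matrix((data, (pairs[:, 0], pairs[:, 1])), shape=(n_rows, n_cols))

# X: user-bundle interactions (observed for warm-start bundles only),
# Y: user-item interactions, Z: bundle-item affiliations, e.g.
# ub = bipartite(x_pairs, n_users, n_bundles)
# ui = bipartite(y_pairs, n_users, n_items)
# bi = bipartite(z_pairs, n_bundles, n_items)
```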
The central challenge in cold-start bundle recommendation, compared to traditional bundle recommendation, lies in accurately predicting the relationship between a user \(u\in\mathcal{U}\) and a cold-start bundle \(b\in\mathcal{B}_{c}\) in the absence of any historical interactions of \(b\). Hence, the crux of addressing the problem is to effectively estimate the representations of cold-start bundles using their affiliation information. ### Curriculum Learning Curriculum learning, inspired by human learning, structures training from simpler to more complex tasks, unlike standard approaches that randomize task order (Bang et al., 2019; Li et al., 2020). Its effectiveness has been proven in various domains, including computer vision (Song et al., 2019; Wang et al., 2020), natural language processing (Liu et al., 2019; Wang et al., 2020), robotics (Liu et al., 2019; Wang et al., 2020), and recommender systems (Liu et al., 2019; Li et al., 2020). In this work, we harness curriculum learning to enhance the learning process of user-bundle relationships. We initiate with a focus on the more straightforward history-view embeddings and then progressively shift attention to the intricate affiliation-view embeddings. This strategy stems from the ease of learning history-view embeddings, which directly capture collaborative signals from historical interactions. In contrast, affiliation-view embeddings are more complicated due to their dependence on the representations of affiliated items. ### Contrastive Learning Contrastive learning aims to learn meaningful embeddings by distinguishing between similar and dissimilar data samples. It has consistently demonstrated superior performance across a range of research fields, including computer vision (Wang et al., 2019; Li et al., 2020), natural language processing (Liu et al., 2019; Li et al., 2020), and recommender systems (Liu et al., 2019; Wang et al., 2020). Specifically, CrossCBR (Liu et al., 2019) recently achieved a good performance in bundle recommendation by regularizing embeddings of users and bundles using InfNoCE (Wang et al., 2020) between history-view and affiliation-view. However, CrossCBR aligns the two views while equally treating them in prediction. In contrast, our work adaptively modulates the weights of these views based on bundle popularity, thereby facilitating information transfer from the more informative view Figure 2. Performance comparison between CoHeat and competitors on three real-world datasets: Youshu, NetEase, and iFashion. The performance is evaluated through Recall@20 for all experiments. We mark cold-start methods as orange, and warm-start methods as red. CoHeat demonstrates superior performance over existing methods in both cold and warm settings, with a notable advantage in outperforming competitors. to its sparser counterpart. Additionally, in our contrastive learning approach, we utilize the alignment and uniformity loss (Krizhevsky et al., 2014). This has been shown to surpass InfoNCE in various applications (Krizhevsky et al., 2014; Krizhevsky et al., 2014), as it directly optimizes the core perspectives of contrastive learning. ## 3. Proposed Method ### Overview We address the following challenges to achieve high performance on cold-start bundle recommendation. 1. **Handling highly skewed interactions.** Previous works overly depend on history-view representations, which are unreliable if bundles have sparse interactions. How can we effectively learn the representations from highly skewed interactions? 2. 
**Effectively learning affiliation-view representations.** Despite the ample information provided by the affiliation-view, multiple items in a bundle complicate learning of these representations. How can we effectively learn the affiliation-view representations? 3. **Bridging the gap between two view representations.** Aligning history-view and affiliation-view is crucial, as we estimate future interactions of cold bundles only using their affiliations. How can we effectively reconcile these two view representations? To address these challenges, we propose CoHeat (Popularity-based Coalescence and Curriculum Heating) with the following main ideas. 1. **Popularity-based coalescence.** For the score between users and bundles, we propose the coalescence of two view scores, with less popular bundles relying more on affiliation-view scores and less on history-view scores. 2. **Curriculum heating.** We propose a curriculum learning approach that focuses initially on training representations using the history-view, gradually shifting the focus to the affiliation-view. 3. **Representation alignment and uniformity.** We exploit a representation alignment and uniformity approach to effectively reconcile the history-view and affiliation-view representations. Figure 3 depicts the schematic illustration of CoHeat. Given user-bundle interactions, user-item interactions, and bundle-item affiliations, CoHeat forms two graph-based views. Then, it predicts user-bundle scores by coalescing scores from both views based on bundle popularity. During training, CoHeat prioritizes history-view initially, transitioning progressively to affiliation-view via curriculum heating. CoHeat also exploits alignment and uniformity loss to regularize both views. ### Two Graph-based Views The objective of bundle recommendation is to estimate the relationship between users and bundles by learning their latent representations. We utilize graph-based representations of users and bundles to fully exploit the given user-bundle interactions, user-item interactions, and bundle-item affiliations. We construct history-view and affiliation-view graphs and use LightGCN (Hu et al., 2017) to obtain embeddings of users and bundles (Hu et al., 2018). **History-view representation and score.** In history-view, we aim to capture the behavior signal between users and bundles. Specifically, we construct a bipartite graph using user-bundle interactions, and propagate the historical information using a LightGCN. Figure 3. Overview of CoHeat (see Section 3 for details). The \(k\)'th layer of the LightGCN is computed as follows: \[\begin{split}\mathbf{h}_{u}^{(k)}&=\sum_{b\in\mathcal{ N}_{u}}\frac{1}{\sqrt{|\mathcal{N}_{u}|}\sqrt{|\mathcal{N}_{b}|}}\,\mathbf{h}_{b}^{(k-1)}, \\ \mathbf{h}_{b}^{(k)}&=\sum_{u\in\mathcal{N}_{b}} \frac{1}{\sqrt{|\mathcal{N}_{b}|}\sqrt{|\mathcal{N}_{u}|}}\,\mathbf{h}_{u}^{(k -1)},\end{split} \tag{1}\] where \(\mathbf{h}_{u}^{(k)},\mathbf{h}_{b}^{(k)}\in\mathbb{R}^{d}\) are the embeddings of user \(u\) and bundle \(b\) at \(k\)'th layer, respectively; \(\mathcal{N}_{u}\) and \(\mathcal{N}_{b}\) are the sets of user \(u\)'s neighbors and bundle \(b\)'s neighbors in the user-bundle graph, respectively. \(\mathbf{h}_{u}^{(0)},\mathbf{h}_{b}^{(0)}\in\mathbb{R}^{d}\) are randomly initialized before the training of the model. 
We obtain the history-view representations of user \(u\) and bundle \(b\) by aggregating the embeddings from all layers with a weighting approach that places greater emphasis on the lower layers as follows: \[\mathbf{h}_{u}=\sum_{k=0}^{K}\frac{1}{k+1}\mathbf{h}_{u}^{(k)},\mathbf{h}_{ b}=\sum_{k=0}^{K}\frac{1}{k+1}\mathbf{h}_{b}^{(k)}, \tag{2}\] where \(\mathbf{h}_{u},\mathbf{h}_{b}\in\mathbb{R}^{d}\) are the history-view embeddings of user \(u\) and bundle \(b\), respectively; \(K\) denotes the last layer. Finally, the history-view score between user \(u\) and bundle \(b\) is defined as \(\mathbf{h}_{ub}=\mathbf{h}_{u}^{\mathrm{T}}\mathbf{h}_{b}\). **Affiliation-view representation and score.** In affiliation-view, we aim to learn the relationship between users and bundles from the perspective of item affiliations. Specifically, we construct a bipartite graph using user-item interactions, and propagate the historical information using another LightGCN. Then, we obtain bundle representations by aggregating the affiliated items' representations. The \(k\)'th layer of the LightGCN is computed as follows: \[\begin{split}\mathbf{a}_{u}^{(k)}&=\sum_{i\in \mathcal{N}_{u}}\frac{1}{\sqrt{|\mathcal{N}_{u}^{\prime}|}\sqrt{|\mathcal{N}_{ i}|}}\,\mathbf{a}_{i}^{(k-1)},\\ \mathbf{a}_{i}^{(k)}&=\sum_{u\in\mathcal{N}_{i}} \frac{1}{\sqrt{|\mathcal{N}_{i}|}}\,\mathbf{a}_{u}^{(k-1)},\end{split} \tag{3}\] where \(\mathbf{a}_{u}^{(k)},\mathbf{a}_{i}^{(k)}\in\mathbb{R}^{d}\) are the embeddings of user \(u\) and item \(i\) at \(k\)'th layer, respectively; \(\mathcal{N}_{u}^{\prime}\) and \(\mathcal{N}_{i}\) are the sets of user \(u\)'s neighbors and item \(i\)'s neighbors in the user-item graph, respectively. \(\mathbf{a}_{u}^{(0)},\mathbf{a}_{i}^{(0)}\in\mathbb{R}^{d}\) are randomly initialized before the training. We obtain the affiliation-view representations of user \(u\) and item \(i\) by aggregating the embeddings from all layers with a weighting approach as follows: \[\begin{split}\mathbf{a}_{u}=\sum_{k=0}^{K}\frac{1}{k+1}\mathbf{a }_{u}^{(k)},\mathbf{a}_{i}=\sum_{k=0}^{K}\frac{1}{k+1}\mathbf{a}_{i}^{(k)}, \end{split} \tag{4}\] where \(\mathbf{a}_{u},\mathbf{a}_{i}\in\mathbb{R}^{d}\) are the affiliation-view embeddings of user \(u\) and item \(i\), respectively; \(K\) indicates the last layer. We then obtain the affiliation-view representations of bundle \(b\) by an average pooling as \(\mathbf{a}_{b}=\frac{1}{|\mathcal{N}_{b}^{\prime}|}\sum_{i\in\mathcal{N}_{b}} \mathbf{a}_{i}\), where \(\mathcal{N}_{b}^{\prime}\) is the set of bundle \(b\)'s affiliated items. Finally, the affiliation-view score between user \(u\) and bundle \(b\) is defined as \(a_{ub}=\mathbf{a}_{u}^{\mathrm{T}}\mathbf{a}_{b}\). ### Popularity-based Coalescence For recommending bundles to users, our objective is to estimate the final score \(\hat{y}_{ub}\in\mathbb{R}\) between user \(u\) and bundle \(b\) using scores \(h_{ub}\) and \(a_{ub}\), derived from the two distinct views. However, real-world datasets present an inherent challenge of handling the extremely skewed distribution of interactions between users and bundles, as illustrated in Figure 0(a). While both views are informative, many unpopular bundles are underrepresented in the history-view due to the insufficient interactions as illustrated in Figure 0(b). In contrast, they are often sufficiently represented in the affiliation-view, as depicted in Figure 0(c). 
A uniform weighting strategy for both views, as in CrossCBR, risks amplifying biases inherent to the history-view, especially for the unpopular bundles. This predicament is further exacerbated for cold-start bundles devoid of history-view data. To deal with this challenge, we propose two desired properties for the user-bundle relationship score \(\hat{y}_{ub}\). _Property 1_ (History-view influence mitigation): The influence of history-view score should be mitigated as a bundle's interaction number decreases, i.e. \(\frac{\partial\hat{y}_{ub}}{\partial h_{ub}}<\frac{\partial\hat{y}_{ub^{\prime }}}{\partial h_{ub^{\prime}}}\) if \(n_{b}<n_{b^{\prime}}\) where \(n_{b}\) is the number of interactions of bundle \(b\). _Property 2_ (Affiliation-view influence amplification): The influence of affiliation-view score should be amplified as a bundle's interaction number decreases, i.e. \(\frac{\partial\hat{y}_{ub}}{\partial h_{ub}}>\frac{\partial\hat{y}_{ub^{\prime }}}{\partial a_{ub^{\prime}}}\) if \(n_{b}<n_{b^{\prime}}\) where \(n_{b}\) is the number of interactions of bundle \(b\). Properties 1 and 2 are crucial in achieving a balanced interplay between the history-view and affiliation-view scores based on bundle popularities. Specifically, they ensure a heightened emphasis on the affiliation-view over the history-view for less popular bundles. We propose the user-bundle relationship score \(\hat{y}_{ub}\) that satisfies the two desired properties by weighting the two scores \(h_{ub}\) and \(a_{ub}\) based on bundle popularities as follows: \[\hat{y}_{ub}=\gamma_{b}h_{ub}+(1-\gamma_{b})a_{ub}, \tag{5}\] where \(\gamma_{b}\in[0,1]\), which is defined in the next subsection, denotes a weighting coefficient such that \(\gamma_{b}>\gamma_{b^{\prime}}\) if \(n_{b}>n_{b^{\prime}}\). A smaller value of \(\gamma_{b}\) (i.e. a smaller value of \(n_{b}\)) ensures that the score \(\hat{y}_{ub}\) is predominantly influenced by the affiliation-view score \(a_{ub}\). We show in Lemmas 3.1 and 3.2 that Equation (5) satisfies all the desired properties. **Lemma 3.1**.: _Equation (5) satisfies Property 1._ Proof.: \(\frac{\partial\hat{y}_{ub}}{\partial h_{ub}}=\gamma_{b}\). Thus, \(\frac{\partial\hat{y}_{ub}}{\partial h_{ub}}<\frac{\partial\hat{y}_{ub^{\prime }}}{\partial h_{ub^{\prime}}}\) if \(n_{b}<n_{b^{\prime}}\) because \(\gamma_{b}<\gamma_{b^{\prime}}\). **Lemma 3.2**.: _Equation (5) satisfies Property 2._ Proof.: \(\frac{\partial\hat{y}_{ub}}{\partial a_{ub}}=1-\gamma_{b}\). Thus, \(\frac{\partial\hat{y}_{ub}}{\partial a_{ub}}>\frac{\partial\hat{y}_{ub^{\prime}}}{ \partial a_{ub^{\prime}}}\) if \(n_{b}<n_{b^{\prime}}\) because \(1-\gamma_{b}>1-\gamma_{b^{\prime}}\). ### Curriculum Heating Despite the ample information provided by the affiliation-view, multiple items in a bundle complicate the learning of affiliation-view representations. This difficulty arises because accurate representation of a bundle necessitates well-represented embeddings of its all affiliated items. On the other side, the history-view representation is relatively straightforward to learn. This simplicity arises because we encapsulate each bundle's historical characteristics into a single embedding rather than understanding the intricate composition of the bundle. 
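To summarise Sections 3.2-3.3 in code, the sketch below propagates embeddings over a bipartite graph, aggregates layers with the \(1/(k+1)\) weights of Eqs. (2) and (4), and blends the two view scores as in Eq. (5). It is a dense-matrix illustration with hypothetical function names and shapes, not the authors' implementation (which would use sparse propagation as in LightGCN); the static weight \(\gamma_{b}\) used here is exactly the quantity that the curriculum schedule introduced next makes epoch-dependent.

```python
import torch

def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    # Edge weights 1 / sqrt(|N_row| * |N_col|) for a 0/1 bipartite matrix.
    deg_r = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    deg_c = adj.sum(dim=0, keepdim=True).clamp(min=1.0)
    return adj / (deg_r.sqrt() * deg_c.sqrt())

def propagate(adj_norm: torch.Tensor, e_rows: torch.Tensor,
              e_cols: torch.Tensor, K: int = 2):
    # LightGCN-style propagation (Eqs. 1 and 3) followed by the
    # 1/(k+1)-weighted aggregation over layers 0..K (Eqs. 2 and 4).
    out_rows, out_cols = e_rows.clone(), e_cols.clone()
    for k in range(1, K + 1):
        e_rows, e_cols = adj_norm @ e_cols, adj_norm.t() @ e_rows
        out_rows = out_rows + e_rows / (k + 1)
        out_cols = out_cols + e_cols / (k + 1)
    return out_rows, out_cols

def coalesced_scores(h_u, h_b, a_u, a_b, gamma_b):
    # Eq. (5): per-bundle blend of history- and affiliation-view scores;
    # gamma_b has shape (n_bundles,) and broadcasts over users.
    return gamma_b * (h_u @ h_b.t()) + (1.0 - gamma_b) * (a_u @ a_b.t())

# Usage outline: ub (users x bundles), ui (users x items) and
# bi (bundles x items) are 0/1 matrices; e_* are layer-0 embeddings.
# h_u, h_b = propagate(normalize_adj(ub), e_u_hist, e_b_hist)   # history view
# a_u, a_i = propagate(normalize_adj(ui), e_u_aff, e_i_aff)     # affiliation view
# a_b = (bi @ a_i) / bi.sum(dim=1, keepdim=True).clamp(min=1)   # mean pooling
# scores = coalesced_scores(h_u, h_b, a_u, a_b, gamma_b)
```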
Hence, we modify Equation (5) by exploiting a curriculum learning approach that focuses initially on training history-view representations, and gradually shifts the focus to the affiliation-view representations as follows: \[\hat{y}_{ub}^{(t)}=\gamma_{b}^{(t)}h_{ub}+(1-\gamma_{b}^{(t)})a_{ub}, \tag{6}\] where \(\hat{y}_{ub}^{(t)}\in\mathbb{R}\) is the estimated relationship score between user \(u\) and bundle \(b\) at epoch \(t\). \(\gamma_{b}^{(t)}\in\mathbb{R}\) is defined as \(\gamma_{b}^{(t)}=\tanh\left(\frac{n_{b}}{\tau^{(t)}}\right)\), where \(n_{b}\) is the number of interactions of bundle \(b\), and \(\tau^{(t)}>0\) is the temperature at epoch \(t\). Note that \(\gamma_{b}^{(t)}\) lies within the interval \([0,1]\) because \(\frac{n_{b}}{\tau^{(t)}}\geq 0\). Then, we incrementally raise the temperature \(\tau^{(t)}\) up to the maximum temperature as follows: \[\tau^{(t)}=\epsilon^{t/T},\quad t:0\to T, \tag{7}\] where \(t,T\in\mathbb{R}\) are the current and the maximum epochs of the training process, and \(\epsilon>1\) is the hyperparameter of the maximum temperature. In the initial epochs of training, \(\gamma_{b}^{(t)}\) is large since \(t\) is small. As a result, the score \(\hat{y}_{ub}^{(t)}\) relies more heavily on \(h_{ub}\) than \(a_{ub}\). However, as the training progresses and \(t\) increases, \(\gamma_{b}^{(t)}\) diminishes, shifting the emphasis from \(h_{ub}\) to \(a_{ub}\). This heating mechanism is applied to all bundles regardless of their popularity. Furthermore, we show in Lemmas 3.3 and 3.4 that Equation (6) still satisfies the two desired properties. **Lemma 3.3**.: _Equation (6) satisfies Property 1._ Proof.: \(\frac{\partial\hat{y}_{ub}^{(t)}}{\partial h_{ub}}=\tanh\left(\frac{n_{b}}{\tau^{(t)}}\right)\). Thus, \(\frac{\partial\hat{y}_{ub}^{(t)}}{\partial h_{ub}}<\frac{\partial\hat{y}_{ub^{\prime}}^{(t)}}{\partial h_{ub^{\prime}}}\) if \(n_{b}<n_{b^{\prime}}\) because \(\tau^{(t)}\) is the same for all bundles at epoch \(t\) and \(\tanh(\cdot)\) is an increasing function. **Lemma 3.4**.: _Equation (6) satisfies Property 2._ Proof.: \(\frac{\partial\hat{y}_{ub}^{(t)}}{\partial a_{ub}}=1-\tanh\left(\frac{n_{b}}{\tau^{(t)}}\right)\). Thus, \(\frac{\partial\hat{y}_{ub}^{(t)}}{\partial a_{ub}}>\frac{\partial\hat{y}_{ub^{\prime}}^{(t)}}{\partial a_{ub^{\prime}}}\) if \(n_{b}<n_{b^{\prime}}\) because \(\tau^{(t)}\) is the same for all bundles at epoch \(t\) and \(1-\tanh(\cdot)\) is a decreasing function. ### Representation Alignment and Uniformity While the history-view and affiliation-view are crafted to capture distinct representations, aligning the two views is essential, especially when predicting future interactions of cold bundles solely based on affiliation-view representations. To achieve this, we exploit a contrastive learning-based approach that reconciles the two views. Specifically, we use the alignment and uniformity loss (Srivastava et al., 2015) as a regularization for the representations of the two views.
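Before the alignment and uniformity regularizers are written out below, the schedule of Eqs. (6)-(7) can be sanity-checked numerically. The sketch assumes the notation above (weight \(\gamma_{b}^{(t)}=\tanh(n_{b}/\tau^{(t)})\), temperature \(\tau^{(t)}=\epsilon^{t/T}\)) and uses \(\epsilon=10^{4}\), the value the authors adopt for all datasets in Section 4; the function names are illustrative.

```python
import torch

def temperature(t: int, T: int, eps: float = 1e4) -> float:
    # Eq. (7): tau^(t) = eps ** (t / T), rising from 1 to eps over training.
    return eps ** (t / T)

def gamma_t(n_b: torch.Tensor, t: int, T: int, eps: float = 1e4) -> torch.Tensor:
    # Eq. (6) weight: gamma_b^(t) = tanh(n_b / tau^(t)), always in [0, 1].
    return torch.tanh(n_b.float() / temperature(t, T, eps))

# Example: a bundle with 5 interactions vs. one with 500, over T = 100 epochs.
n_b = torch.tensor([5.0, 500.0])
for t in (0, 50, 100):
    print(t, gamma_t(n_b, t, T=100).tolist())
# At t = 0 both weights are ~1 (history-view dominated); by the end both
# have shifted toward the affiliation-view, the unpopular bundle far more so.
```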
We firstly \(l_{2}\)-normalize the embeddings of the two views as follows: \[\hat{\mathbf{h}}_{u}=\frac{\mathbf{h}_{u}}{\|\mathbf{h}_{u}\|_{2}},\hat{ \mathbf{a}}_{u}=\frac{\mathbf{a}_{u}}{\|\mathbf{a}_{u}\|_{2}},\hat{\mathbf{h}} _{b}=\frac{\mathbf{h}_{b}}{\|\mathbf{h}_{b}\|_{2}},\hat{\mathbf{a}}_{b}=\frac{ \mathbf{a}_{b}}{\|\mathbf{a}_{b}\|_{2}}, \tag{8}\] where \(\mathbf{h}_{u},\mathbf{h}_{b}\in\mathbb{R}^{d}\) are history-view representations of user \(u\) and bundle \(b\), respectively; \(\mathbf{a}_{u},\mathbf{a}_{b}\in\mathbb{R}^{d}\) are affiliation-view representations of user \(u\) and bundle \(b\), respectively. Then, we define an alignment loss as follows: \[l_{align}=\mathop{\mathbb{E}}_{u\text{-}puser}\|\hat{\mathbf{h}}_{u}-\hat{ \mathbf{a}}_{u}\|_{2}^{2}+\mathop{\mathbb{E}}_{b\text{-}pbundle}\|\hat{ \mathbf{h}}_{b}-\hat{\mathbf{a}}_{b}\|_{2}^{2}, \tag{9}\] where \(p_{user}\) and \(p_{bundle}\) are the distributions of users and bundles, respectively. The alignment loss makes the embeddings of the two views close to each other for each user and bundle. We also define a uniformity loss as follows: \[l_{uniform} =\log\mathop{\mathbb{E}}_{u\text{-}puser}e^{-2\|\hat{\mathbf{h}}_{u }-\hat{\mathbf{h}}_{u^{\prime}}\|_{2}^{2}}\] \[+\log\mathop{\mathbb{E}}_{u\text{-}puser}e^{-2\|\hat{\mathbf{h}}_{ u}-\hat{\mathbf{a}}_{u^{\prime}}\|_{2}^{2}}\] \[+\log\mathop{\mathbb{E}}_{b\text{,}b^{\prime}}e^{-2\|\hat{\mathbf{ h}}_{b}-\hat{\mathbf{h}}_{b^{\prime}}\|_{2}^{2}}\] \[+\log\mathop{\mathbb{E}}_{b\text{,}b^{\prime}}e^{-2\|\hat{ \mathbf{h}}_{b^{\prime}}-\hat{\mathbf{a}}_{b^{\prime}}\|_{2}^{2}}, \tag{10}\] where \(u^{\prime}\) and \(b^{\prime}\) denote a user and a bundle distinct from \(u\) and \(b\), respectively. The uniformity loss ensures distinct representations for different users (or bundles) by scattering them across the space. Finally, we define the contrastive loss for the two views as follows: \[\mathcal{L}_{AU}=l_{align}+l_{uniform}. \tag{11}\] ### Objective Function and Training To effectively learn the user-bundle relationship, we utilize Bayesian Personalize Ranking (BPR) loss (Kang et al., 2017), which is the most widely used loss owing to its powerfulness, as follows: \[\mathcal{L}_{BPR}^{(t)}=\mathop{\mathbb{E}}_{(u,b^{\prime},b^{\prime})\text{-}p _{data}}-\ln\sigma(\hat{y}_{ub^{\prime}}^{(t)}-\hat{y}_{ub^{\prime}}^{(t)}), \tag{12}\] where \(p_{data}\) is the data distribution of user-bundle interactions, with \(u\) denoting a user, \(b^{+}\) indicating a positive bundle, and \(b^{-}\) representing a negative bundle. We define the final objective function as follows: \[\mathcal{L}^{(t)}=\mathcal{L}_{BPR}^{(t)}+\lambda_{1}\mathcal{L}_{AU}+\lambda_{2} \|\mathbf{\phi}\|_{2}, \tag{13}\] where \(\lambda_{1},\lambda_{2}\in\mathbb{R}\) are balancing hyperparameters for the terms, and \(\Theta\) denotes trainable parameters of CoHeat. For the distributions \(p_{user}\) and \(p_{bundle}\), we use in-batch sampling which selects samples from the training batch of \(p_{data}\) rather than the entire dataset. This approach has empirically demonstrated to mitigate the training bias in prior studies (Srivastava et al., 2015; Wang et al., 2016). All the parameters are optimized in an end-to-end manner through the optimization. We also adopt an edge dropout (Kang et al., 2017; Wang et al., 2016) while training to enhance the performance robustness. ## 4. Experiments In this section, we perform experiments to answer the following questions. 1. 
**Comparison with cold-start methods.** Does CoHeat show superior performance in comparison to other cold-start methods in bundle recommendation? 2. **Comparison with warm-start methods.** Does CoHeat show similar performance in warm-start bundle recommendation compared with baselines, although CoHeat is a cold-start bundle recommendation method? 3. **Ablation study.** How do the main ideas of CoHeat affect the performance? 4. **Effect of the maximum temperature.** How does the maximum temperature \(\epsilon\), the critical hyperparameter, affect the performance of CoHeat? ### Experimental Setup **Datasets.** We use three real-world bundle recommendation datasets as summarized in Table 1. Youshu (2018) comprises bundles of books sourced from a book review site; NetEase (2018) features bundles of music tracks from a cloud music service; iFashion (2018) consists of bundles of fashion items from an outfit sales platform. **Baseline cold-start methods.** We compare CoHeat with existing cold-start item recommendation methods because they can be easily adapted for bundle recommendation by considering bundle-item affiliations as content information. DropoutNet (Srivastava et al., 2015) is a robustness-based method with a dropout operation. CB2CF (Beng et al., 2015) and Heater (2018) are constraint-based methods that regularize the alignment. GAR (Chen et al., 2017) is a generative method with two variants GAR-CF and GAR-GNN. CVAR (Xu et al., 2018) is another generative method with a conditional decoder. CLCRec (Zhu et al., 2018) and CCFCRec (Zhu et al., 2018) are contrastive learning-based methods. We use bundle-item multi-hot vectors as their content information. **Baseline warm-start methods.** We also compare CoHeat with previous warm-start recommendation methods. MFBPR (Zhu et al., 2018) and LightGCN (Hu et al., 2019) are item recommendation methods with the modelings of matrix factorization and graph learning, respectively. SGL (Xu et al., 2018), SimGCL (Xu et al., 2018), and LightGCL (Chen et al., 2017) are the improved methods of item recommendation with contrastive learning approaches. DAM (Dai et al., 2018) is a bundle recommendation method with the modeling of matrix factorization. BundleNet (Hu et al., 2019), BGCN (Chen et al., 2017; Chen et al., 2017), and CrossCBR (Chen et al., 2017) are other bundle recommendation methods with the modeling of graph learning. **Evaluation metrics.** We use Recall@\(k\) and nDCG@\(k\) metrics as in previous works (Hu et al., 2019; Wang et al., 2019). Recall@\(k\) measures the proportion of relevant items in the top-\(k\) list, while nDCG@\(k\) weighs items by their rank. We set \(k\) to 20. In tables, bold and underlined values indicate the best and second-best results, respectively. **Experimental process.** We conduct experiments in warm-start, cold-start, and all-bundle scenarios as in previous works (Wang et al., 2019). For the warm-start scenario, interactions are split into 7:1:2 subsets for training, validation, and testing. In the cold-start scenario, bundles are split in 7:1:2 ratio. In the all-bundle scenario, interactions are split in 7:1:2 ratio with a half for warm-start and the other half for cold-start bundles. We report the best Recall@20 and nDCG@20 within 100 epochs, averaged over three runs. **Hyperparameters.** We utilize the baselines with their official implementations and use their reported best hyperparameters. We implement CoHeat with PyTorch. We set the dimensionality \(d\) of node embeddings as 64. 
The other hyperparameters are grid-searched: the learning rate in {0.001, 0.0001, 0.00001}, \(\lambda_{1}\) in {0.1, 0.2, 0.5, 1.0}, \(\lambda_{2}\) in {0.00004, 0.0001, 0.0004, 0.001}, \(K\) in {1, 2}, and the maximum temperature in {10\({}^{1}\), 10\({}^{2}\), 10\({}^{3}\), 10\({}^{4}\), 10\({}^{5}\), 10\({}^{6}\)}. ### Comparison with Cold-start Methods (Q1) In Table 2, we compare CoHeat with baseline cold-start methods. The results show that CoHeat consistently surpasses the baselines across all datasets and settings, verifying its superiority. Notably, CoHeat achieves 193% higher nDCG@20 compared to CCFCRec, the best competitor, on the iFashion dataset in the all-bundle scenario. ### Comparison with Warm-start Methods (Q2) Table 3 compares CoHeat with baseline warm-start methods in the warm-start scenario. Even though CoHeat is primarily designed for cold-start bundle recommendation, it surpasses all the baselines in the warm-start scenario. This indicates CoHeat effectively learns representations from both history-view and affiliation-view by treating the extremely skewed distribution of user-bundle interactions. For the baselines, the performance improves when contrastive learning is used as exemplified in SGL, SimGCL, Light-GCL, and CrossCBR. Additionally, graph-based models such as LightGCN, SGL, SimGCL, LightGCL, BundleNet, BGCN, and CrossCBR excel over other non-graph-based models. In light of these observations, CoHeat strategically exploits a graph-based modeling approach and harnesses the power of contrastive learning. This makes CoHeat robustly achieve the highest performance across diverse scenarios. ### Ablation Study (Q3) Table 4 provides an ablation study that compares CoHeat with its three variants CoHeat-_PC_, CoHeat-_CH_, and CoHeat-_AU_. This study is conducted in the cold-start scenario, which is the primary focus of our work. In CoHeat-_PC_, we remove the influence of popularity-based coalescence by setting the value of \(\gamma_{b}^{(t)}\) in Equation (5) to a constant 0.5. For CoHeat-_CH_, we exploit an anti-curriculum learning strategy. The temperature in Equation (7) is defined as \(t:T\to 0\), initiating the learning process with the affiliation-view and gradually shifting the focus to the history-view. For CoHeat-_AU_, we omit \(\mathcal{L}_{AU}\) from Equation (13), thereby excluding the contrastive learning between the two views. As shown in the table, CoHeat consistently outperforms all the variants, which verifies all the main ideas help improve the performance. In particular, CoHeat-_PC_ shows a severe performance drop, justifying the importance of satisfying Properties 1 and 2 when addressing the extreme skewness inherent in cold-start bundle recommendation. ### Effect of the Maximum Temperature (Q4) The maximum temperature \(\epsilon\) in Equation (7) is the most influential hyperparameter of CoHeat since it directly affects both popularity-based coalescence and curriculum heating. Accordingly, we analyze the influence of \(\epsilon\) in cold-start scenario on real-world datasets, as depicted in Figure 4. As shown in the figure, CoHeat shows low performance for the extreme low temperature because the representations of affiliation-view are not sufficiently learned. For the extreme high temperature, the performance degrades because the speed of the curriculum is too fast to fully learn the representation of the two views.
As a result, we set \(\epsilon\) to \(10^{4}\) for all datasets since it shows the best performance. ## 5. Related Works **Bundle recommendation.** Our work focuses on the cold-start problem in bundle recommendation. Previous works can be categorized based on their modeling structures: matrix factorization-based models (Krizhevsky et al., 2014; He et al., 2015; He et al., 2016) and graph learning-based models (Krizhevsky et al., 2014; He et al., 2015; He et al., 2016; He et al., 2017). Such methods operate under the assumption that all bundles have historical interactions, which makes them ill-suited for tackling the cold-start problem. However, in real-world scenarios, new bundles are introduced daily, leading to an inherent cold-start challenge. Our work addresses this significant yet overlooked issue, recognizing its potential impact on the field. **Cold-start recommendation.** The cold-start problem, a long-standing challenge in recommender systems, focuses on recommending cold-start items that have yet to be interacted with users. Existing works are mainly divided into generative methods (Krizhevsky et al., 2014; He et al., 2015; He et al., 2016; He et al., 2017), dropout-based methods (Krizhevsky et al., 2014; He et al., 2015; He et al., 2016), meta-learning methods (He et al., 2016), and constraint-based methods (Krizhevsky et al., 2014; He et al., 2015; He et al., 2016; He et al., 2017). However, such prior works have not explicitly addressed the highly skewed distribution of interactions, a critical aspect in bundle recommendation. Thus, our work excels over these methods in cold-start bundle recommendation by effectively considering the skewed distribution during training. \begin{table} \begin{tabular}{l|c c|c c|c c} \hline \hline \multirow{3}{*}{**Model**} & \multicolumn{2}{c|}{**Youshu**} & \multicolumn{2}{c|}{**NetEase**} & \multicolumn{2}{c}{**iFashion**} \\ & Recall & nDCG & Recall & nDCG & Recall & nDCG \\ & @20 & @20 & @20 & @20 & @20 & @20 & @20 \\ \hline MFBPR (He et al., 2016) &.1959 &.1117 &.0355 &.0181 &.0752 &.0542 \\ LightGCN (He et al., 2016) &.2286 &.1344 &.0496 &.0254 &.0837 &.0612 \\ SGL (He et al., 2016) &.2568 &.1527 &.0687 &.0368 &.0933 &.0690 \\ SimGCL (He et al., 2016) &.2691 &.1593 &.0710 &.0377 &.0919 &.0677 \\ LightGCL (He et al., 2016) &.2712 &.1607 &.0722 &.0388 &.0943 &.0686 \\ \hline DAM (He et al., 2016) &.2082 &.1198 &.0411 &.0210 &.0629 &.0450 \\ BundlesNet (He et al., 2016) &.1895 &.1125 &.0391 &.0201 &.0626 &.0447 \\ BGCN (Krizhevsky et al., 2014) &.2347 &.1345 &.0491 &.0258 &.0733 &.0531 \\ CrossCBR (He et al., 2016) &.2776 &.1641 &.0791 &.0433 &.1133 &.0875 \\ \hline **CoHeat (ours)** & **.2804** & **.1646** & **.0847** & **.0455** & **.1156** & **.0876** \\ \hline \hline \end{tabular} \end{table} Table 3. Performance comparison of CoHeat and baseline warm-start methods on three real-world datasets. Figure 4. Effect of the maximum temperature \(\epsilon\). 
Table 4. Ablation study of CoHeat in cold-start scenario which is our main target. ## 6. Conclusion We propose CoHeat, an accurate method for cold-start bundle recommendation. CoHeat strategically leverages history and affiliation views to handle the extremely skewed distribution of bundle interactions. By emphasizing the affiliation-view for less popular bundles, CoHeat effectively captures richer information than the often sparse history-view. The incorporation of curriculum learning further enhances the learning process, starting with the simpler history-view embeddings and gradually transitioning to the more intricate affiliation-view embeddings. In addition, the contrastive learning of CoHeat bolsters the learning of representations of the two views. Extensive experiments show that CoHeat provides the state-of-the-art performance in cold-start bundle recommendation, achieving up to 193% higher nDCG@20 compared to the best competitor. ###### Acknowledgements. This work was supported by Jung-Hun Foundation. This work was also supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) [No.2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University)] and [No.2021-0-02068, Artificial Intelligence Innovation Hub (Artificial Intelligence Institute, Seoul National University)]. The Institute of Engineering Research and ICT at Seoul National University provided research facilities for this work. U Kang is the corresponding author.
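For completeness, here is a condensed sketch of the training objective of Eqs. (9)-(13) described in Sections 3.5-3.6 above. The uniformity terms are written within each view for users and bundles separately, in the spirit of the alignment-and-uniformity loss the paper builds on; the function names, the squared-norm form of the weight penalty, and the default \(\lambda\) values (picked from the search grid of Section 4.1) are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def align_loss(h, a):
    # Eq. (9): mean squared distance between the two views (l2-normalised).
    h, a = F.normalize(h, dim=1), F.normalize(a, dim=1)
    return ((h - a) ** 2).sum(dim=1).mean()

def uniform_loss(x, t=2.0):
    # Uniformity term: log E[exp(-t * ||x_i - x_j||^2)] over in-batch pairs.
    x = F.normalize(x, dim=1)
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

def bpr_loss(pos_scores, neg_scores):
    # Eq. (12): Bayesian Personalised Ranking over (u, b+, b-) triples.
    return -F.logsigmoid(pos_scores - neg_scores).mean()

def total_loss(h_u, a_u, h_b, a_b, pos, neg, params,
               lambda1=0.5, lambda2=1e-4):
    # Eq. (13): BPR + lambda1 * (alignment + uniformity) + weight penalty.
    l_au = (align_loss(h_u, a_u) + align_loss(h_b, a_b)
            + uniform_loss(h_u) + uniform_loss(a_u)
            + uniform_loss(h_b) + uniform_loss(a_b))
    l2 = sum(p.pow(2).sum() for p in params)
    return bpr_loss(pos, neg) + lambda1 * l_au + lambda2 * l2
```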
2305.16716
A spectral-timing study of the inner flow geometry in MAXI J1535--571 with $Insight$-HXMT and NICER
We have performed a spectral-timing analysis on the black hole X-ray binary MAXI J1535--571 during its 2017 outburst, with the aim of exploring the evolution of the inner accretion flow geometry. X-ray reverberation lags are observed in the hard-intermediate state (HIMS) and soft-intermediate state (SIMS) of the outburst. During the HIMS, the characteristic frequency of the reverberation lags $\nu_0$ (the frequency at which the soft lag turns to zero in the lag-frequency spectra) increases when the spectrum softens. This reflects a reduction of the spatial distance between the corona and accretion disc, when assuming the measured time lags are associated with the light travel time. We also find a strong correlation between $\nu_0$ and type-C Quasi Periodic Oscillation (QPO) centroid frequency $\nu_{QPO}$, which can be well explained by the Lense-Thirring (L-T) precession model under a truncated disk geometry. Despite the degeneracy in the spectral modellings, our results suggest that the accretion disc is largely truncated in the low hard state (LHS), and moves inward as the spectrum softens. Combine the spectral modelling results with the $\nu_0$ - $\nu_{QPO}$ evolution, we are inclined to believe that this source probably have a truncated disk geometry in the hard state.
Wei Yu, Qing-Cui Bu, He-Xin Liu, Yue Huang, Liang Zhang, Zi-Xu Yang, Jin-Lu Qu, Shu Zhang, Li-Ming Song, Shuang-Nan Zhang, Shu-Mei Jia, Xiang Ma, Lian Tao, Ming-Yu Ge, Qing-Zhong Liu, Jing-Zhi Yan, Xue-Lei Cao, Zhi Chang, Li Chen, Yong Chen, Yu-Peng Chen, Guo-Qiang Ding, Ju Guan, Jing Jin, Ling-Da Kong, Bing Li, Cheng-Kui Li, Ti-Pei Li, Xiao-Bo Li, Jin-Yuan Liao, Bai-Sheng Liu, Cong-Zhan Liu, Fang-Jun Lu, Rui-Can Ma, Jian-Yin Nie, Xiao-Qin Ren, Na Sai, Ying Tan, You-Li Tuo, Ling-Jun Wang, Peng-Ju Wang, Bai-Yang Wu, Guang-Cheng Xiao, Qian-Qing Yin, Yuan You, Juan Zhang, Peng Zhang, Wei Zhang, Hai-Sheng Zhao, Shi-Jie Zheng, Deng-Ke Zhou
2023-05-26T08:03:35Z
http://arxiv.org/abs/2305.16716v2
A spectral-timing study of the inner flow geometry in MAXI J1535-571 with \(Insight\)-HXMT and NICER ###### Abstract We have performed a spectral-timing analysis on the black hole X-ray binary MAXI J1535-571 during its 2017 outburst, with the aim of exploring the evolution of the inner accretion flow geometry. X-ray reverberation lags are observed in the hard-intermediate state (HIMS) and soft-intermediate state (SIMS) of the outburst. During the HIMS, the characteristic frequency of the reverberation lags \(\nu_{0}\) (the frequency at which the soft lag turns to zero in the lag-frequency spectra) increases when the spectrum softens. This reflects a reduction of the spatial distance between the corona and accretion disc, when assuming the measured time lags are associated with the light travel time. We also find a strong correlation between \(\nu_{0}\) and type-C Quasi Periodic Oscillation (QPO) centroid frequency \(\nu_{QPO}\), which can be well explained by the Lense-Thirring (L-T) precession model under a truncated disk geometry. Despite the degeneracy in the spectral modellings, our results suggest that the accretion disc is largely truncated in the low hard state (LHS), and moves inward as the spectrum softens. Combine the spectral modelling results with the \(\nu_{0}\) - \(\nu_{QPO}\) evolution, we are inclined to believe that this source probably have a truncated disk geometry in the hard state. X-rays: binaries - X-rays: individual: MAXI J1535-571 - Accretion, accretion disks + Footnote †: journal: Accepted for publication in ApJ ## 1 Introduction Black hole low mass X-ray binaries (BH-LMXB) are mostly transient systems, in which a black hole accretes matters from its companion star via an accretion disc (Shakura & Sunyaev, 1973). Black hole transients (BHTs) spend most of their lifetimes in a quiescent state, and show occasional outbursts that last from weeks to months. The outbursts could be triggered due to the instability of the system (Cannizzo et al., 1995; Lasota, 2001). During an outburst, the source luminosity can reach the Eddington limit, while both the energy spectral properties and fast variability change dramatically, allowing for the classification of different spectral states (Belloni, 2010). A unified pattern of X-ray spectral evolution of BHTs is found in most systems during an outburst, which is known as the hardness-intensity diagram (HID). The system transitions from the quiescent state to the low hard state (LHS) at the initial of the outburst. The X-ray emission during the LHS is dominated by non-thermal coronal photons, which are thought to arise from the inverse Compton scattering between the soft disk photons and the hot electrons in the corona. The X-ray spectrum in this state can be described by a phenomenological power-law with a high energy cutoff (Zdziarski & Gierlinski, 2004; Remillard & McClintock, 2006; Done et al., 2007). Strong band-limited noise and low-frequency quasi-periodic oscillations (LFQPOs) are detected in the power density spectrum (PDS). As the luminosity gradually increases, the source will evolve into the high soft state (HSS) where the spectrum is dominated by the thermal disk emission. The X-ray spectrum in the HSS can be well described by a multi-temperature disk-blackbody component (Remillard & McClintock, 2006; You et al., 2016), while a power-law shaped red noise is observed in the corresponding PDS. 
The transitions between the LHS and HSS are named as the intermediate states, which are further divided into the hard intermediate states (HIMS) and the soft intermediate states (SIMS) based on the X-ray timing properties (Homan & Belloni, 2005; Belloni et al., 2005). Typically, such transitions are often found to be accompanied by the changes in the types of LFQPOs, with type-C QPOs appearing mainly in the HIMS, whereas type-B and type-A QPOs appear only in the SIMS. Despite the massive studies on BHTs, the evolution of the accretion disk/corona geometry is still under debate (Kara et al., 2019; You et al., 2021). In the soft state, it is widely accepted that a geometrically thin and optically thick accretion disk has reached the innermost stable circular orbit (ISCO) of a BH. However, the disk/corona geometry in the LHS and HIMS is still an open question. The truncated disk geometry is most often proposed for the LHS and HIMS. Within the truncated disk model, the disk is assumed to be truncated at a radius that is larger than the ISCO and interior to which is the Comptonizing corona during the hard and intermediate states (Esin et al., 1997). The transition from the hard state to the soft state may correspond to the decrease of the truncation radius. The truncated disc model has succeeded in explaining plenty of observed X-ray spectral and timing properties from BHXRBs, such as the hard-to-soft spectral transitions and the decreasing of the characteristic variability time-scale (see (Done et al., 2007), and references therein). Different from the truncated disk model, the lamppost model (Martocchia & Matt, 1996) assumes that a compact hard X-ray corona locates on the axis of the accretion disc. Under this scenario, the evolution of the source is associated with the vertical expansion or contract of the corona (Kara et al., 2019; Buisson et al., 2019; Wang et al., 2021). Another open question is the dynamical/radiative origin of LFQPOs observed in the LHS and HIMS. Several models have been proposed to explain the dynamical origin of LFQPOs considering either the instability in the accretion flow or a geometric effect of the accre Figure 1: The hardness-intensity diagrams (HIDs) and hardness-rms diagrams (HRDs) of MAXI J1535-571. Left panel: _NICER_ HID&HRD. Intensity is the count rate in 0.2-12.0 keV. Hardness is defined as the ratio of count rates between 4.0-10.0 keV and 2.0-4.0 keV. Right panel: _Insight_-HXMT HID&HRD. Intensity is the count rate in 1-10.0 keV from LE, while the hardness is defined as 4.0-10.0 keV to 2.0-4.0 keV counts ratio. Fractional averaged rms corresponds to the frequency range 0.01-64 Hz to the full energy range. Orange dots, blue squares and navy triangles represent the SIMS, HIMS, LHS, respectively. tion flow, among which the most promising model is the Lense-Thirring (L-T) precession model that assumes that LFQPOs are generated by the relativistic precession of an inner hot accretion flow (Ingram et al., 2009; You et al., 2018, 2020) or a small-scale jet (Ma et al., 2021). Unless otherwise noted, the L-T precession mentioned in the rest of this paper all refers to the former one. In the L-T precession model, the precession frequency is set by parameters including the inner radius of a truncated accretion disk. As the source spectra softens, the inner disk radius decreases and QPO frequency increases. For the radiative origin of LFQPOs, Ingram and van der Klis (2015) reconstructed the QPO-phase dependent waveforms considering the rms and lags of the QPOs. 
This gave a description of the iron line energy shift at different QPO phases in GRS 1915+105. A time-dependent Comptonization model, vKompth, was proposed by Karpouzas et al. (2020) and Bellavita et al. (2022), to explain the energy dependent rms and lag spectra of the QPOs and measure the corona geometry (Karpouzas et al., 2021; Mendez et al., 2022). From the measurements of this model, the corona geometry of BHTs can be slab-like or jet-like and connected to the jet behavior during the HIMS-to-SIMS transition in MAXI J1535\(-\)571 (Zhang et al., 2022, 2023; Rawat et al., 2023), MAXI J1348-630 (Garcia et al., 2021), and GX 339\(-\)4 (Peirano et al., 2023). In general, one way to study the geometry of the inner accretion flow is through the reflection spectrum. The hard photons from the corona could irradiate the accretion disk and are further reprocessed to produce a reflection component on the spectrum, i.e., the relativistic broadened Fe-K\(\alpha\) emission line and the Compton hump component (Fabian et al., 2000; Miller, 2007). Plenty of efforts have been made by fitting the time-averaged reflection spectrum to estimate the BH spin, accretion disk inclination, and other characteristics. Another way to study the geometry of the inner accretion flow is to analyze the reverberation lags. Since the reflected disk photons have to travel a longer distance to the observer than the direct coronal photons, there will be a time delay between the two components, known as reverberation lags (e.g., Uttley et al., 2014). By applying the Fourier timing method, we are able to measure the time lags between these two components (e.g., Nowak et al., 1999; Uttley et al., 2014). Subsequently, through the measured reverberation lags, we can estimate the distance between the illuminated and reflected regions (De Marco et al., 2015, 2021). Combining the spectral analysis and reverberation lags study together gives us a better understanding of the accretion flow geometry. X-ray reverberation lags are usually observed in radio-quiet active galactic nuclei (AGN) but rarely in BHXRBs. The time-scale of the lags scales linearly with the black hole mass. Since the mass of BHXRBs is much smaller than that of AGNs, the light travel time corresponding to one \(R_{\rm g}\) is very short. The signal-to-noise ratio will be significantly reduced due to the small number of photons detected during the light travel time. Thus, reverberation lag detection is difficult in BHXRBs. The first detection of thermal X-ray reverberation lags in BHXRBs is in GX 339-4 (Uttley et al., 2011). Further work by De Marco et al. (2015) showed that the reverberation lag of GX 339-4 decreases with increasing source luminosity and disk-fraction, which possibly supports a truncated disk geometry. To date, reverberation lags have been detected in several BHXRBs (De Marco and Ponti, 2016; De Marco et al., 2017; Kara et al., 2019; De Marco et al., 2021; Wang et al., 2020, 2021, 2022). MAXI J1535-571 was discovered as a new uncatalogued hard X-ray transient located near the Galactic plane by Monitor of All-Sky X-ray Image (_MAXI_) on September 02, 2017 (Negoro et al., 2017). Follow-up observations were made by _Swift_/BAT, _INTEGRAL_, _Insight_-HXMT, _NuSTAR_, and _NICER_. Due to its behavior observed in X-Ray and Radio bands, MAXI J1535-571 is classified as a bright BHXRB candidate (Negoro et al., 2017). LFQPOs have been detected by _Insight_-HXMT and _NICER_ in the LHS, HIMS, and SIMS (Huang et al., 2018; Stiele and Kong, 2018). 
The spectral analysis of the _NuSTAR_ observations gives a black hole spin a \(>0.84\) and an inclination angle i \(=57^{+1^{\circ}}_{-2}\)(Xu et al., 2018). Chauhan et al. (2019) estimated a distance of \(4.0\pm 0.2\) kpc for the source by studying the HI absorption from gas clouds along the line-of-sight. In this paper, we study the accretion flow geometry of MAXI J1535-571 by applying two independent methods: broadband energy spectrum fitting and reverberation lags analysis. The data analyzed in this paper are taken from _Insight_-HXMT and _NICER_, covering both the hard and intermediate states during the 2017 outburst. Considering _NICER_'s high time resolution and large area in the soft energy band (0.2-10 keV), we mainly use _NICER_ data for timing analysis. The _NICER_ observations also cover the entire transition states. On the other hand, considering _NICER_'s narrower energy band and its calibration uncertainty below 3 keV (Miller et al., 2018), we think that _Insight_-HXMT has more advantages in broad band spectral fitting, especially above 20 keV. Therefore, we choose _Insight_-HXMT data for energy spectrum fitting. This paper is organized as follows. Section 2 describes the observations and data reduction. Section 3 provides the time lag analysis with _NICER_ data. The details of broad energy band spectrum fitting with _Insight_-HXMT data are described in Section 4. Discussions and conclusions are presented in Section 5. ## 2 Data Reduction The data set analyzed in this paper includes 29 _NICER_ and 28 _Insight_-HXMT observations carried out between September 12th and October 11th, 2017. The selected _NICER_ ObsIDs are from 1050360104 to 1130360114 and _Insight_-HXMT ObsIDs are from P0114535001 to P0114535009. Table A1 and Table A2 list the log of the observations. The _NICER_ and _Insight_-HXMT hardness-intensity diagrams (HIDs) and hardness-rms diagrams (HRDs) are shown in Figure 1. The accretion state classifications of _Insight_-HXMT observations is taken from Huang et al. (2018). It can be seen from the HRD that the SIMS locates in the lower left of the diagram due to low variability and hardness. However, _Insight_-HXMT only covers observations before MJD 58020. Through _NICER_ observations, we can see that after MJD 58027 the data points return to the right top of the HRD. Meanwhile, according to the timing analysis of Stiele and Kong (2018), the QPO type changes from A to C, and the associated noise component changes from red noise to flat-top noise, all of which indicate that the source has returned to the HIMS (Belloni, 2010). As one of the signs of state transition, type-B QPOs were detected by _Insight_-HXMT at MJD 58016 (Huang et al., 2018), but were missed by _NICER_ due to the lack of observations (Stevens et al., 2018; Stiele and Kong, 2018). The _NICER_ data are processed with the NICERDAS tools in HEASOFT v.6.27.2 and CALDB v.20200722. The data are screened using the standard calibration tool NICERCAL and screening tool NIMAKETIME. We select events that are caught less than 54\({}^{{}^{\prime\prime}}\) offset in pointing, more than 40\({}^{\circ}\) away from the bright Earth limb, more than 30\({}^{\circ}\) away from the dark Earth limb, outside the South Atlantic Anomaly (SAA), not flagged as "overshoot" or "undershoot" resets (EVENT FLAGS=bxxxx00), and triggered the slow signal chain (EVENT FLAGS = bx1x000). A "trumpet" filter is also applied to eliminate known background events (Bogdanov, 2019). 
The Hard X-ray Modulation Telescope, known as _Insight_-HXMT (Zhang et al., 2014), consists of three groups of instruments: the high-energy X-ray telescope (HE, 20-250 keV, 5,100 cm\({}^{2}\)), the medium-energy X-ray telescope (ME, 5-30 keV, 952 cm\({}^{2}\)), and the low-energy X-ray telescope (LE, 1-15 keV, 384 cm\({}^{2}\)). HE contains 18 cylindrical NaI(Tl)/CsI(Na) phoswich detectors; ME is composed of 1728 Si-PIN detectors; and LE uses Swept Charge Device (SCD). There are three types of Field of View (FoV): 1\({}^{\circ}\)\(\times\) 6\({}^{\circ}\) (i.e., the small FoV), 6\({}^{\circ}\)\(\times\) 6\({}^{\circ}\) (i.e., the large FoV), and the blind FoV used to estimate the particle induced instrumental background. More details about _Insight_-HXMT can be found in Zhang et al. (2020). The _Insight_-HXMT data are processed with _Insight_-HXMT Data Analysis Software (HXMTDAS) version 2.03. The data are filtered using the criteria recommended by the _Insight_-HXMT team: the pointing offset angle is smaller than 0.04\({}^{\circ}\); the elevation angle is larger than 10\({}^{\circ}\); the value of the geomagnetic cutoff rigidity is larger than 8; data are used at least 300 s before and after the South Atlantic Anomaly (SAA) passage. The energy bands chosen for energy spectral analysis are 2-10 keV (LE), 10-27 keV (ME), and 27-80 keV (HE). The XSPEC v12.11.0c software package (Arnaud, 1996) is used to perform spectral fitting. All parameter uncertainties are estimated at the 90% confidence level. For _NICER_ data, we generate the cross-spectrum using standard techniques (Nowak et al., 1999; Uttley et al., 2014) to compute the X-ray lags as a function of Fourier-frequency. The energy bands we select to compute the spectra are 0.5-2.5 keV (soft band) and 3-5 keV (hard band), in which the soft band is dominated by the reflected disk photons and the hard band is dominated by the coronal photons. To avoid interference from the iron K lines and iron edge, we select the energy bands below 5 keV. A positive (hard) lag means that the hard photons lag behind the soft ones. It is worth mentioning that 0.5-1 keV is often used as the soft band in the study of the reverberation lags, as the soft excess is more significant below 1 keV (Kara et al., 2019; De Marco et al., 2021). However, MAXI J1535-571 has a relatively higher interstellar absorption of \(N_{\rm H}=(3-8)\times 10^{22}cm^{-2}\)(Tao et al., 2018; Xu et al., 2018; Kong et al., 2020). The high absorption significantly diminishes the photons at soft X-rays (\(<1\) keV), which consequently leads the quality of the lag spectra too poor to confidently measure the reverberation lag and its characteristic frequency in the soft band of 0.5-1 keV. Extending the soft energy band to 0.5-2.5 keV can largely improve the signal-to-noise ratio, while according to Uttley et al. (2014), the soft excess generally exists below 3 keV. Moreover, since photons above 1 keV are less affected by absorption, the soft excess within 1-2.5 keV in MAXI J1535-571 could be more significant than within 0.5-1 keV. ## 3 Timing Analysis A number of examples of lag-frequency spectra are shown in Figure 2. The evolution of lag-frequency spec tra is not significant in the SIMS. Due to the low source variability level, the observations from SIMS are combined together. At low frequencies, we observed hard (positive) X-ray lags in all the analysed observations. These lags are commonly observed in BHXRBs (e.g., Miyamoto et al., 1988; Nowak et al., 1999; Pottschmidt et al., 2000). 
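The lag-frequency spectra discussed in this section are computed with the standard Fourier cross-spectrum technique described above (Nowak et al., 1999; Uttley et al., 2014). The following Python sketch illustrates the basic calculation for two simultaneous, evenly sampled light curves; the segment length, the logarithmic rebinning scheme, and the function name are illustrative assumptions rather than the exact pipeline used here.

```python
import numpy as np

def lag_frequency_spectrum(soft, hard, dt, seg_len, n_bins=20):
    """Frequency-dependent time lag between a soft and a hard light curve.

    With this sign convention a positive lag means the hard photons lag
    behind the soft ones (a 'hard' lag), as stated in the text.
    """
    soft, hard = np.asarray(soft, float), np.asarray(hard, float)
    n_seg = min(soft.size, hard.size) // seg_len
    freq = np.fft.rfftfreq(seg_len, d=dt)[1:]          # drop the zero frequency
    cross = np.zeros(freq.size, dtype=complex)
    for k in range(n_seg):
        s = soft[k * seg_len:(k + 1) * seg_len]
        h = hard[k * seg_len:(k + 1) * seg_len]
        S = np.fft.rfft(s - s.mean())[1:]
        H = np.fft.rfft(h - h.mean())[1:]
        cross += np.conj(H) * S                        # segment-averaged cross-spectrum
    cross /= n_seg
    # geometric (logarithmic) rebinning in frequency
    edges = np.logspace(np.log10(freq[0]), np.log10(freq[-1]), n_bins + 1)
    nu, lag = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (freq >= lo) & (freq < hi)
        if sel.any():
            c, f = cross[sel].mean(), freq[sel].mean()
            nu.append(f)
            lag.append(np.angle(c) / (2.0 * np.pi * f))   # phase lag -> time lag
    return np.array(nu), np.array(lag)
```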
Previous studies suggest that a power-law model with an index of \(\sim-0.7\) is able to qualitatively describe the underlying decreasing trend of the hard lags as a function of frequency in BHXRBs (De Marco et al., 2017). The low-frequency hard lags are usually interpreted as the inward propagation of fluctuations in the disc mass accretion rate (Kotov et al., 2001; Arevalo & Uttley, 2006; Ingram & van der Klis, 2013). At high frequencies, soft (negative) X-ray lags are clearly observed in all the observations. The soft lags evolve significantly in both frequency and amplitude throughout the outburst: the frequency of the soft lags increases with decreasing hardness ratio, while their amplitude decreases with decreasing hardness ratio. These high-frequency soft X-ray lags are usually attributed to reverberation, i.e., the light-crossing time delay between the continuum and the reflected emission. However, since the lag is measured between two energy bands that each contain both the irradiating and the reflected components, the observed soft lags are diluted, which is known as the dilution effect (Uttley et al., 2014). Because of the dilution effect, the amplitude of the soft lag cannot accurately reflect the intrinsic reverberation lag. Therefore, we adopt the method introduced by De Marco et al. (2021) and take the frequency at which the soft lag first reaches zero (hereafter \(\nu_{0}\)) as the intrinsic time scale of the reverberation lags; in this way, the dilution effects can be avoided. In Figure 3, the soft lag approaches zero at \(\sim\)5 Hz and then rapidly turns negative as the frequency increases. Notably, for a mildly rebinned lag-frequency spectrum, a case in which only one or two bins cross zero lag may simply be a statistical fluctuation. In addition, since \(\nu_{0}\) corresponds to the critical frequency of phase wrapping, the lag above \(\nu_{0}\) is generally expected to be positive, which is different from what we observe. However, in real cases, the phase wrapping is strongly affected by the response function and the dilution effect. For certain response functions, phase wrapping does not necessarily lead to positive lags (see Fig. 21 in Uttley et al., 2014); on the contrary, the lags may change from zero to negative values above \(\nu_{0}\). In other words, the lag curve does not have to cross the zero-lag line and \(\nu_{0}\) is more like an inflection point, which is the case observed in MAXI J1535-571. Since different sources can have different response functions, we would see different lag curves for different sources. In addition, the inflection point around \(\sim\)5 Hz does not change with the dilution effect, which is consistent with one of the main properties of the characteristic frequency \(\nu_{0}\). Considering that the positive lags near \(\nu_{0}\) are very small, a direct measurement of \(\nu_{0}\) would be strongly affected by the rebin factor.

Figure 2: The 0.5–2.5 keV vs. 3–5 keV lag-frequency spectra of some of the analysed _NICER_ observations of MAXI J1535–571. A negative lag suggests that the soft band lags behind the hard ones. Green triangles, red squares, blue dots, and orange diamonds represent ObsID 1050360106, 1050360110, 1130360103 and the combined SIMS observations, respectively.

Figure 3: The 0.5–2.5 keV vs. 3–5 keV and 0.5–1 keV vs. 3–5 keV lag-frequency spectra of _NICER_ observation 1050360104.

In order to quantitatively measure \(\nu_{0}\) in a model-independent way, we use a logarithmic function \(f(x)=a+b\ln x\) to fit the part of the lag-frequency spectrum near the high-frequency zero point. To reduce the bias introduced by the fitting, we only select four bins for the fit. Some fitting examples are shown in Fig. A1.
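A minimal sketch of this measurement is given below: it fits \(f(\nu)=a+b\ln\nu\) to a few bins around the high-frequency zero crossing of the lag spectrum and returns \(\nu_{0}=\exp(-a/b)\). The bin-selection heuristic and the function name are illustrative assumptions.

```python
import numpy as np

def measure_nu0(freq, lag, n_fit=4):
    """Estimate the characteristic frequency nu_0 where the soft lag first
    reaches zero, by fitting lag = a + b*ln(nu) near the zero crossing."""
    freq, lag = np.asarray(freq), np.asarray(lag)
    i0 = np.argmax(lag < 0)                 # first bin with a negative (soft) lag
    lo = max(i0 - n_fit // 2, 0)
    x, y = np.log(freq[lo:lo + n_fit]), lag[lo:lo + n_fit]
    b, a = np.polyfit(x, y, 1)              # slope b, intercept a
    return np.exp(-a / b)                   # solves a + b*ln(nu_0) = 0
```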
We further plot \(\nu_{0}\) as a function of the hardness ratio and of the QPO frequency. The QPO frequency values are taken from Stiele and Kong (2018). As shown in Figure 4, during the HIMS (blue dots), \(\nu_{0}\) is inversely correlated with the hardness ratio, while positively correlated with the type-C QPO frequency. During the SIMS, \(\nu_{0}\) increases dramatically, reaching values two or three times higher than in the HIMS in both panels. Since \(\nu_{0}\) is used as the time scale of the reverberation mapping, it can qualitatively describe the distance between the corona and the disk. Hence, during the HIMS, the distance between the disk and the corona decreases as the source softens, and when the source enters the SIMS the distance decreases significantly. These behaviours suggest either a scenario in which the inner disc moves inward in the truncated-disk geometry, or a decrease of the corona height in the lamppost geometry. In the former case, the inner disc probably reaches the ISCO in the SIMS. A more detailed discussion is given in Section 5.

Figure 4: The characteristic frequency \(\nu_{0}\) as a function of spectral hardness (left panel) and QPO frequency (right panel).

## 4 Spectral Analysis

In order to further study the reflection characteristics of the source, we use a relativistic reflection model to fit the _Insight_-HXMT energy spectra of MAXI J1535-571. For all the spectra, a neutral Galactic absorption component, modeled with \(TBabs\), is added (Wilms et al., 2000). We adopt the abundances of Wilms et al. (2000), as appropriate for absorption by the Galactic interstellar medium, and the recommended cross-sections of Verner et al. (1996). Fluorescence lines, due to the photoelectric effect on K-shell electrons of silver, are detected by the Si-PIN detectors of ME and contribute to the spectra at 21–24 keV; therefore, the data points in 21–24 keV are ignored. We select a HIMS observation and fit its spectra with the model \(TBabs*relxillCp\) (Model 1). The relativistic reflection model \(relxillCp\) contains both the emission component of the corona and the reflection component of the disk (Dauser et al., 2014). The emission component is described by the Comptonization model \(nthcomp\). Previous measurements have shown that MAXI J1535-571 has a high black hole spin, \(>0.84\) in Xu et al. (2018) and 0.994(2) in Miller et al. (2018). In order to reduce the parameter space, we fix the spin at its maximum value of 0.998, considering that the adoption of other spin values does not change the main conclusions of our fits. Our data cannot simultaneously constrain \(q_{1}\), \(q_{2}\) and \(R_{\rm br}\). In order to allow the inner disk radius to fit to any physically allowed value, we use a simple power law to describe the emissivity profile by linking \(q_{1}\) to \(q_{2}\). If we assume the Newtonian case, i.e., fix \(q_{2}\) at 3, we obtain much worse fits than when \(q_{2}\) is left free to vary.

Figure 5: (Data-model)/error plots of the reflection modeling of the observation P011453500501. The gray points: LE (2 – 10 keV); the red points: ME (10 – 27 keV); the blue points: HE (27 – 80 keV).
The \(\Delta\chi^{2}\) between \(q_{2}\) fixed at 3 and \(q_{2}\) free to vary are 67.24, 52.61, 42.56 and 36.68 for Obs 106, 301, 501 and 901, respectively. An emissivity profile with \(q_{2}=3\) is usually considered as a standard scenario, under the assumption that the intensity of the hard radiation scattered back on to the disk by the corona is proportional to the local disk emissivity (Shakura & Sunyaev, 1973; Dauser et al., 2013). However, non-thermal coronal emission does not necessarily need to behave in the same way as the thermal dissipation of the disk. The interaction between the disc and the corona is more complicated, including the radiation and magnetic processes (Haardt & Maraschi, 1991; Czerny & Goosmann, 2004; Goosmann et al., 2006; Rozanska et al., 2011). \(N_{\rm H}\) is set free since it's affected by the environment around the compact star, such as the accretion disk, interstellar gas, and outflow matter. Figure 6: Spectral fittings with _Insight_-HXMT observations P011453500106, P011453500301, P011453500501 and P011453500901. The gray, red, and blue points are for LE, ME, and HE, respectively. The total model is shown in thick-solid line; the thermal emission _(diskbb)_ from the disc is shown in black dashed line; the Compotonization component _nthcomp_ is shown in blue dot-dashed line, and it is calculated internally by _relxillep_; the relativistic reflection component is shown in red dotted line. The HIMS X-ray spectra can be well fitted by Model 1 with a reduced chi-square \(\chi^{2}_{\nu}=1254/1172=1.07\). The residual is shown in Figure 5. Adding an extra \(diskbb\) component (Model 2) can neither improve the goodness-of-fit nor constrain the disk parameters. If we fix the disk temperature and norm to the values suggested by Kong et al. (2020), the reduced chi-square \(\chi^{2}_{\nu}\) increases to 1899/1231=1.54 (see Figure 5). Obviously, a disk component is not required for HIMS, we only add a \(diskbb\) component for the SIMS observations. We also try to fit the spectrum with a lamppost model, \(TBabs*relxilllpCp\) (Model 3). The model \(relxilllpCp\) is also a member of \(relxill\) family (Dauser et al., 2014). The parameter \(fixReflFrac\) is fixed to 1, so that the reflection fraction can be self-consistently calculated, according to the configurations of other parameters, e.g., the BH spin, inner disk radius, and the height of the lamppost source (Dauser et al., 2014). Model 3 also gives a good fit with a reduced chi-square of 1419/1207=1.18 (see Figure 5). However, by plotting the contour, we find that the corona height \(h\) is degenerated with the inner radius of the disk \(R_{\rm in}\) (see Figure 2) and we cannot determine \(h\) and \(R_{\rm in}\) independently from this model. Note that the degeneracy between \(R_{\rm in}\) and \(h\) is positively correlated here, which is in contrast to the regular case. We find it could be caused by the degeneracy among the other parameters. In Table 3, we give the best-fitting values of the parameters from HXMT observations using Model 3, from which it suggests that the evolution of \(h\) and \(R_{\rm in}\) is substantially affected by the degeneracy. Nevertheless, at the 90% confidence level, our results strongly suggest that the accretion disk is truncated in the HIMS. Since we are mainly concerned with the detailed geometric evolution of the source, Model 1 is finally adopted. Although Model 1 has no assumption about the coronal geometry, it can give information about the inner radius of the disk. 
In Figure 6, we show the spectra taken from the LHS, HIMS, and SIMS. The dip in the residuals around the Compton hump region is due to the calibration of the high-energy detector. Due to the low S/N, the energy spectra of some observations could not constrain the parameters well, and they are therefore removed from our analysis. The fitting parameters for all the remaining observations are shown in Table 1. The evolution of the spectral parameters is given in Figure 7. The model gives an inclination angle \(i\) of \(51.2^{+0.6}_{-1.2}\) and an iron abundance \(A_{\rm Fe}\) of \(0.52^{+0.03}_{-0.01}\) (in solar units). The evolution trend of the photon index \(\Gamma\) is consistent with the previous _Insight_-HXMT results given by Kong et al. (2020) and the _Swift_ results given by Tao et al. (2018). The ionization parameter log\(\xi\) varies between 3.6 and 4.4. The column density \(N_{\rm H}\) (in units of \(10^{22}\)cm\({}^{-2}\)) is higher than the Galactic absorption column density \(N_{\rm H}=1.5\times 10^{22}\)cm\({}^{-2}\) (Kalberla et al., 2005), evolving in the range of 4.3 to 5.1. A high \(N_{\rm H}\), varying by nearly 20% during the outburst, has also been observed in previous studies (Xu et al., 2018; Tao et al., 2018; Kong et al., 2020). Tao et al. (2018) propose that when the accretion rate increases, the outflow from the disk leads to the observed \(N_{\rm H}\) increase. The inner disk radius \(R_{\rm in}\) is truncated at about 25 \(R_{\rm ISCO}\) at the beginning of the LHS and then steadily decreases to 10 \(R_{\rm ISCO}\) at the end of the HIMS. After the source enters the SIMS, \(R_{\rm in}\) is close to the ISCO, as shown in Figure 7. These results suggest that MAXI J1535-571 most probably has a truncated disk. As the inner edge of the truncated disk moves inwards towards the BH, the relative distance between the corona and the disk decreases, which is also supported by the results of the reverberation lag analysis in Section 3.

\begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \({}^{a}\)ObsID & \(N_{\rm H}\) & \(T_{\rm in}\) & \(N_{\rm disk}\) & \(q\) & \(R_{\rm in}\) & \(\Gamma\) & log\(\xi\) & \(kT_{\rm e}\) & \(R_{\rm f}\) & \(N_{\rm rel}\) & \(\chi^{2}_{\rm red}(d.o.f)\) \\ & (\(10^{22}\)cm\({}^{-2}\)) & (keV) & (\(10^{3}\)) & & (ISCO) & & & (keV) & & & \\ \hline 106 & \(4.34^{+0.05}_{-0.10}\) & \(-\) & \(-\) & \(2.2^{+0.9}_{-0.1}\) & \(26^{+20}_{-13}\) & \(1.75^{+0.01}_{-0.01}\) & \(3.6^{+0.02}_{-0.03}\) & \(28.2^{+1.2}_{-1.1}\) & \(0.38^{+0.04}_{-0.04}\) & \(0.05^{+0.01}_{-0.01}\) & 1.06(1206) \\ 145 & \(4.96^{+0.10}_{-0.02}\) & \(-\) & \(-\) & \(2.6^{+0.5}_{-0.6}\) & \(38^{+9}_{-15}\) & \(2.31^{+0.01}_{-0.01}\) & \(3.87^{+0.04}_{-0.07}\) & \(33.9^{+2.4}_{-0.6}\) & \(0.24^{+0.04}_{-0.02}\) & \(0.28^{+0.02}_{-0.02}\) & 1.19(1174) \\ 301 & \(4.75^{+0.04}_{-0.03}\) & \(-\) & \(-\) & \(1.8^{+0.4}_{-0.4}\) & \(35^{+12}_{-19}\) & \(2.28^{+0.01}_{-0.01}\) & \(3.87^{+0.03}_{-0.04}\) & \(25.7^{+1.8}_{-2.8}\) & \(0.20^{+0.02}_{-0.03}\) & \(0.28^{+0.01}_{-0.02}\) & 1.07(1204) \\ 401 & \(4.74^{+0.03}_{-0.03}\) & \(-\) & \(-\) & \(1.2^{+0.8}_{-0.6}\) & \(34^{+17}_{-11}\) & \(2.34^{+0.01}_{-0.01}\) & \(4.26^{+0.02}_{-0.02}\) & \(34.8^{+1.6}_{-0.8}\) & \(0.32^{+0.02}_{-0.02}\) & \(0.26^{+0.01}_{-0.01}\) & 1.08(1206) \\ 501 & \(5.04^{+0.07}_{-0.01}\) & \(-\) & \(-\) & \(1.4^{+0.4}_{-0.3}\) & \(8.8^{+3.4}_{-3.1}\) & \(2.42^{+0.01}_{-0.01}\) & \(4.43^{+0.06}_{-0.02}\) & \(43.6^{+2.9}_{-0.7}\) & \(0.35^{+0.03}_{-0.01}\) & \(0.34^{+0.01}_{-0.01}\) & 1.07(1172) \\ 601 & \(5.05^{+0.08}_{-0.14}\) & \(-\) & \(-\) & \(1.4^{+0.2}_{-0.3}\) & \(10.3^{+3.0}_{-2.2}\) & \(2.43^{+0.01}_{-0.01}\) & \(4.27^{+0.06}_{-0.01}\) & \(42.9^{+7.8}_{-0.5}\) & \(0.37^{+0.06}_{-0.05}\) & \(0.35^{+0.03}_{-0.03}\) & 1.06(1150) \\ 901 & \(5.10^{+0.11}_{-0.07}\) & \(1.20^{+0.01}_{-0.03}\) & \(1.93^{+0.04}_{-0.05}\) & \(6.2^{+0.5}_{-0.2}\) & \(1.14^{+0.14}_{-0.08}\) & \(2.64^{+0.02}_{-0.01}\) & \(4.34^{+0.31}_{-0.16}\) & \(289^{+30}_{-30}\) & \(25.0^{+0.03}_{-0.02}\) & \(0.58^{+0.02}_{-0.04}\) & 1.15(1204) \\ 906 & \(4.62^{+0.08}_{-0.05}\) & \(1.15^{+0.01}_{-0.01}\) & \(4.38^{+0.08}_{-0.06}\) & \(7.7^{+0.4}_{-0.3}\) & \(1.61^{+0.16}_{-0.21}\) & \(2.76^{+0.01}_{-0.01}\) & \(3.71^{+0.23}_{-0.07}\) & \(400^{+0}_{-15}\) & \(0.55^{+0.02}_{-0.01}\) & \(0.34^{+0.02}_{-0.02}\) & 1.13(1204) \\ 912 & \(4.64^{+0.12}_ \\ \hline \end{tabular} \end{table}

## 5 Summary and Discussion

In this paper, we have performed a detailed spectral-timing analysis of the BHXRB MAXI J1535-571 during its 2017 outburst, using observations from _NICER_ and _Insight_-HXMT. We find that the geometry of the inner accretion flow has evolved significantly from the LHS to the SIMS. In particular, the characteristic frequency \(\nu_{0}\) of the reverberation lags increases during the HIMS, as shown in Figure 4, suggesting that the relative distance between the disk and the corona decreases as the spectrum softens. We further studied the reflection characteristics in a broad energy band with _Insight_-HXMT data. We propose that the disk is truncated in the hard state and reaches the ISCO in the soft intermediate state. During the HIMS, the reverberation characteristic frequency \(\nu_{0}\) shows a positive correlation with the type-C QPO frequency, as shown in Figure 4. In the L-T precession model, when the inner disk radius moves inwards, the QPO frequency increases; meanwhile, the relative distance between the disk and the corona decreases, leading to an increase of the characteristic frequency \(\nu_{0}\) of the reverberation lags. According to Ingram and Motta (2014), the QPO frequency in the L-T precession model can be calculated by the following formula:

\[\frac{f_{\rm QPO}}{f_{\rm K}^{*}}=\left(1-\sqrt{1-\sqrt{2}ar^{3/2}+0.75a^{2}r^{2}}\right) \tag{1}\]

\[f_{\rm K}^{*}=\left(\frac{c}{\pi R_{\rm g}}\right)\left[\left(\frac{2}{r}\right)^{3/2}+a\right]^{-1}, \tag{2}\]

where \(r=R_{\rm g}/R_{\rm in}\), \(a\) is the black hole spin, \(M\) is the black hole mass, and \(f_{\rm K}^{*}\) is the Keplerian frequency. The characteristic frequency \(\nu_{0}\) is inversely proportional to the intrinsic soft lag amplitude, and further proportional to \(1/R_{\rm in}\) under the truncated disk geometry (De Marco et al., 2021). In order to test whether the geometry suggested by the evolution of \(\nu_{0}\) is consistent with the prediction of the L-T precession model, we multiply \(1/r\) by a constant \(A\) and use the deformed formula to fit the \(\nu_{0}\)-QPO frequency relation. The fitting result is given in Figure 8 (\(A=132\pm 24\)), assuming a black hole mass of ten solar masses and a spin of 0.998. \(\nu_{0}\) and \(1/R_{\rm in}\) show a high degree of consistency in terms of the type-C QPO frequency. This correlation provides strong evidence of L-T precession under the truncated disk geometry. It is worth noting that the rigid-body precession model is not used here because it involves many variable parameters.
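To make this comparison concrete, the following sketch evaluates Eqs. (1)-(2) exactly as written above, with \(r=R_{\rm g}/R_{\rm in}\) and the ten-solar-mass, \(a=0.998\) values quoted for the fit; the physical constants and the function name are our own additions.

```python
import numpy as np

C = 2.998e8                 # speed of light [m/s]
G = 6.674e-11               # gravitational constant [SI]
M_SUN = 1.989e30            # solar mass [kg]

def lt_qpo_frequency(r_in_in_rg, a=0.998, m_bh=10.0):
    """Type-C QPO frequency from the test-particle L-T precession relation
    of Eqs. (1)-(2), for an inner radius R_in given in units of R_g."""
    r_g = G * m_bh * M_SUN / C**2
    r = 1.0 / r_in_in_rg                        # r = R_g / R_in as defined above
    f_k = (C / (np.pi * r_g)) / ((2.0 / r)**1.5 + a)         # Eq. (2)
    return f_k * (1.0 - np.sqrt(1.0 - np.sqrt(2.0) * a * r**1.5
                                + 0.75 * a**2 * r**2))        # Eq. (1)

# e.g. lt_qpo_frequency(10.0) gives the predicted QPO frequency for R_in = 10 R_g
```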
What we concern more is whether the \(R_{\rm in}\) obtained by the reverberation mapping and the \(R_{\rm in}\) predicted by the L-T precession model have the same evolutionary trends. Therefore, we used the simplified test-particle precession model to avoid the interference from other parameters. We intend to make a qualitative conclusion rather than strict quantitative calculations. In the SIMS, \(\nu_{0}\) shows a dramatic increase in Figure 4, implying that the relative distance between the disk and corona is significantly smaller than is the case in the HIMS. Previous studies have suggested that when the disk reaches the ISCO, the collapses of the inner flow could trigger a similar collapse of the radio emission (Done et al., 2007). The inner disk probably has reached the ISCO in the SIMS for this source, given that radio flares have been observed in the SIMS of this source (Chauhan et al., 2019). It is interesting to mention that a recent work, performed on the systematic study of reverberation lags of multiple BHXRBs, shows an opposite trend from ours on MAXI J1535-571 (Wang et al., 2022). The difference could be caused by the selections of energy bands, the criteria of state classification, or the method for measuring characteristic frequencies. In addition, the strong absorption of MAXI J1535-571 makes it more complicated to measure its soft lags and characteristic frequencies. It should be emphasized that the rebin factor can significantly change the directly measured values of \(\nu_{0}\). Thus, we use a logarithmic function to measure the characteristic frequencies \(\nu_{0}\). Our results also suggest that the soft lags evolution of MAXI J1535-571 seems to be different compared to MAXI J1820+070. De Marco et al. (2021) also found that \(\nu_{0}\) increases during the hard state but decreases during the transition to the soft state in MAXI J1820+070. They interpreted this as the emission from a ballistic jet becomes significant so that a larger area of the disk may be irradiated. MAXI J1535-571 also showed significant jet activity in the SIMS (Russell et al., 2019, 2020; Vincentelli et al., 2021), but \(\nu_{0}\) increased compared to the HIMS. The different evolution trends of \(\nu_{0}\) in the SIMS may indicate that MAXI J1535-571 has a different disk-corona geometry than MAXI J1820+070. In particular, the jet base of MAXI J1535-571 might remain close to the black hole during the SIMS, causing the hard photons to mainly irradiate the inner part of the disk. Our spectral fitting results suggest that during the outburst, the inner disk moves inwards to the BH, from 38 \(R_{\rm ISCO}\) to \(<2\)\(R_{\rm ISCO}\), which corresponds to the increase of \(\nu_{0}\). In order to compare the relation between \(R_{\rm in}\) and \(1/\nu_{0}\), we use a linear function to fit it (see Fig 9). The figure shows that when \(R_{\rm in}\) reaches the ISCO, \(1/\nu_{0}\) decreases significantly. These suggest that the accretion disk is truncated during the HIMS while reaches the ISCO during the SIMS. It is worth noting that despite the high degeneracy between the corona height \(h\) and the inner disk radius \(R_{\rm in}\) (see Figure 11), the lamppost model \(relxilllpCp\) also prefers the disk to be truncated (see Table 12). A shrinking corona can also bring the changes in the reverberation mapping lags. However, it's hard to tell whether the corona is shrinking independently from \(R_{\rm in}\) because of the parameter degeneracy. 
On the contrary, the \(relxillCp\) model does not show an obvious degeneracy. However, if we consider that the type-C QPOs are produced by the L-T precession, a truncated disk geometry is a more reasonable scenario, since a lamppost geometry cannot produce a precessing corona. In conclusion, we prefer the interpretation with a lower corona height and a larger truncation of the inner disk. Of course, we cannot completely rule out the interpretations of other models.

Figure 7: The evolution of _Insight_-HXMT spectral parameters. The parameters are listed in Table 1.

Figure 8: \(\nu_{0}\) as a function of QPO frequency. The red line shows the best fit of the L-T precession model (95% confidence level).

This work has made use of the data from the _Insight_-HXMT mission, a project funded by the China National Space Administration (CNSA) and the Chinese Academy of Sciences (CAS), and of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), a service of the Astrophysics Science Division at NASA/GSFC. This work is supported by the National Key R&D Program of China (2021YFA0718500) and the National Natural Science Foundation of China (NSFC) under grants U1838201, U1838202, 11733009, 11673023, U1838111, U1838108, U1938102, U2038104, U1838110, U1838113, U1838115, U2031205, 12133007, and 12233002. We thank Mariano Mendez and Yuexin Zhang for helpful discussions.

Figure 9: The reverberation time-scale as a function of the inner disk radius \(R_{\rm in}\). We select the quasi-simultaneous observations of _NICER_ and _Insight_-HXMT (\(\pm\) 1 day). The blue dashed line shows the best linear fit (95% confidence level).

## Appendix

Figure A1: The 0.5–2.5 keV vs. 3–5 keV lag-frequency spectra of all selected _NICER_ observations of MAXI J1535–571. The red dashed line represents the best logarithmic fit.

Figure A2: Two-dimensional projections of the posterior probability distributions derived from the MCMC analysis for the parameters in \(relxilllpCp\). The contours show the 1-, 2- and 3-\(\sigma\) confidence levels. This illustration corresponds to the spectral fitting of ObsID P011453500501. The figure is produced using the corner package (Foreman-Mackey, 2016).
2305.19425
Optical Truss Interferometer for the LISA Telescope
The LISA telescopes must exhibit an optical path length stability of $\frac{\mathrm{pm}}{\sqrt{\mathrm{Hz}}}$ in the mHz observation band to meet mission requirements. The optical truss interferometer is a proposed method to aid in the ground testing of the telescopes, as well as a risk-mitigation plan for the flight units. This consists of three Fabry-Perot cavities mounted to the telescope which are used to monitor structural displacements. We have designed and developed a fiber-based cavity injection system that integrates fiber components, mode-matching optics, and a cavity input mirror into a compact input stage. The input stages, paired with return mirror stages, can be mounted to the telescope to form the optical truss cavities. We performed a thorough sensitivity analysis using various simulation methods to support the fabrication and assembly of three first-generation prototype cavities, each of which exhibited a satisfactory performance based on our models.
Kylan Jersey, Ian Harley-Trochimczyk, Yanqi Zhang, Felipe Guzman
2023-05-30T21:42:07Z
http://arxiv.org/abs/2305.19425v2
# Optical Truss Interferometer for the LISA Telescope ###### Abstract The LISA telescopes must exhibit an optical path length stability of \(\frac{\mathrm{pm}}{\sqrt{\mathrm{Hz}}}\) in the mHz observation band to meet mission requirements. The optical truss interferometer is a proposed method to aid in the ground testing of the telescopes, as well as a risk-mitigation plan for the flight units. This consists of three Fabry-Perot cavities mounted to the telescope which are used to monitor structural displacements. We have designed and developed a fiber-based cavity injection system that integrates fiber components, mode-matching optics, and a cavity input mirror into a compact input stage. The input stages, paired with return mirror stages, can be mounted to the telescope to form the optical truss cavities. We performed a thorough sensitivity analysis using various simulation methods to support the fabrication and assembly of three first-generation prototype cavities, each of which exhibited a satisfactory performance based on our models. (c)2023 Optica Publishing Group under the terms of the Open Access Publishing Agreement. Users may use, reuse, and build upon the article, or use the article for text or data mining, so long as such uses are for noncommercial purposes and appropriate attribution is maintained. All other rights are reserved. ## I Introduction The Laser Interferometer Space Antenna (LISA) [1; 2] will be the first space-borne gravitational wave observatory meant for the detection of gravitational waves emitted by low-frequency sources in the observation band between 0.1 mHz and 1 Hz. This mission is led by the European Space Agency (ESA), with contributions from the National Aeronautics and Space Administration (NASA) and an international consortium of scientists, to fly a constellation of three spacecraft, each separated by 2.5 million kilometers to form an equilateral triangle in their formation. The spacecraft will relay laser beams between each other to form a set of long baseline laser interferometers whose endpoints are free flying test masses housed by the spacecraft. The primary interferometric goal is to measure variations in the separation between the free flying test masses onboard different spacecraft with a sensitivity of around \(10\,\frac{\mathrm{pm}}{\sqrt{\mathrm{Hz}}}\) level at mHz frequencies[3]. The LISA telescopes are bidirectional optical systems used to expand and send the outgoing beams to be captured by the other spacecraft while also capturing a small fraction of large 15 km-wide incoming beams from the other spacecraft. Since these telescopes lie directly in the optical path of the long arm interferometers, their structure must be dimensionally stable at the \(\frac{\mathrm{pm}}{\sqrt{\mathrm{Hz}}}\) level within the mHz frequency band to allow for the proper detection of incoming gravitational wave signals, and as such, they are to be constructed with highly stable low-expansion materials. During ground testing, telescope prototypes must be measured and verified to meet the stability requirements. While the primary measurement is to probe the overall path length stability along the optical axis of the telescope, there is also a motivation to measure the length stability at multiple locations around the telescope aperture to reconstruct wavefront errors introduced by structural distortions. 
This can be done with an optical truss interferometer (OTI), a set of three Fabry-Perot cavities that can be mounted around the telescope to monitor structural distortions over time. The OTI system depicted in Figure 1 is composed of three optical truss cavities, each mounted length-wise at different lateral positions around the telescope. We have designed and developed compact fiber-based units that integrate single-mode polarization-maintaining (PM) fiber, mode matching optics, and a cavity input mirror into a modular input stage for each OTI cavity[4]. These input stages can be mounted around the primary mirror of the telescope by means of hydroxide catalysis bonding[5], while the return stages, which house the cavity return mirrors, can be attached in a similar manner around the secondary mirror of the telescope. Utilizing a Pound-Drever-Hall (PDH) [6] frequency locking scheme, displacements in each cavity along the optical axis will cause proportional variations in the frequency of light emitted from the corresponding 1064 nm laser. Ideally, variations in the cavity lengths will be solely due to displacements in the telescope structure along the three different lateral positions. Thus, monitoring the laser frequencies stabilized to each OTI cavity will effectively monitor the displacement noise in the structure on which the cavities are mounted. The optical truss interferometer serves as a risk-mitigation plan to aid in the verification of \(\frac{\mathrm{pm}}{\sqrt{\mathrm{Hz}}}\) stability in the telescope prototypes during ground testing and, if necessary, can be used to monitor the telescopes during flight. This
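Since a PDH-locked laser tracks the cavity resonance, a fractional change in cavity length maps directly onto a fractional change in laser frequency, \(\delta L/L=-\delta\nu/\nu\). The short sketch below converts a measured frequency-noise amplitude spectral density into the equivalent cavity displacement; the function name and the default wavelength are illustrative assumptions rather than part of the flight design.

```python
C_LIGHT = 2.998e8  # speed of light [m/s]

def frequency_noise_to_displacement(freq_asd_hz, cavity_length_m, wavelength_m=1064e-9):
    """Convert a laser frequency-noise ASD (Hz/sqrt(Hz)) of a PDH-locked laser
    into the equivalent cavity length-change ASD (m/sqrt(Hz)) via dL/L = dnu/nu."""
    nu0 = C_LIGHT / wavelength_m          # optical carrier frequency (~282 THz at 1064 nm)
    return freq_asd_hz * cavity_length_m / nu0
```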
2308.09812
Reliability and Delay Analysis of 3-Dimensional Networks with Multi-Connectivity: Satellite, HAPs, and Cellular Communications
Aerial vehicles (AVs) such as electric vertical take-off and landing (eVTOL) aircraft make aerial passenger transportation a reality in urban environments. However, their communication connectivity is still under research to realize their safe and full-scale operation. This paper envisages a multi-connectivity (MC) enabled aerial network to provide ubiquitous and reliable service to AVs. Vertical heterogeneous networks with direct air-to-ground (DA2G) and air-to-air (A2A) communication, high altitude platforms (HAPs), and low Earth orbit (LEO) satellites are considered. We evaluate the end-to-end (E2E) multi-hop reliability and network availability of the downlink of AVs for remote piloting scenarios, and control/telemetry traffic. Command and control (C2) connectivity service requires ultra-reliable and low-latency communication (URLLC), therefore we analyse E2E reliability and latency under the finite blocklength (FBL) regime. We explore how different MC options satisfy the demanding E2E connectivity requirements taking into account antenna radiation patterns and unreliable backhaul links. Since providing seamless connectivity to AVs is very challenging due to the line-of-sight (LoS) interference and reduced gains of downtilt ground base station (BS) antennas, we use coordinated multi-point (CoMP) among ground BSs to alleviate the inter-cell interference. Furthermore, we solve an optimization problem to select the best MC path under the quality of service (QoS) constraints. We maximize spectral efficiency (SE) to specify the optimum MC path with the minimum number of required links. Based on the simulation results, we find out that even with very efficient interference mitigation, MC is the key enabler for safe remote piloting operations.
Fateme Salehi, Mustafa Ozger, Cicek Cavdar
2023-08-18T20:40:41Z
http://arxiv.org/abs/2308.09812v1
Reliability and Delay Analysis of 3-Dimensional Networks with Multi-Connectivity: Satellite, HAPs, and Cellular Communications ###### Abstract Aerial vehicles (AVs) such as electric vertical take-off and landing (eVTOL) aircraft make aerial passenger transportation a reality in urban environments. However, their communication connectivity is still under research to realize their safe and full-scale operation. This paper envisages a multi-connectivity (MC) enabled aerial network to provide ubiquitous and reliable service to AVs. Vertical heterogeneous networks with direct air-to-ground (DA2G) and air-to-air (A2A) communication, high altitude platforms (HAPs), and low Earth orbit (LEO) satellites are considered. We evaluate the end-to-end (E2E) multi-hop reliability and network availability of the downlink of AVs for remote pilotim scenarios, and control/elementy traffic. Command and control (C2) connectivity service requires ultra-reliable and low-latency communication (URLLC), therefore we analyse E2E reliability and latency under the finite blocklength (FBL) regime. We explore how different MC options satisfy the demanding E2E connectivity requirements taking into account antenna radiation patterns and unreliable backhaul links. Since providing seamless connectivity to AVs is very challenging due to the line-of-sight (LoS) interference and reduced gains of downtilt ground base station (BS) antennas, we use coordinated multi-point (CoMP) among ground BSs to alleviate the inter-cell interference. Furthermore, we solve an optimization problem to select the best MC path under the quality of service (QoS) constraints. We maximize spectral efficiency (SE) to specify the optimum MC path with the minimum number of required links. Based on the simulation results, we find out that even with very efficient interference mitigation, MC is the key enabler for safe remote pilotim operations. reliability, network availability, multi-connectivity, aerial vehicles, URLLC, coordinated multi-point. ## I Introduction Future aerial communications (FACOM) is defined as the connectivity ecosystem incorporating emerging aerial use cases with different aerial vehicles (AVs) and range of connectivity solutions [1] such as high altitude platforms (HAPs), air-to-air (A2A), direct air-to-ground (DA2G) communication, and satellites. One critical emerging scenario is remote pilotim of AVs where connectivity has a key role to ensure the safe operations. In command and control (C2) communication links, short-packet control information needs to be transmitted with ultra-reliable and low-latency communications (URLLC). AVs have different mission and flight characteristics with diverse quality of service (QoS) requirements such as data rate, end-to-end (E2E) latency and communication reliability. AVs such as flying taxis and electric vertical take-off and landing (eVTOL) enable passengers to be transported over several tens of kilometers at low altitudes as an extension of the urban transportation system [1]. There is a growing interest in the design and performance analysis of FACOM to provide connectivity to AVs through different technologies. Performance of DA2G communication is studied to connect AVs directly with the ground cellular networks through beamforming and 5G networks [2, 3]. Network architectures and business models to provide high capacity DA2G communication is studied for passenger aircraft use case [4]. 
A scenario for beyond visual line-of-sight (BV-LoS) operation for remote pilotim of unmanned aerial vehicles (UAVs) in sub-6 GHz [5] and millimeter waves [6] is studied, which utilizes different technologies such as mobile edge computing and augmented reality. Multi-hop A2A communication is considered in [7] to extend DA2G communication without considering the reliability performance. In [8], the authors propose macro-diversity scheme considering a terrestrial and aerial hybrid network for ensuring URLLC services under the centralized RAN with ideal backhaul. The authors of [9] exploit the macro-diversity gain of the distributed multi-antenna systems and the array gain of the centralized multi-antenna systems. They maximize the availability of the C2 communication links between UAVs and a ground base station (BS) by optimizing the altitude of UAVs, the duration of the uplink and downlink phases, and the antenna configuration. HAPs communication is envisioned to be a part of non-terrestrial networks (NTNs) to ensure continuous, ubiquitous and scalable services [10]. HAPs may be used in several use cases ranging from extension of terrestrial coverage for white spot areas to disaster recovery support. They offer potential benefits such as high capacity links with large footprints and favorable line of sight (LoS) link conditions and computation offloading not only in suburban but also urban areas [11]. Beyond DA2G and A2A communications, HAPs communication serves as a connectivity option for FACOM. In addition to providing connectivity to terrestrial users, HAPs enable reliable connectivity to AVs [12]. One use case of HAPs is to support highly reliable and low latency communication for remote pilotim of AVs in a multi-link connectivity setting [13]. Furthermore, HAPs have more computational power than AVs, hence they can provide an intelligence layer in the sky for AVs [11]. One of the methodologies to provide URLLC services with out intervention in the physical layer design is to utilize multi-connectivity (MC). MC by introducing link/path diversity can improve both latency and reliability performance. There are various architectures for MC with different means of diversity such as BS diversity, network diversity, and technology diversity. Coordinated multi-point (CoMP) architecture belongs to the first category, i.e., BS diversity, where multiple BSs from the same network simultaneously serve an AV to improve the overall communication reliability. In this regard, the authors of [14] propose a 3D CoMP model for A2A communication, where UAVs were employed both as aerial BSs as well as aerial UEs. The authors of [15] use CoMP transmission for providing seamless connectivity to UAVs, and the coverage probability is studied for two scenarios with static hovering UAVs and mobile UAVs. In [16], CoMP in the sky is proposed for uplink communications and UAV placement and movement are optimized to maximize the network throughput. None of the above works consider CoMP for URLLC with E2E performance analysis. For MC with network diversity we can refer [17] and [18], where the authors conduct field measurements with a UAV to evaluate the improvements in reliability and latency over multiple mobile network operators (MNOs). They also report performance gain with multiple links compared with single link due to the performance variations of the MNOs at different altitudes and environments. 
Moreover, in [19], a combination of a public and dedicated cellular network with multipath transmission control protocol (MPTCP) is proposed for maritime search and rescue missions of UAVs. The results show that the multi-link protocol increases the range and improves the data rate performance. MC can also be considered using different radio access technologies (RATs). The authors of [20] aim to provide robust bandwidth allocation for retaining the continuous and stable connectivity among dynamic system components. To this end, they present an analytic modeling of MPTCP with a satellite link and WiFi access points to control a swarm of UAVs without guaranteeing the reliability and delay requirements. The authors of [21] present field measurements of triple-redundant multi-link architecture employing cellular, WiFi and LoRa for the C2 link connectivity with communication range and latency performance criteria. Their redundancy design employs a cellular network as the primary link and the other two as fallback links when there is no cellular coverage. None of the previous studies consider E2E paths for C2 communications containing backhaul links and network architectures. In [13], we consider a heterogeneous network of ground BSs, relay AVs, and a HAP to provide connectivity for AVs. We take into account practical antenna configurations with unreliable backhaul links. The automatic repeat-request (ARQ) mechanism and frequency diversity is employed to improve reliability of radio links. Mean-value analysis of E2E reliability and latency is considered for the performance evaluation. Reliability is a critical metric in BVLoS control of AVs. In this paper we capture both error rate and delay analysis, while we also define a service specific availability metric. Network availability is defined as the probability that both reliability and delay requirements can be met simultaneously. In our study, different from prior works in [2, 3, 4, 5, 6, 7, 8, 9, 14, 15, 16, 17, 18, 19, 20, 21], we aim to investigate the minimum required connectivity links and spectrum for the safe and full-scale remote piloting operation of BVLoS. As concepts of eVTOLs are of recent venture, to the best of our knowledge, the literature has not yet covered the connectivity needs and potential solutions for the C2 communication links of these aerial platforms. In this regard, we consider a rigorous analysis of E2E delay and reliability of communication paths, which includes different delay and error parameters in wired backhaul links, transmitter's queue, and wireless links with small- and large-scale fading. 3-Dimensional (3D) MC consisting of DA2G, A2A, HAP, and low Earth orbit (LEO) satellite communications is considered as the enabler of stringent requirements of remote piloting operation. Additionally, to improve the reliability of DA2G communication we exploit CoMP in joint transmission (JT) mode among ground BSs. We characterize the effect of different parameters such as data rate, bandwidth, CoMP cluster size, interference, and backhaul failure on the latency, reliability, and network availability and finally investigate how multi-path connectivity of RAT diversity can guarantee the requirements for safe operation. The main contributions of this paper can be summarized as follows. * We consider RAT diversity of DA2G, A2A, HAP, and LEO satellite to provide seamless connectivity for remote piloting of AVs. 
* We utilize ground BS diversity, namely JT CoMP, for interference mitigation of DA2G communication and increase reliability. * We present the E2E analysis of latency and reliability of C2 communications under finite blocklength (FBL) regime, and automatic repeat-request (ARQ) mechanism is employed to improve reliability of radio links. * We consider network architecture with unreliable backhaul links and buffer queues, as well as, practical antenna configurations for ground BSs with downtilt antennas and HAP/satellite with multi-beam antenna patterns. * We compare the performance of different MC options from reliability and network availability perspective through numerical analysis with extensive Monte-Carlo simulations. * We solve an optimization problem based on the brute-force method to find the best MC path with a minimum number of links enable to ensure the QoS requirements. The paper is organized as follows. Section II presents the system model consisting of the considered scenario, key performance indicators (KPIs), and the methodology. Section III presents the channel models of communication links, antenna radiation pattern of ground BSs and HAP/satellite, and SINR calculation. Section IV presents E2E latency and reliability analysis of different RATs and communication paths. Section VI discusses the numerical results and investigate the requirements with the MC options to enable remote piloting operation in different system parameters. Section VII concludes the paper. ## II System Model In this section, we introduce the considered scenario with its requirements and related KPIs as well as MC as the methodology for providing them. ### _Remote Piloting of AVs and QoS Requirements_ BVLoS remote piloting of an AV requires a communication path between the remote pilot and the AV. In this concept, ground pilots remotely navigate an AV, which can supply pilots with a first-person view by on-board cameras and other useful sensor data. Remote piloting operation emphasizes the demand for resilient E2E communication paths from the remote pilots to the AVs. As eVTOLs and UAVs occupy the sky, they must coordinate with one another as well as other AVs to efficiently share the low-altitude sky. Unmanned traffic management (UTM) introduce the regulation of these vehicles in a more-autonomous manner compared with the air traffic management (ATM). Machine-type communications (MTC) can become the dominant connectivity type in UTM rather than the human-centric ATM communication in the future [1]. Based on [1], control/telemetry traffic for remote piloting operations of eVTOLs requires a data rate about \(0.25\sim 1\) Mbps, E2E latency less than \(10\sim 150\) ms, and the minimum communication reliability \(99.999\%\). ### _Key Performance Indicators_ The most important KPIs related to URLLC are latency, reliability, and network availability. **Latency** is defined as the delay a packet experiences from the ingress of a protocol layer at the transmitter to the egress of the same layer at the receiver [22]. In the URLLC literature, the **reliability** is reflected either by packet loss probability or by latency, which we call them error-based and delay-based reliability, respectively. The E2E packet loss probability, \(\mathcal{E}_{\rm E2E}\), includes different components such as backhaul failure probability, queueing delay violation, decoding error probability, and so on. 
Therefore, in _error-based reliability_, the reliability requirement which is defined by \[\mathcal{R}=1-\mathcal{E}_{\rm E2E}\, \tag{1}\] can be satisfied if the overall packet loss probability does not exceed \(\varepsilon^{\rm th}\). On the other hand, using the convention that dropped packets have infinite latency, authors of [22] define the reliability as the probability that the latency does not exceed a pre-defined threshold \(D^{\rm th}\). Thus, in _delay-based reliability_ \[\mathcal{R}=\Pr\left\{\mathcal{D}_{\rm E2E}\leq D^{\rm th}\right\}, \tag{2}\] where \(\mathcal{D}_{\rm E2E}\) is the E2E delay from the transmitter to the receiver. Different from latency and reliability, which are the QoS required by each user, **availability** captures the performance of the network how it can respond to the demands of the users, and is another key performance metric for URLLC. In the conventional systems, availability is specified by the packet loss probability which we call it _error-based network availability_, i.e., \[P_{\rm A}=\Pr\left\{\mathcal{E}_{\rm E2E}\leq\varepsilon^{\rm th}\right\}. \tag{3}\] However, for URLLC services, availability is defined as the probability that the network can support a service with a target QoS requirement on both latency and reliability [23]. Based on the above definitions, the availability for URLLC services can be described by the following equation which we call it as _delay-aware network availability_ \[P_{\rm A}=\Pr\left\{\mathcal{E}_{\rm E2E}\leq\varepsilon^{\rm th},\mathcal{D }_{\rm E2E}\leq D^{\rm th}\right\}. \tag{4}\] Here \(\varepsilon^{\rm th}\) and \(D^{\rm th}\) characterize the QoS requirements in terms of packet error and delay. ### _Multi-Connectivity_ MC using multiple communication paths simultaneously is the key technology to reduce latency and increase reliability to fulfill strict requirements of AVs' remote piloting. As shown in Fig. 1, the system model consists of an integration of multiple RATs including DA2G, A2A, HAP, and LEO satellite communication. For all the RATs, we assume particular frequency band with full frequency reuse such that each link experiences probabilistic interference from all the corresponding links. The E2E path of each RAT is illustrated in Fig. 2, a directive path starting with the core network, traversing the backhaul link and the radio link (downlink) to reach the destination AV, which is the AV that remote pilot wants to navigate. The communication links consist of ground BS-to-AV (G2A), HAP ground station-to-HAP (G2H), satellite ground station-to-LEO satellite (G2S), and AV/HAP/LEO satellite-to-AV (A2A/H2A/S2A). In Fig. 2, four different E2E paths are shown, i.e., the red line which illustrates "DA2G E2E path" includes the backhaul link to the ground BS and G2A link. "A2A E2E path", illustrated with orange line is defined as the path consisting of backhaul, G2A and A2A links. The green line illustrates the "HAP E2E path" defined as the path consisting of backhaul link to the HAP ground station, G2H and H2A links. Finally, the "LEO satellite E2E path" indicated with violet line includes the backhaul link to the satellite ground station, G2S and S2A links. ### _Transmission and Combining Strategy_ We consider packet cloning for transmitting the message from the remote pilot to the AV over independent links. In this approach, the source sends copies of the message through each of the available links [24]. The combining scheme is joint decoding, where each link is decoded individually. 
Thus, the overall packet loss probability of \(N\) parallel transmission paths is \[\mathcal{E}_{\rm E2E}=\prod_{i=1}^{N}\mathcal{E}_{\rm E2E}^{i}, \tag{5}\] where \(\mathcal{E}_{\rm E2E}^{i}\) is the error probability of the \(i\)th path, and \(i\in\{\rm g,a,h,s\}\) refers to different RATs including DA2G, A2A, HAP, and satellite communications, respectively. It also potentially reduces the delay, since only the packet that arrives earlier and is decoded correctly needs to be considered. Hence, the E2E delay of multi-RAT using the cloning scheme is calculated as [24] \[\mathcal{D}_{\rm E2E}=\min_{i=1,\ldots,N}\left\{\mathcal{D}_{\rm E2E}^{i} \right\}, \tag{6}\] where \(\mathcal{D}_{\mathrm{E2E}}^{i}\) is the E2E delay of the \(i\)th path. ## III Channel Models of Communication Links and Antennas Radiation Patterns To model a realistic propagation channel, we consider both large-scale fading and small-scale fading. ### _Large-Scale Fading_ #### Iii-A1 Path Loss of G2A Link We consider that the G2A link experiences LoS propagation with a probability of \(P_{\mathrm{LoS}}\), which is calculated as [25] \[P_{\mathrm{LoS}}=\prod_{j=0}^{k}\left[1-\exp\left(-\frac{\left[\hbar_{\mathrm{ g}}-\frac{\left(j+0.5\right)\left(R_{\mathrm{g}}-R_{\mathrm{a}}\right)}{k+1} \right]^{2}}{2\mathrm{q}_{3}^{2}}\right)\right], \tag{7}\] where \(k=\left\lfloor\frac{r_{\mathrm{ga}}\sqrt{\mathrm{q}_{1}\mathrm{q}_{2}}}{1000} -1\right\rfloor\), and \(r_{\mathrm{ga}}\) is the 2D distance between the ground BS and the AV, while \(\left\{\mathrm{q}_{1},\mathrm{q}_{2},\mathrm{q}_{3}\right\}\) are environment-dependent parameters set to \(\left\{0.3,500,20\right\}\) to model an urban scenario [25]. Moreover, \(\hbar_{\mathrm{g}}\) and \(\hbar_{\mathrm{a}}\) are the height of the ground BS and altitude of the AV, respectively. Thus, the average path loss of the G2A link is derived as \[PL_{\mathrm{ga}}=P_{\mathrm{LoS}}\times PL_{\mathrm{ga}}^{\mathrm{LoS}}+\left( 1-P_{\mathrm{LoS}}\right)\times PL_{\mathrm{ga}}^{\mathrm{NLoS}}, \tag{8}\] where \(PL_{\mathrm{ga}}^{\mathrm{LoS}}\) and \(PL_{\mathrm{ga}}^{\mathrm{NLoS}}\) are the path losses of the G2A channel under LoS and NLoS conditions, respectively. Based on the urban macro cells (UMa) scenario, \(PL_{\mathrm{ga}}^{\mathrm{LoS}}\) and \(PL_{\mathrm{ga}}^{\mathrm{NLoS}}\) are calculated as follows [26] \[PL_{\mathrm{ga}}^{\mathrm{LoS}}\left(\mathrm{dB}\right) =28+22\log_{10}\left(d_{\mathrm{ga}}\right)+20\log_{10}\left(f_{ \mathrm{c}}\right), \tag{9}\] \[PL_{\mathrm{ga}}^{\mathrm{NLoS}}\left(\mathrm{dB}\right) =-17.5+\left(46-7\log_{10}\left(\hbar_{\mathrm{a}}\right)\right) \log_{10}\left(d_{\mathrm{ga}}\right)\] \[+20\log_{10}\left(40\pi f_{\mathrm{c}}/3\right), \tag{10}\] where \(d_{\mathrm{ga}}\) is the 3D distance between the ground BS and the AV in meter, and \(f_{\mathrm{c}}\) is the carrier frequency in GHz. #### Iii-A2 Path Loss of A2A/H2A/S2A Link For A2A, H2A, and S2A links, the free space path loss (FSPL) channel model is used [27] \[\begin{split} PL_{\mathrm{xy}}\left(\mathrm{dB}\right)& =\textit{FSPL}(d_{\mathrm{xy}},f_{\mathrm{c}})\\ &=32.45+20\log_{10}\left(d_{\mathrm{xy}}\right)+20\log_{10}\left( f_{\mathrm{c}}\right),\end{split} \tag{11}\] where \(\mathrm{xy}\in\left\{\mathrm{aa},\mathrm{ha},\mathrm{sa}\right\}\) represents the A2A, H2A, and S2A link, respectively. \(d_{\mathrm{xy}}\) is the 3D distance between nodes x and y in meter, and \(f_{\mathrm{c}}\) is the carrier frequency in GHz. 
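A compact sketch of the G2A link budget of Eqs. (7)-(10) is given below. The environment parameters follow the quoted urban values, the \((R_{\mathrm{g}}-R_{\mathrm{a}})\) term in Eq. (7) is interpreted here as the height difference between the ground BS and the AV, and the function names are our own; this is an illustration of the model, not a reference implementation.

```python
import numpy as np

Q1, Q2, Q3 = 0.3, 500.0, 20.0     # urban environment parameters used in Eq. (7)

def p_los_g2a(r_2d, h_bs, h_av):
    """LoS probability of the G2A link, Eq. (7)."""
    k = int(np.floor(r_2d * np.sqrt(Q1 * Q2) / 1000.0 - 1.0))
    if k < 0:
        return 1.0                # no intervening obstacles in the product
    j = np.arange(k + 1)
    h_j = h_bs - (j + 0.5) * (h_bs - h_av) / (k + 1)
    return float(np.prod(1.0 - np.exp(-h_j**2 / (2.0 * Q3**2))))

def pl_g2a_db(d_3d, h_av, f_c_ghz, p_los):
    """Average G2A path loss in dB, Eqs. (8)-(10), UMa scenario."""
    pl_los = 28.0 + 22.0 * np.log10(d_3d) + 20.0 * np.log10(f_c_ghz)
    pl_nlos = (-17.5 + (46.0 - 7.0 * np.log10(h_av)) * np.log10(d_3d)
               + 20.0 * np.log10(40.0 * np.pi * f_c_ghz / 3.0))
    # Eq. (8) averages the LoS and NLoS terms weighted by the LoS probability
    return p_los * pl_los + (1.0 - p_los) * pl_nlos
```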
#### Iii-A3 Path Loss of G2H/G2S Links The path loss of the G2H/G2S link can be considered as the basic path loss model which accounts for the signal's FSPL, shadow fading (SF), and clutter loss (CL) [27] \[\textit{PL}_{\mathrm{xy}}\left(\mathrm{dB}\right)=\textit{FSPL}(d_{\mathrm{ xy}},f_{\mathrm{c}})+\textit{SF}+\textit{CL}\, \tag{12}\] where \(\mathrm{xy}\in\left\{\mathrm{gh},\mathrm{gs}\right\}\) represent the G2H link and the G2S link, respectively. SF is modeled by a log-normal distribution, i.e., \(\textit{SF}\sim N(0,\sigma_{SF}^{2})\). CL based on [27, Table 6.6.2-1] depends on the elevation angle between nodes x and y, the carrier frequency, and the environment. When there is LoS condition, CL is negligible and can be considered as 0 dB in the basic path loss model [27]. ### _Small-Scale Fading_ Due to the LoS path for all the mentioned links, small-scale channel fading between nodes x and y, i.e., \(\omega_{\mathrm{xy}}\), can be taken into account as the Rician model, where \(\mathrm{xy}\in\left\{\mathrm{ga},\mathrm{aa},\mathrm{gh},\mathrm{ha},\mathrm{ gs},\mathrm{sa}\right\}\). \[f_{\mathrm{\Omega}}\left(\omega_{\mathrm{xy}}\right)=\frac{\omega_{\mathrm{xy}} }{\sigma_{\mathrm{xy}}^{2}}\exp\left(\frac{-\omega_{\mathrm{xy}}^{2}-\rho_{ \mathrm{xy}}^{2}}{2\sigma_{\mathrm{xy}}^{2}}\right)I_{\mathrm{\Omega}}\left( \frac{\omega_{\mathrm{xy}}\rho_{\mathrm{xy}}}{\sigma_{\mathrm{xy}}^{2}} \right), \tag{13}\] Fig. 1: System model. Fig. 2: Illustration of multi-RAT and E2E communication paths. with \(\omega_{\rm xy}\geq 0\), and \(\rho_{\rm xy}\) and \(\sigma_{\rm xy}\) reflecting the strength of the LoS and the NLoS paths, respectively. \(I_{0}(.)\) denotes the modified Bessel function of the first kind and zero order. The Rice factor of X2Y link, \(K_{\rm xy}\), is defined as \[K_{\rm xy}({\rm dB})=10\log_{10}\left(\frac{\rho_{\rm xy}^{2}}{2\sigma_{\rm xy }^{2}}\right), \tag{14}\] which increases directly with different parameters such as altitude, elevation angle, and carrier frequency. The elevation angle plays a dominant role among the other factors [28]. ### _Antenna Gain_ We assume that all AVs are equipped with a single omni-directional antenna with unitary gain. However, we consider realistic antenna radiation patterns for the ground BSs and the HAP/satellite, which are given as follows. #### Iii-C1 Ground BS Antenna Pattern We assume that the ground BSs are equipped with a vertical, \(N_{\rm e}\)-element uniform linear array (ULA), where each element is omnidirectional in azimuth with a maximum gain of \(g_{\rm e}^{\rm max}\) and directivity as a function of the zenith angle \(\phi\)[25]: \[g_{\rm e}(\phi)=g_{\rm e}^{\rm max}\sin^{2}\phi. \tag{15}\] We assume that there is a half-wavelength spacing between the adjacent antenna elements. With a fixed downtilt angle \(\phi_{\rm t}\), the array factor of the ULA is given by [25] \[g_{\rm A}(\phi)=\frac{\sin^{2}\left(N_{\rm e}\pi\left(\cos\phi-\cos\phi_{\rm t }\right)/2\right)}{N_{\rm e}\sin^{2}\left(\pi\left(\cos\phi-\cos\phi_{\rm t} \right)/2\right)}. \tag{16}\] The total ground BS's antenna gain in linear scale is \[g_{\rm g}(\phi)=g_{\rm e}(\phi)\times g_{\rm A}(\phi). \tag{17}\] #### Iii-C2 HAP/Satellite Antenna Pattern For HAP and satellite, multi-beam antennas instead of uniform planar array antennas are considered, as uniform planar array configuration requires the design of a precoding matrix which is beyond the scope of this paper. It is assumed that each cell is served by one main beam [29, 30]. 
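Before turning to the HAP/satellite beam pattern, the ground-BS pattern of Eqs. (15)-(17) just described can be evaluated as in the sketch below; the element count and the element peak gain are illustrative defaults, not values prescribed by the text.

```python
import numpy as np

def ground_bs_gain(phi, phi_tilt, n_elem=8, g_e_max=1.0):
    """Ground-BS antenna gain of Eqs. (15)-(17): an N_e-element vertical ULA
    with half-wavelength spacing and a fixed downtilt.

    phi and phi_tilt are zenith angles in radians.
    """
    g_elem = g_e_max * np.sin(phi)**2                          # Eq. (15)
    psi = np.pi * (np.cos(phi) - np.cos(phi_tilt)) / 2.0
    # Eq. (16); in the limit psi -> 0 the array factor tends to N_e
    with np.errstate(divide="ignore", invalid="ignore"):
        g_arr = np.sin(n_elem * psi)**2 / (n_elem * np.sin(psi)**2)
    g_arr = np.where(np.isclose(np.sin(psi), 0.0), float(n_elem), g_arr)
    return g_elem * g_arr                                      # Eq. (17), linear scale
```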
The following normalized antenna x gain pattern of one beam, \({\rm x}\in\{{\rm h,s}\}\), corresponding to a typical reflector antenna with a circular aperture with a radius of 10 wavelengths, is considered [27] \[g_{\rm x}(\theta)=\left\{\begin{array}{ll}1,&\mbox{for $\theta=0$},\\ 4\left|\frac{J_{1}(20\pi\sin\theta)}{20\pi\sin\theta}\right|^{2},&\mbox{for $0<| \theta|\leq 90^{\circ}$},\end{array}\right. \tag{18}\] where \(\theta\) is the angle with respect to antenna boresight, and \(J_{1}(.)\) is the Bessel function of the first kind and first order. ### _SINR Calculation_ One may obtain the channel coefficient between any two nodes \({\rm x}\) and \({\rm y}\) as \[h_{\rm xy}=(\frac{g_{\rm xy}}{PL_{\rm xy}})^{1/2}\omega_{\rm xy}, \tag{19}\] where \(g_{\rm xy}\) is the total antenna gain between nodes \({\rm x}\) and \({\rm y}\) given by the product of their respective antenna gains. Finally, the SINR of X2Y link with bandwidth \(B^{\rm xy}\), \({\rm xy}\in\{{\rm ga,aa}\}\), is calculated as follows \[\gamma^{\rm xy}=\frac{p_{\rm x}|h_{\rm xy}|^{2}}{P_{\rm interf}\sum\limits_{i \in\mathcal{N}_{\rm t}}p_{{\rm x}_{i}}|h_{{\rm x}_{i}y}|^{2}+B^{\rm xy}N_{0}}, \tag{20}\] where \(p_{\rm x}\) is the transmit power of node \({\rm x}\), and \(N_{0}\) is the noise spectral density. \(\mathcal{N}_{\rm t}\) is the set of interfering nodes and, \(h_{{\rm x}_{i}{\rm y}}\) indicates the channel coefficient between the interfering node \({\rm x}_{i}\) and node \({\rm y}\). We assume that interference cancellation techniques can harness interference [31, 32, 33], and it can be explicitly captured by interference probability denoted by \(P_{\rm interf}\). It points out that the higher the interference cancellation, the lower the interference probability. Hence, the effect of interference power on the network is affected by \(P_{\rm interf}\) due to the fact that each potential interferer is modeled as a Bernoulli random variable with a probability of \(P_{\rm interf}\). We also assume that the G2H and the G2S links are interference-free, while the interference on H2A/S2A links is due to the side lobes of HAP/satellite's antenna overlapping with the main lobes [29, 30]. ## IV Reliability and Latency Analysis ### _Preliminaries_ #### Iv-A1 Transmission Analysis in the FBL Regime The achievable data rate of the X2Y link, \(R^{\rm xy}\), with FBL coding and an acceptable Block Error Rate (BLER) \(\varepsilon_{\rm t}^{\rm xy}\), \({\rm xy}\in\{{\rm ga,aa,gh,ha,gs,sa}\}\), has an approximation as [34] \[R^{\rm xy}\approx B^{\rm xy}\left(C^{\rm xy}-\sqrt{\frac{V^{\rm xy}}{B^{\rm xy }D_{\rm t}^{\rm xy}}}\frac{Q^{-1}(\varepsilon_{\rm t}^{\rm xy})}{\ln 2}\right)\mbox{bits/s}\, \tag{21}\] where \(C^{\rm xy}=\log_{2}(1+\gamma^{\rm xy})\) is the Shannon capacity and \(V^{\rm xy}=1-(1+\gamma^{\rm xy})^{-2}\) is the channel dispersion. Moreover, \(D_{\rm t}^{\rm xy}\) is the transmission delay of the X2Y link, and \(Q^{-1}(\cdot)\) refers to the inverse Gaussian Q-function \(Q(x)=\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}e^{-\frac{t^{2}}{2}}\,{\rm d}t\). In the FBL regime, decoding error probability is given by \[\varepsilon_{\rm t}^{\rm xy}\approx Q\left(f(\gamma^{\rm xy},R^{\rm xy},D_{\rm t }^{\rm xy})\right)\, \tag{22}\] where \[f(\gamma^{\rm xy},R^{\rm xy},D_{\rm t}^{\rm xy})\triangleq\frac{(B^{\rm xy}C^{ \rm xy}-R^{\rm xy})\ln 2}{\sqrt{B^{\rm xy}V^{\rm xy}/D_{\rm t}^{\rm xy}}}. 
\tag{23}\] When transmitting a packet that contains \(b\) bits over the allocated channel, the decoding error probability can be obtained by substituting \(D_{\rm t}^{\rm xy}=\frac{b}{R^{\rm xy}}\) into (22). The above expressions are for AWGN channels which contain no fading. Here, we can assume our channel as a quasi-static flat fading channel such that at each realization, its characteristics remain the same. By adopting ARQ scheme, the packet is retransmitted until it is received correctly, and we assume that there is a reliable feedback from the AV to the transmitter as in [35]. Hence, the average transmission delay of the X2Y link is calculated as \[\overline{D}_{\rm t}^{\rm xy}=\frac{D_{\rm t}^{\rm xy}}{1-\varepsilon_{\rm t}^{ \rm xy}}. \tag{24}\] 2 Queueing Analysis As stated in [34], the packet arrival process to the BS in MTC, which is an aggregation of packets generated by multiple sensors, can be modeled as a Poisson process. The event that each sensor at any given instant has a packet to upload or not is modeled as a Bernoulli process. The probability that sensor \(m\) has a packet to upload is denoted by \(P_{m}\). Then, the arrival process to the BS is defined as a Poisson process, because the sensors are independent. Since MTC is the connectivity type in our scenario, each remote pilot resembles a sensor that at any time instant may deliver a packet to the AV of interest via node \(\mathrm{x}\). Therefore, if assume that \(M_{\mathrm{x}}\) AVs are served by node \(\mathrm{x}\), where \(\mathrm{x}\in\{\mathrm{g},\mathrm{a},\mathrm{h},\mathrm{s}\}\) refers to ground BS, relay AV, HAP, and LEO satellite, respectively, the average total arrival rate to node \(\mathrm{x}\) is \(\lambda_{\mathrm{x}}=\sum_{m=1}^{M_{\mathrm{x}}}P_{m}\) packets/s. Denote the packet dropping probability due to queueing delay violation as \[\varepsilon_{\mathrm{q}}^{\mathrm{x}}=\Pr\left\{D_{\mathrm{q}}^{ \mathrm{x}}>D_{\mathrm{q,max}}\right\}, \tag{25}\] where \(D_{\mathrm{q}}^{\mathrm{x}}\) is the queue delay of node \(\mathrm{x}\), and \(\mathrm{x}\in\{\mathrm{g},\mathrm{a},\mathrm{h},\mathrm{s}\}\). As described above, the packet arrival process to node \(\mathrm{x}\) can be modeled as a Poisson process with the average arrival rate of \(\lambda_{\mathrm{x}}\) packets/s. Then, the effective bandwidth of node \(\mathrm{x}\), which is the minimal constant packet service rate required to satisfy the queueing delay requirement \((D_{\mathrm{q,max}},\varepsilon_{\mathrm{q}}^{\mathrm{x}})\) can be expressed as follows [34] \[E_{\mathrm{BW}}^{\mathrm{x}}=\frac{\ln\left(1/\varepsilon_{\mathrm{q}}^{ \mathrm{x}}\right)}{D_{\mathrm{q}}^{\mathrm{x}}\ln\left[\frac{\ln\left(1/ \varepsilon_{\mathrm{q}}^{\mathrm{x}}\right)}{\lambda_{\mathrm{x}}D_{\mathrm{ q}}^{\mathrm{x}}}+1\right]}\text{ packets/s.} \tag{26}\] ### _E2E Delay and Packet Loss Probability_ #### Iii-B1 E2E Path through DA2G Communication The E2E delay of DA2G path consists of delay due to backhaul link, \(D_{\mathrm{b}}\), queue delay in the ground BS, \(D_{\mathrm{q}}^{\mathrm{g}}\), and the average transmission delay of the G2A link, \(\overline{D_{\mathrm{t}}^{\mathrm{g}}}\). Hence, the E2E delay requirement can be satisfied with the following constraint \[D_{\mathrm{b}}+D_{\mathrm{q}}^{\mathrm{g}}+\overline{D_{\mathrm{t}}^{\mathrm{ g}}}\leq D^{\mathrm{th}}. \tag{27}\] By deploying fiber optic backhaul links, we assume that the backhaul delay for remote piloting is around 1 ms1. 
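The per-link quantities entering these E2E expressions, namely the FBL decoding error of (22)-(23), the ARQ-average transmission delay of (24), and the effective bandwidth of (26), can be computed as in the following sketch; the SINR, packet size, rate, bandwidth, and queueing parameters in the examples are illustrative only.

```python
import math

def fbl_error_prob(gamma, rate_bps, bw_hz, packet_bits):
    """FBL decoding error probability, Eqs. (22)-(23), with D_t = b / R substituted."""
    c = math.log2(1.0 + gamma)                  # Shannon capacity [bit/s/Hz]
    v = 1.0 - (1.0 + gamma) ** -2               # channel dispersion
    d_t = packet_bits / rate_bps                # transmission delay [s]
    f = (bw_hz * c - rate_bps) * math.log(2.0) / math.sqrt(bw_hz * v / d_t)
    return 0.5 * math.erfc(f / math.sqrt(2.0))  # Gaussian Q-function Q(f)

def avg_arq_delay(gamma, rate_bps, bw_hz, packet_bits):
    """Average transmission delay under ARQ retransmissions, Eq. (24)."""
    d_t = packet_bits / rate_bps
    return d_t / (1.0 - fbl_error_prob(gamma, rate_bps, bw_hz, packet_bits))

def effective_bandwidth(eps_q, d_q_max, arrival_rate):
    """Minimal constant service rate meeting the queueing requirement (D_q_max, eps_q), Eq. (26)."""
    log_term = math.log(1.0 / eps_q)
    return log_term / (d_q_max * math.log(log_term / (arrival_rate * d_q_max) + 1.0))

# Example: linear SINR of 2 (about 3 dB), 300 kbps over one 0.2 MHz RB, 400-bit packet.
print(fbl_error_prob(2.0, 3e5, 2e5, 400), avg_arq_delay(2.0, 3e5, 2e5, 400))
# Example queue: violation probability 1e-5, 1 ms delay bound, 1000 packets/s arrivals.
print(effective_bandwidth(1e-5, 1e-3, 1000.0))
```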
Footnote 1: This value of backhaul delay corresponds to the propagation delay in a path with a length of 300 km. Correspondingly, the overall packet loss probability is due to the backhaul failure, packet dropping in the ground BS's queue with a probability of \(\varepsilon_{\mathrm{t}}^{\mathrm{g}}\), and decoding error of the G2A link with a probability of \(\varepsilon_{\mathrm{t}}^{\mathrm{g}}\). Thus, reliability can be guaranteed if \[1-(1-\varepsilon_{\mathrm{b}})(1-\varepsilon_{\mathrm{q}}^{\mathrm{g}})(1- \varepsilon_{\mathrm{t}}^{\mathrm{g}})\leq\varepsilon^{\mathrm{th}}. \tag{28}\] \(\varepsilon_{\mathrm{b}}\) is the failure probability of backhaul link, which is modeled by a Bernoulli process, and \(1-\varepsilon^{\mathrm{th}}\) is the required reliability. #### Iii-B2 E2 Communication Path of JT CoMP Here, we consider a CoMP cluster, consisting of \(N\) ground BSs that are serving \(M\) AVs, where \(M\leq N\). The E2E delay requirement of JT CoMP with a centralized architecture, introduced in [36], is given by \[D_{\mathrm{b}}+D_{\mathrm{c}}+D_{\mathrm{q}}^{\mathrm{g}}+ \overline{D_{\mathrm{t}}^{\mathrm{JT}}}\leq D^{\mathrm{th}}, \tag{29}\] where \(D_{\mathrm{b}}\) as before is the backhaul delay from the core network to the serving ground BSs, and \[D_{\mathrm{c}}=\max_{n}\left\{D_{\mathrm{f}}^{\mathrm{g_{n}}}+D_ {\mathrm{b}}^{\mathrm{C}}+D_{\mathrm{b}}^{\mathrm{D}}\right\}, \tag{30}\] is the delay due to CoMP, cf. Fig. 3, consisting of the delay that AV \(m\), \(m\in\{1,\cdots,M\}\), feeds back its channel state information (CSI) to its serving BS \(n\), \(n\in\{1,\cdots,N\}\), i.e., \(D_{\mathrm{f}}^{\mathrm{g_{n}}}\), and the backhaul delay between ground BS \(n\) and the control unit (CU) when ground BS \(n\) forwards the local CSI to the CU, i.e., \(D_{\mathrm{b}}^{\mathrm{C}}\), and the backhaul delay between CU and ground BS \(n\) when the CU distributes precoded data to ground BS \(n\), i.e., \(D_{\mathrm{b}}^{\mathrm{D}}\). The feedback delay as in [37] is considered a fixed value of \(5\) ms, and we assume the backhaul delay between the ground BS and CU as \(D_{\mathrm{b}}^{\mathrm{C}}=D_{\mathrm{b}}^{\mathrm{D}}=0.1\) ms2. Moreover, \(\overline{D}_{\mathrm{t}}^{\mathrm{JT}}=\frac{D_{\mathrm{b}}^{\mathrm{g}}}{ 1-\varepsilon_{\mathrm{t}}^{\mathrm{g}}}\) is the transmission delay of JT CoMP. Footnote 2: This value of backhaul delay corresponds to the propagation delay in a distance of \(30\) km where BSs are connected with one-hop backhaul [34]. The overall packet loss probability of JT with a CoMP cluster size of \(N\) can be calculated as \[1-(1-\varepsilon_{\mathrm{b}})(1-\prod_{n=1}^{N}\varepsilon_{ \mathrm{c}}^{\mathrm{g_{n}}})(1-\prod_{n=1}^{N}\varepsilon_{\mathrm{q}}^{ \mathrm{g_{n}}})(1-\varepsilon_{\mathrm{t}}^{\mathrm{JT}})\leq\varepsilon^{ \mathrm{th}}, \tag{31}\] Fig. 3: Illustration of centralized CoMP architecture with cluster size of \(N=3\). where \(\varepsilon_{\rm b}^{\rm g_{n}}\) is the probability that ground BS \(n\) fails to cooperate in its CoMP cluster and is given by [36] \[\varepsilon_{\rm e}^{\rm g_{n}}=\varepsilon_{\rm b}^{\rm D}+(1-\varepsilon_{\rm b }^{\rm D})\prod_{n=1}^{N}(\varepsilon_{\rm b}^{\rm C}+(1-\varepsilon_{\rm b}^{ \rm C})\varepsilon_{\rm f}^{\rm g_{n}}). 
\tag{32}\] \(\varepsilon_{\rm b}^{\rm D}\) is the failure probability of the backhaul link between the CU and ground BS \(n\) when the CU transmits precoded data to ground BS \(n\), and \(\varepsilon_{\rm b}^{\rm C}\) is the failure probability of the backhaul link between ground BS \(n\) and the CU when ground BS \(n\) forwards the local CSI to the CU. \(\varepsilon_{\rm f}^{\rm g_{n}}\) is the link failure probability of the access link between AV \(m\) and ground BS \(n\), when the AV feeds back the CSI to ground BS \(n\). We suppose that the CSI feedback is error free, i.e., \(\varepsilon_{\rm f}^{\rm g_{n}}\approx 0\), so the channel coefficients between all the AVs and their serving ground BSs are perfectly known at the CU. Finally, \(\varepsilon_{\rm t}^{\rm JT}\) denotes the decoding error probability of JT CoMP and is calculated by \(\varepsilon_{\rm t}^{\rm JT}\approx Q(f(\gamma^{\rm JT},R^{\rm g_{n}},D_{\rm t }^{\rm g_{n}}))\), where \(\gamma^{\rm JT}\) is the SINR of AV \(m\) given by \[\gamma^{\rm JT}=\frac{p_{m}}{P_{\rm interf}\sum\limits_{i\in\mathcal{N}_{i}}p_{ i}\left|h_{i}\right|^{2}+B^{\rm g_{n}}N_{0}}. \tag{33}\] \(p_{m}\) denotes the symbol power allocated to AV \(m\) and based on equal power strategy is derived as [38] \[p_{m}=\frac{P_{\rm max}}{\max\left[\left.\mathbf{WW^{*}}\right]_{j,j}\right.}. \tag{34}\] \(\mathbf{W}\) is the zero-forcing precoding obtained as the pseudo-inverse of the channel matrix, \(\mathbf{H}\in\mathbb{C}^{M\times N}\), available at the CU, i.e., \(\mathbf{W}=\mathbf{H}^{*}(\mathbf{H}\mathbf{H}^{*})^{-1}\) where \((.)^{*}\) denotes the conjugate transpose. We assume disjoint CoMP clusters with inter-cluster interference, where \(p_{i}\) in (33) is the transmit power of interfering BS \(i\), with ground BS's power constraint \(P_{\rm max}\). As the worst case of the SINR we assume \(p_{i}=P_{\rm max}\). Since we assume perfect CSI at the CU, the intra-cluster interference due to serving other AVs in the same CoMP cluster is canceled by the zero-forcing precoding. #### Iv-B3 E2E Path through A2A Communication For the scenario of deploying an AV as a relay to transmit data to the AV of interest, the packet in addition to the DA2G communication path goes across relay AV's queue, with a delay of \(D_{\rm q}^{\rm a}\), and A2A link, with an average delay of \(\overline{D_{\rm t}^{\rm aa}}\). Hence, the delay components should satisfy \[D_{\rm b}+D_{\rm q}^{\rm g}+\overline{D_{\rm t}^{\rm aa}}+D_{\rm q}^{\rm a}+ \overline{D_{\rm t}^{\rm aa}}\leq D^{\rm th}. \tag{35}\] Correspondingly, the reliability of the A2A communication path can be ensured if \[1-(1-\varepsilon_{\rm b})(1-\varepsilon_{\rm q}^{\rm g})(1- \varepsilon_{\rm t}^{\rm g})(1-\varepsilon_{\rm q}^{\rm aa})(1-\varepsilon_{ \rm t}^{\rm aa})\leq\varepsilon^{\rm th}. \tag{36}\] If we consider a swarm of parallel coordinated AVs with single-hop transmission to serve the desired AV with joint decoding strategy, the E2E error probability and delay can be calculated by (5) and (6), respectively. In fact, it helps increase reliability by exploiting path diversity in the A2A link. #### Iv-B4 E2E Path through HAP Communication For HAP, long distances of G2H and H2A links cause propagation delay in addition to previous delay components. 
Therefore, the E2E delay requirement of HAP is satisfied if \[D_{\rm b}+D_{\rm q}^{\rm g}+\overline{D_{\rm t}^{\rm gh}}+D_{\rm p}^{\rm gh}+D _{\rm q}^{\rm h}+\overline{D_{\rm t}^{\rm aa}}+D_{\rm p}^{\rm ha}\leq D^{\rm th}, \tag{37}\] where \(D_{\rm p}^{\rm gh}\) and \(D_{\rm p}^{\rm ha}\) are the propagation delay of the G2H link and the H2A link, respectively. \(\overline{D}_{\rm t}^{\rm ha}\) denotes the average transmission delay of the H2A link. The overall packet loss probability of the HAP communication, similar to the A2A communication, can be computed as \[1-(1-\varepsilon_{\rm b})(1-\varepsilon_{\rm q}^{\rm g})(1- \varepsilon_{\rm t}^{\rm gh})(1-\varepsilon_{\rm q}^{\rm h})(1-\varepsilon_{ \rm t}^{\rm ha})\leq\varepsilon^{\rm th}. \tag{38}\] #### Iv-B5 E2E Path through LEO Satellite Communication The E2E delay constraint of LEO satellite path, similar to the HAP communication, is given by \[D_{\rm b}+D_{\rm q}^{\rm g}+\overline{D_{\rm t}^{\rm g}}+D_{\rm p}^{\rm gs}+D_{ \rm q}^{\rm a}+\overline{D_{\rm t}^{\rm aa}}+D_{\rm p}^{\rm aa}\leq D^{\rm th}. \tag{39}\] where \(D_{\rm p}^{\rm gs}\) and \(D_{\rm p}^{\rm sa}\) are the propagation delay of the G2S and S2A links, respectively. \(\overline{D}_{\rm t}^{\rm aa}\) denotes the average transmission delay of the S2A link. Due to movement of LEO satellite, in addition to the aforementioned factors, the reliability depends on the availability of LEO satellite links and can be guaranteed if \[\begin{split} 1-&(1-\varepsilon_{\rm b})(1- \varepsilon_{\rm q}^{\rm g})(1-\varepsilon_{\rm l}^{\rm gs})\\ &(1-\varepsilon_{\rm t}^{\rm gs})(1-\varepsilon_{\rm q}^{\rm a})( 1-\varepsilon_{\rm l}^{\rm aa})(1-\varepsilon_{\rm t}^{\rm aa})\leq\varepsilon^{ \rm th}.\end{split} \tag{40}\] \(\varepsilon_{\rm l}^{\rm sy}\), \({\rm xy}\in\{{\rm g},{\rm sa}\}\) is the unavailability probability of LEO satellite X2Y link, which is defined as \(1-P_{\rm vis}^{\rm sy}\). Here, we approximate the link availability probability with visibility probability which is given by [39] \[P_{\rm vis}^{\rm sy}=1-\left(1-\frac{d_{\rm max}^{\rm sy}\ {}^{2}-\hbar_{\rm s}^{2}}{4R_{\rm e} \left(R_{\rm e}+\hbar_{\rm s}\right)}\right)^{n_{\rm s}}, \tag{41}\] where \(d_{\rm max}^{\rm sy}\) is the maximum distance between nodes x and y at the minimum elevation angle \(\vartheta_{\rm min}\). Moreover, \(R_{\rm e}\) is the Earth radius, \(\hbar_{\rm s}\) and \(n_{\rm s}\) are altitude and the number of LEO satellites, respectively. ## V Best Multi-Connectivity Path Selection This section discusses the selection of an MC path that maximizes the E2E spectral efficiency (SE). As redundant connections increase spectrum usage, we focus on SE, the ratio between the effective E2E data rate and the total bandwidth allocated to the desired AV, to minimize the number of required links. ### _Spectrum Efficiency in a Multi-Hop Multi-Connectivity Scenario_ By adopting ARQ scheme, the achieved data rate of the X2Y link can be expressed as \[\hat{R}^{\rm oxy}=\frac{b}{\overline{D_{\rm t}^{\rm xy}}}\ \text{bits/s}, \tag{42}\] where \(b\) is the number of bits, and \(\overline{D}_{\mathrm{t}}^{\mathrm{sy}}\) which is calculated by (24) indicates the average transmission delay of the X2Y link. In a multi-hop path, the E2E data rate is reflected by the bottleneck link, i.e., the link with the minimum SINR [40]. Let \(\mathcal{F}_{i}\) be the set of links in path \(i\), hence \(\min\left\{\hat{R}^{\mathrm{sy}},\forall\mathrm{xy}\in\mathcal{F}_{i}\right\}\) is the bottleneck rate. 
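A small helper mirroring this description, i.e., the ARQ-effective link rate of (42), the per-path bottleneck rate, and the SE as the best effective E2E rate over the total allocated bandwidth (whose closed form is given next), might look as follows; the link rates and bandwidths in the toy example are ours.

```python
def arq_effective_rate(packet_bits, avg_tx_delay):
    """Achieved data rate of an X2Y link under ARQ, Eq. (42): packet size over the
    average transmission delay of Eq. (24)."""
    return packet_bits / avg_tx_delay

def bottleneck_rate(link_rates_bps):
    """E2E rate of a multi-hop path, limited by its weakest link."""
    return min(link_rates_bps)

def mc_spectral_efficiency(paths):
    """SE of an MC path: best bottleneck rate over its parallel multi-hop paths,
    divided by the total bandwidth spent on all involved links.
    `paths` is a list of paths, each a list of (rate_bps, bandwidth_hz) per link."""
    best = max(bottleneck_rate([r for r, _ in p]) for p in paths)
    total_bw = sum(bw for p in paths for _, bw in p)
    return best / total_bw

# Toy example: a direct G2A path and a two-hop G2A -> A2A path, 0.8 MHz per link.
direct = [(6e5, 8e5)]
relayed = [(9e5, 8e5), (4e5, 8e5)]
print(mc_spectral_efficiency([direct, relayed]))   # bps/Hz
```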
On the other hand, redundant connections through MC can lead to an increase in the effective data rate by considering the multi-hop path with the maximum E2E data rate, so as \(\max\left\{\min\left\{\hat{R}^{\mathrm{xy}},\forall\mathrm{xy}\in\mathcal{F}_ {i}\right\},\forall i\in\mathcal{G}_{j}\right\}\) where \(\mathcal{G}_{j}\) is the set of paths constituting the MC path \(j\)[41]. Consequently, the SE of MC path \(j\) is calculated as \[\mathit{SE}_{j}=\frac{\max\left\{\min\left\{\hat{R}^{\mathrm{xy}},\forall \mathrm{xy}\in\mathcal{F}_{i}\right\},\forall i\in\mathcal{G}_{j}\right\}}{ \sum_{\forall i\in\mathcal{G}_{j}}\sum_{\forall\mathrm{xy}\in\mathcal{F}_{i}}B ^{\mathrm{xy}}}\ \text{bps/Hz}. \tag{43}\] ### _Optimization Problem_ To select the best MC path with the minimum required connectivity links that ensure the safe remote piloting operation of BVLoS at a certain level, we formulate the SE maximization problem under the constraints of E2E reliability, E2E delay, and network availability as \[\max_{j\in\mathcal{H}}\ \sum_{j=1}^{|\mathcal{H}|}\alpha_{j} \mathit{SE}_{j}\] (44a) \[\mathrm{s.t.}\ \ \ \mathcal{E}_{\mathrm{E2E}}^{j}\leq\varepsilon^{ \mathrm{th}},\ \forall j\in\mathcal{H},\] (44b) \[\ \ located randomly with uniform distribution at a fixed altitude over the considered cells. We employ a swarm of at most \(3\) coordinated AVs, and \(6\) of AVs are interfering with the AV of interest. The location of the desired AV's serving BS and the HAP / LEO satellite projection on the ground is assumed at the origin. The horizontal distance of the HAP (LEO satellite) and its ground station is set as \(5\) (\(300\)) km. Altitude and number of LEO satellites in Table I are assumed based on Starlink constellation. In [28], the Rician \(K\)-factor was found to increase exponentially with elevation angle between two nodes. Here for simplicity, we assume that the Rician factor of each link increases linearly with the elevation angle. The elevation angles are considered from \(0^{\circ}\) to \(90^{\circ}\) with a \(10^{\circ}\) step, and the Rice factor is assumed to be constant in each interval. The experiments are provided to assess the reliability and network availability of different E2E paths and their parallel combinations for remote piloting of eVTOLs and investigate how we can achieve high E2E reliability and low E2E latency by MC along with adjusting system parameters such as data rate, bandwidth, CoMP cluster size, and interference level. ### _Performance Analysis of Single E2E Paths_ First, we analyse the E2E delay of different RATs. In Fig. 4, we draw the complementary cumulative distribution function (CCDF) of different paths' E2E delay, meaning the probability that E2E delay is greater than abscissa. It is observed that the least E2E delay is provided by DA2G and then by HAP communication. While JT CoMP results in more E2E delay due to coordination among ground BSs. Finally, LEO satellite, in both S-band and Ka-band, incur the most E2E latency owing to the large propagation delay. The solid lines that indicate the CCDF of E2E delay with the lowest required data rate of remote piloting, i.e., \(250\) kbps, reveal that DA2G, HAP, CoMP, and LEO Satellite with high probability result in E2E latency of \(\sim\)\(3\) ms, \(\sim\)\(5\) ms, \(\sim\)\(8\) ms, and \(\sim\)\(10\) ms, respectively. For the highest required data rate, i.e., \(1\) Mbps, since the transmission delay decreases, the dash lines indicate improvement of the E2E delay performance of different paths. 
It is observed that E2E delay of LEO satellite always exceeds the minimum threshold which is \(10\) ms. We note that the horizontal asymptote of latency CCDF is equal to the packet drop probability, according to the delay-based reliability in (2). It is clearly seen in Fig. 4 that there is a trade-off between the latency and reliability. It is observed that with increasing data rate from \(250\) kbps to \(1\) Mbps, the packet drop probability of DA2G (JT CoMP) increases from \(\sim\)\(0.2\) (\(\sim\)\(0.1\)) to \(\sim\)\(0.5\) (\(\sim\)\(0.3\)). For HAP (LEO satellite), it grows dramatically from \(\sim\)\(0\) (\(\sim\)\(0.01\)) to \(\sim\)\(0.03\) (\(\sim\)\(0.8\)). In Fig. 5(a), we show the relationship between two reliability definitions in Section II-B, i.e., error-based reliability in (1) and delay-based reliability in (2). It is observed that these two definitions are almost similar in high data rates that decoding error is the dominant factor of packet dropping. Moreover, delay-based reliability depends on \(D^{\rm th}\), such that the lower the delay threshold, the higher the packet drop probability. On the opposite, in low data rates that transmission delay increases and so E2E delay exceeds \(D^{\rm th}\), while decoding error Fig. 4: Comparison of CCDF of E2E delay in different RATs. The solid lines and the dash lines represent the CCDF with data rate of \(250\) kbps and \(1\) Mbps, respectively. Fig. 5: Comparison of (a) reliability and (b) network availability based on two definitions in different RATs. probability is low, the gap between the two definitions is huge. About DA2G and HAP communications, when \(D^{\rm th}=10\) ms, for data rates lower than \(\sim\)\(40\) kbps and \(\sim\)\(70\) kbps, respectively, which E2E delay is more than threshold, the performance gap grows. For LEO satellite, based on the previous results in Fig. 4, since E2E delay always exceeds \(10\) ms, the packet drop probability with \(D^{\rm th}=10\) ms is \(1\). It is observed that for certain data rate intervals in HAP/satellite communication, there is no value for delay-based reliability. Because in none of \(10\) million realizations of the experiment, E2E delay did not exceed the desired threshold. Fig. 5(b) indicates the relationship of the conventional network availability without delay constraint, i.e. error-based availability defined in (3), and with delay constraint, named delay-aware availability as in (4). It is obvious that they are equivalent unless E2E delay violates \(D^{\rm th}\). Hence, with strict delay threshold of \(10\) ms, as previous graph, in low data rates a gap arises between the two definitions. Furthermore, from Fig. 5(a), it is realized that in the range of desired data rates from \(0.25\sim 1\) Mbps, the reliability of DA2G and LEO satellite communication is not higher than \(\sim\)\(0.8\) and \(\sim\)\(0.99\), respectively, which are in accordance with the results in Fig. 4. At the same time, based on Fig. 5(b), their network availability is less than \(\sim\)\(0.8\) and \(\sim\)\(0.9\), respectively, which is not acceptable for the C2 application. On the other hand, HAP communication is the most reliable and available path that can satisfy the target reliability of \(0.99999\) with network availability as high as \(\sim\)\(0.9999\) just up to \(\sim\)\(300\) kbps. The empirical results verify that single path can not satisfy the stringent requirements, individually. 
Thus, in the following subsections, we evaluate the key performance metrics, i.e., reliability and network availability of multi-path connectivity by equations (1) and (4) with respect to some system parameters. ### _Impact of Data Rate on MC Performance_ Fig. 6 shows the overall error probability and network unavailability of different multi-path connectivity with respect to the data rate when the AVs' allocated bandwidth, \(B^{\rm xy}\), \({\rm xy}\in\{{\rm ga,aa,ha,sa}\}\), is \(0.8\) MHz. CoMP cluster size and probability of interference are set as \(3\) and \(0.05\), respectively. Fig. 6 depicts the performance gain of multiple communication paths connectivity with DA2G / JT CoMP as a master connectivity. It is observed that for the minimum required data rate of \(250\) kbps, the reliability of "DA2G + 3-A2A" and "DA2G + Sat-S/Ka" schemes is \(\sim\)\(0.99\), and their network availability is \(\sim\)\(0.97\) and \(\sim\)\(0.93\), respectively, which shows improvement compared to the single RAT transmission. Furthermore, "DA2G + HAP" and "DA2G + 3-A2A + HAP" schemes improve the target reliability of \(0.99999\) with network availability of \(\sim\)\(0.999\) up to \(\sim\)\(400\) kbps and \(\sim\)\(500\) kbps data rates, respectively. Additionally, it is shown that JT CoMP improves the reliability and network availability compared with DA2G communication because of combating the inter-cell interference by cooperation among ground BSs. The results show the cooperation of \(3\) adjacent ground BSs. For further improvements we can increase the CoMP cluster size, as its effect is investigated in the next subsection. ### _Impact of CoMP Cluster Size_ In Fig. 7, we investigate how the CoMP cluster size affects the reliability and network availability, when data rate and AV's allocated bandwidth are \(500\) kbps and \(0.8\) MHz, respectively, and \(P_{\rm interf}=0.05\). As shown in Fig. 7, the reliability and availability can be improved by increasing CoMP cluster size. In this figure, CoMP cluster size of \(1\) is equivalent to DA2G communication. The performance gap between the cluster size of \(1\) and \(2\), i.e., adopting DA2G or JT CoMP, is notable, especially when A2A links via JT CoMP are considered as the auxiliary communication path, such as "CoMP + 3-A2A", "CoMP + 3-A2A + Sat-S/Ka", and "CoMP + 3-A2A + HAP" schemes. Thus, utilizing JT CoMP along with A2A links and increasing CoMP cluster size can be a promising approach to achieve the target reliability and network availability. As it is observed, "CoMP + 3-A2A + HAP" scheme with cluster size of at least \(3\) can achieve the required reliability in the evaluated scenario. Fig. 6: (a) Reliability and (b) network availability performance of multi-path connectivity vs. data rate. ### _Effect of the Bandwidth Allocation_ The relation between the performance metrics, i.e., the reliability and network availability, and the AV's allocated bandwidth is illustrated in Fig. 8. The bandwidth of one RB is \(0.2\) MHz, and the total bandwidth allocated to each AV does not exceed the coherence bandwidth of \(1.2\) MHz. So, at most \(6\) consecutive RBs can be assigned to each AV. Unlike single paths of DA2G, CoMP, and satellite communication which seems not to achieve significant improvement in reliability and availability with respect to the allocated bandwidth, multi-path connectivity and especially HAP benefit significantly from this aspect. 
It is observed that HAP communication can individually achieve the target reliability of \(0.99999\), and availability of \(\sim\)\(0.999\) with allocating \(6\) RBs, while both of these values are less than \(\sim\)\(0.9\) with assigning \(1\) RB. ### _Effect of Interference_ In Fig. 9, we examine the effect of interference on the performance of different links and MC schemes, when data rate, bandwidth, and CoMP cluster size are \(500\) kbps, \(0.8\) MHz, and \(3\), respectively. For each RAT, we assume particular frequency band with full frequency reuse such that each X2Y link, \(\mathrm{xy}\in\{\mathrm{ga,aa,ha,sa}\}\), incurs interference with probability of \(P_{\mathrm{interf}}\) from all the corresponding links. The results in Fig. 9(a) show that DA2G and A2A links are highly interference limited due to LoS paths even in very low probabilities such as \(0.001\). Additionally, satellite S/Ka-band's performance becomes rapidly saturated with interference. As an example, by increasing the probability of interference from \(0.001\) to \(0.2\) the reliability of "DA2G + Sat-S/Ka" and "CoMP + Sat-S/Ka" schemes degrades from higher than 6-times (\(1-10^{-6}\)) to \(\sim\)\(0.2\) and \(\sim\)\(0.5\), respectively. Finally, HAP's performance diminishes gradually from higher than 6-times to \(\sim\)\(0.93\) and \(\sim\)\(0.96\), in "DA2G + HAP" and "CoMP + HAP" schemes, respectively, by increasing the interference probability from \(0.001\) to \(0.2\). Moreover, it is observed that with probability of interference greater than \(0.03\), none of the considered multiple paths can provide the target reliability of \(0.99999\) and network availability higher than \(0.9999\) in the evaluated scenario. Fig. 8: (a) Reliability and (b) network availability performance vs. AV’s allocated bandwidth. Fig. 7: (a) Reliability and (b) network availability performance vs. CoMP cluster size. ### _Best MC Path Selection_ In Fig. 10, we determine the optimal MC path with the minimum required links that fulfill the E2E delay of \(20\) ms under diverse E2E reliability and network availability requirements. The amount of data rate and the allocated bandwidth of different links are considered as \(500\) kbps and \(0.8\) MHz, respectively. The probability of interference and CoMP cluster size are \(0.05\) and \(3\), respectively. From this graph, it is observed that HAP communication solely is enough to fulfill the E2E reliability of \(0.9999\) with the target network availability of \(0.9\). Also, it can guarantee the reliability of \(0.99\) with the target network availability of \(0.99\). For higher reliability and/or network availability requirements, the optimum MC scheme demands more number of multiple paths. It is observed that a combination of all the RATs, i.e., "CoMP + 3-A2A + HAP + Sat-Ka", is able to ensure the target reliability of \(0.99999\) under the network availability of \(0.99\). Furthermore, it is observed that there are some specific cases that there is no MC path in the experiment to guarantee the service requirements. Such high reliability and network availability demand other investigations of design parameters such as bandwidth, CoMP cluster size, and effective interference mitigation techniques. ## VII Conclusion In this paper, we have studied the beyond visual line-of-sight (BVLoS) of remote piloting an aerial vehicle (AV) in finite blocklength (FBL) regime with multi-connectivity (MC) under practical antenna configurations. 
To this end, we have integrated multi radio access technologies (RATs) including direct air-to-ground (DA2G), air-to-air (A2A), high altitude platform (HAP), and low Earth orbit (LEO) satellite communications. A major challenge of DA2G communication is the management of severe line-of-sight (LoS) interference. Coordinated multi-point (CoMP) in joint transmission (JT) mode is a well known technique to overcome inter-cell interference, since base stations (BSs) cooperatively process signals. Hence, we exploit JT CoMP to improve the performance gain. Overall packet loss probability and end-to-end (E2E) latency are characterized as functions of the system parameters such as required data rate, AV's allocated bandwidth, CoMP cluster size, probability of interference, and backhaul failure probability. We evaluate the reliability, delay, and network availability of multiple communication path connectivity for command and control (C2) link. We have shown that the overall performance of different links under practical antenna settings is highly limited due to the LoS interference. Moreover, we have demonstrated that even with interference mitigation techniques, such as JT CoMP, MC is a key enabler for safe operation of a special type of AVs, i.e, electric vertical take-off and landing vehicles (eVTOLs). Moreover, we explored different MC options in order to figure out how to adjust system parameters to provide the quality of service requirements of the mission-critical scenario. Finally, we solved an optimization problem to select the best MC path under the service requirements constraints. We maximized spectral efficiency (SE) to specify the optimum MC path with the minimum number of required links and alleviate the spectrum usage of the MC scheme. As future work, we will investigate new approaches to fulfill higher Fig. 10: Best MC path with the minimum required links for different reliability and network availability demands. Fig. 9: (a) Reliability and (b) network availability performance vs. probability of interference. service requirements. Moreover, the effect of mobility of AVs and the blocking probability of wireless channels by clouds and rain will be studied.
2304.11350
Romanian Multiword Expression Detection Using Multilingual Adversarial Training and Lateral Inhibition
Multiword expressions are a key ingredient for developing large-scale and linguistically sound natural language processing technology. This paper describes our improvements in automatically identifying Romanian multiword expressions on the corpus released for the PARSEME v1.2 shared task. Our approach assumes a multilingual perspective based on the recently introduced lateral inhibition layer and adversarial training to boost the performance of the employed multilingual language models. With the help of these two methods, we improve the F1-score of XLM-RoBERTa by approximately 2.7% on unseen multiword expressions, the main task of the PARSEME 1.2 edition. In addition, our results can be considered SOTA performance, as they outperform the previous results on Romanian obtained by the participants in this competition.
Andrei-Marius Avram, Verginica Barbu Mititelu, Dumitru-Clementin Cercel
2023-04-22T09:10:49Z
http://arxiv.org/abs/2304.11350v2
Romanian Multiword Expression Detection Using Multilingual Adversarial Training and Lateral Inhibition ###### Abstract Multiword expressions are a key ingredient for developing large-scale and linguistically sound natural language processing technology. This paper describes our improvements in automatically identifying Romanian multiword expressions on the corpus released for the PARSEME v1.2 shared task. Our approach assumes a multilingual perspective based on the recently introduced lateral inhibition layer and adversarial training to boost the performance of the employed multilingual language models. With the help of these two methods, we improve the F1-score of XLM-RoBERTa by approximately 2.7% on unseen multiword expressions, the main task of the PARSEME 1.2 edition. In addition, our results can be considered SOTA performance, as they outperform the previous results on Romanian obtained by the participants in this competition. ## 1 Introduction The correct identification and handling of multiword expressions (MWEs) are important for various natural language processing (NLP) applications, such as machine translation, text classification, or information retrieval. For example, in machine translation, if an MWE is not recognized as such and is literally translated rather than as an expression, the resulting translation either is confusing or has the wrong meaning Zaninello and Birch (2020). In text classification, MWEs recognition can provide important information about the topic or sentiment of a text Catone et al. (2019), while in information retrieval, MWEs can clarify the meaning of a query and improve the accuracy of search results Englmeier and Contreras (2021). The PARSEME COST Action1 organized three editions Savary et al. (2017); Ramisch et al. (2018, 2020) of a shared task that aimed at improving the identification of verbal MWEs (VMWEs) in text. This work improves the results obtained in PARSEME 1.2 Ramisch et al. (2020) for the Romanian language. We investigate the advantages of using Romanian monolingual Transformer-based Vaswani et al. (2017) language models together with merging all the datasets for each language presented at the competition in a single corpus and then fine-tuning several multilingual language models on it. Additionally, for the latter, we aim to enhance the overall system's performance by generating language-independent features, with the help of two techniques, namely the lateral inhibition layer Pais (2022) on top of the language models and adversarial training Lowd and Meek (2005) between languages. Footnote 1: [https://typo.uni-konstanz.de/parseme/](https://typo.uni-konstanz.de/parseme/). Our experiments show that by employing these two algorithms, the results of the cross-lingual robustly optimized BERT approach (XLM-RoBERTa) Conneau et al. (2020) improve by 2.7% on unseen MWEs when trained on the combined dataset. Additionally, we report state-of-the-art (SOTA) results with the monolingual training of Romanian Bidirectional Encoder Representations from Transformer (RoBERT) Dumitrescu et al. (2020) in comparison with the results obtained at the PARSEME 1.2 edition, achieving an F1-score of 60.46%, an improvement of over 20%. ## 2 Dataset The PARSEME multilingual corpus was annotated with several types of VMWEs, to serve as training and testing material for the shared task. The quality of the manual annotation was further enhanced by a semi-automatic way of ensuring annotation consistency. 
For edition 1.2, the corpus contained 14 languages: Basque, Chinese, French, German, Hebrew, Hindi, Irish, Italian, Modern Greek, Polish, Portuguese, Romanian, Swedish, and Turkish. The types of VMWEs (i.e., universal, quasi-universal, and language-specific types) annotated therein are described in the annotation guidelines2. The types of VMWEs annotated for Romanian are as follows: VID (verbal idiom) like "fura somnul" (eng., "steal sleep-the", "fall asleep"), LVC.full (light verb construction with a semantically bleached verb) like "da citire" (eng., "give reading", "read"), LVC.cause (light verb construction in which the verb has a causative meaning) like "da foc" (eng., "give fire", "put on fire"), and IRV (inherently reflexive verb) like "se gandi" (eng., "Refl.Cl. think", "think"). Footnote 2: [https://parsemerf.lis-lab.fr/parseme-st-guidelines/1.2/](https://parsemerf.lis-lab.fr/parseme-st-guidelines/1.2/). The whole corpus version 1.2 contains 5.5 million tokens with 68k VMWEs annotations, split into train, dev, and test sets, on the one hand for controlling the distribution of unseen VMWEs both in dev with respect to test and in test with respect to train+dev, and on the other hand in ensuring a sufficient number of unseen VMWEs in the test set for each language. The Romanian training corpus contains 195k tokens in which 1,218 VMWEs are annotated. The Romanian dev set contains 134,340 tokens and 818 annotated VMWEs; the Romanian test set includes 685,566 tokens and 4,135 annotated VMWEs. The frequency of occurrence of VMWEs in Romanian ranges from 8% (for LVC.full) to 22% (for LVC.cause), with an average of 12%, thus being quite redundant (Barbu Miitelu et al., 2019). ## 3 System Description ### Monolingual Training We experiment with four BERT-based models (first two monolingual and last two multilingual) for MWE identification using only the Romanian part of the PARSEME 1.2 corpus, namely the RoBERT, the Distilled Romanian BERT (Distil-RoBERT) (Avram et al., 2022), the multilingual BERT (M-BERT) (Kenton and Toutanova, 2019), and the XLM-RoBERTa (Conneau et al., 2020). We follow the standard sequence tagging procedure described in the original BERT model and fine-tune the embeddings produced by the last layer for the input tokens to predict the corresponding MWE labels using a feed-forward layer. ### Multilingual Training Our second and principal line of work here combines all the training sets of the corpora. Therefore, we train the two multilingual language models on the resulting dataset and then evaluate the models on the Romanian test set of the PARSEME 1.2 shared task. In addition, we improve the performance of the system by forcing the embeddings of the respective language models to depend less on their source language and more on the semantic specificities of an MWE using a lateral inhibition layer and adversarial training. The general architecture of our multilingual training methodology is depicted in Figure 1. It is divided into three major components: a multilingual BERT model that acts as a feature extractor \(F\) and produces the embeddings of the tokens, a classifier \(C\) whose role is to identify the MWEs in the given texts, and a language discriminator \(LG\) whose role is to recognize the language of the input. We employ the lateral inhibition layer before feeding the embeddings to \(C\) and adversarially train \(LG\) by reversing its gradient before backpropagating through \(F\). Further details on these two methods are given below. 
### Lateral Inhibition The neural inhibitory layer, modelled after the biological process of lateral inhibition in the brain, has been successfully used for the named entity recognition (NER) task in the past (Pais, 2022; Avram et al., 2022; Mitrofan and Pais, 2022). We envisage that since the terms recognised by NER are just a subset of the MWEs identification, both being grounded in sequence tagging, introducing this layer into our model would also bring improvements in the final performance of our system. However, in the previous work, the neural inhibitory layer was mainly used to enhance the quality of the extracted named entities. In contrast, in this work, we employ it to achieve language-independent embeddings out of the multilingual transformer models. The main idea behind the lateral inhibitory layer is quite simple. Given the embeddings \(X\) produced by a language model and a weight matrix \(W\) with a bias \(b\), the output \(Y\) of this layer is described in the following formula: \[Y=X*Diag(H(X*ZeroDiag(W^{T})+b)) \tag{1}\] where \(Diag\) is a function that creates a matrix whose main diagonal is the vector given as input, \(ZeroDiag\) is a function that sets a given matrix with the zero value on the main diagonal, and \(H\) is the Heaviside step function. Equation 1 works well for the forward pass. However, since the Heaviside step function is not differentiable, the lateral inhibition layer approximates the respective gradients with the gradients of the parameterized Sigmoid function (Wunderlich and Pehle, 2021), a technique known as surrogate gradient learning (Nefci et al., 2019). ### Adversarial Training Adversarial training of neural networks has been a highly influential area of research in recent years, particularly in fields such as computer vision with generative unsupervised models (Gui et al., 2021). Adversarial training has also been used to train predictive models (Zhao et al., 2022), and in recent research, both multilingual and cross-lingual adversarial neural networks were introduced (Hu et al., 2019; Guzman-Nateras et al., 2022). These networks are designed to learn discriminative representations that are invariant to language. In this study, we utilize the same methodology to learn task-specific representations in a multilingual setting, trying to improve the predictive capabilities of the employed multilingual transformer models. Our methodology closely follows the domain adversarial neural network algorithm (DANN) (Ganin et al., 2016), the difference here being that instead of reversing the gradient to create domain-independent features, we reverse it to generate language-independent embeddings out of the multilingual transformer models. As is the case for our system, DANN has in its composition a feature extractor \(F\), a label classifier \(C\), and a domain classifier \(D\) that is replaced in our work with a language classifier \(LG\). 
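A minimal PyTorch-style sketch of these two ingredients, the lateral inhibition layer of Equation 1 with its surrogate gradient and the gradient reversal applied to the language discriminator, is given below; the sigmoid temperature, the initialization, and the module names are our own choices rather than the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class SurrogateHeaviside(torch.autograd.Function):
    """Heaviside step in the forward pass; gradient of a scaled Sigmoid in the
    backward pass (surrogate gradient learning), as used for Equation 1."""
    SCALE = 10.0  # sigmoid temperature, our choice

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        s = torch.sigmoid(SurrogateHeaviside.SCALE * x)
        return grad_output * SurrogateHeaviside.SCALE * s * (1.0 - s)

class LateralInhibition(nn.Module):
    """Equation 1: each feature of a token embedding is kept or zeroed depending on a
    gate computed from the other features (the diagonal of W^T is zeroed out)."""
    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim, dim) * 0.02)
        self.bias = nn.Parameter(torch.zeros(dim))

    def forward(self, x):                                     # x: (..., dim)
        w_t = self.weight.t()
        w_t = w_t - torch.diag(torch.diagonal(w_t))           # ZeroDiag(W^T)
        gate = SurrogateHeaviside.apply(x @ w_t + self.bias)  # H(X ZeroDiag(W^T) + b)
        return x * gate                                       # X * Diag(...)

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reversed, scaled gradient in the backward pass, so
    that the feature extractor F is pushed to confuse the language discriminator LG."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

# Usage: encoder embeddings go through lateral inhibition before the MWE classifier,
# and through gradient reversal before the language discriminator.
h = torch.randn(2, 16, 768, requires_grad=True)   # (batch, tokens, hidden)
print(LateralInhibition(768)(h).shape, GradReverse.apply(h, 0.1).shape)
```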
Thus, the gradient computation of each component can be formalized in the following equations: \[\begin{split}\theta_{C}=\theta_{C}-\alpha\frac{\partial L_{y}}{ \partial\theta_{C}}\\ \theta_{LG}=\theta_{LG}-\alpha\frac{\partial L_{lg}}{\partial \theta_{LG}}\\ \theta_{F}=\theta_{F}-\alpha(\frac{\partial L_{y}}{\partial\theta _{F}}-\lambda\frac{\partial L_{lg}}{\partial\theta_{F}})\end{split} \tag{2}\] where \(\theta_{C}\) are the parameters of the label classifier, \(L_{y}\) is the loss obtained by the label classifier when predicting the class labels \(y\), \(\theta_{LG}\) are the parameters of the language classifier, \(L_{lg}\) is the loss obtained by the language classifier when predicting the language labels \(d\), \(\theta_{F}\) are the parameters of the feature extractor, \(\lambda\) is the hyperparameter used to reverse the gradients, and \(\alpha\) is the learning rate. ## 4 Results ### Monolingual Training Table 1 shows the results of our monolingual training. We report both the overall scores (called global MWE) and the scores of the identified MWEs that do not appear in the training set (called unseen Figure 1: The multilingual training architecture. We use a multilingual BERT-based model to extract the embeddings from the input tokens (green). All these embeddings are fed into a classifier with a lateral inhibition layer to predict the MWE labels (blue) and into an adversarially trained language discriminator (orange). The block arrow depicts the forward pass, and the dotted arrow the backward pass. MWE), as well as the results of the best overall system (MTLB-STRUCT) Taslimipoor et al. (2020) and the results of the best system on Romanian (TRAVIS-mono) Kurfali (2020). All our monolingual models outperform the MTLB-STRUCT and TRAVIS-mono systems by more than 8% on unseen MWE, with RoBERT achieving an improvement of more than 20%. We believe that this is due to the more intensive hyperparameter search that we performed and the text preprocessing which consisted of things like replacing the letters with diacritics in Romanian to the standard used in pretraining or making sure that the tokenizer produces cased subtokens3. Footnote 3: These text preprocessing techniques are suggested at [https://github.com/dumitrescustefan/Romanian-Transformers](https://github.com/dumitrescustefan/Romanian-Transformers). Both the highest global MWE and unseen MWE performance were achieved by the monolingual RoBERT model, with F1-scores of 92.21% and 60.56%, respectively. The second highest performance was obtained by the XLM-RoBERTa model, although it is a multilingual model. Thus, XLM-RoBERTa outperformed the other monolingual model, Distil-RoBERT, by 2.1% on global MWE and 7% on unseen MWE. This phenomenon has also been noticed by Conneau et al. (2020), showing the raw power of multilingual models pre-trained on a large amount of textual data. ### Multilingual Training Table 2 shows the results for the multilingual training of both M-BERT and XLM-RoBERTa. As in the monolingual training case, XLM-RoBERTa achieves better performance, coming out on top with an F1-score of 58.16% in comparison with the 48.99% F1-score obtained by M-BERT. We also notice that the simple multilingual training (i.e., without lateral inhibition and adversarial training) improves the results of the two models when trained on the monolingual Romanian set. The adversarial training improves the performance of both M-BERT and XLM-RoBERTa in multilingual training. 
At the same time, the lateral inhibition layer brought improvements only to the later when it was combined with adversarial training. Thus, by merging the two methodologies, we outperform the XLM-RoBERTa's results trained \begin{table} \begin{tabular}{|l|c c c|c c c|} \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{**Global MWE**} & \multicolumn{3}{c|}{**Unseen MWE**} \\ & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline MTLB-STRUCT & 89.88 & 91.05 & 90.46 & 28.84 & 41.47 & 34.02 \\ TRAVIS-mono & **90.80** & 91.39 & 91.09 & 33.05 & 51.51 & 40.26 \\ \hline RoBERT & 90.73 & **93.74** & **92.21** & **52.97** & **70.69** & **60.56** \\ Distil-RoBERT & 87.56 & 90.40 & 88.96 & 41.06 & 62.77 & 49.65 \\ M-BERT & 90.39 & 90.11 & 90.25 & 46.82 & 51.09 & 48.86 \\ XLM-RoBERTa & 90.72 & 91.46 & 91.09 & 51.54 & 62.77 & 56.61 \\ \hline \end{tabular} \end{table} Table 1: The results of the models trained on the monolingual Romanian set. \begin{table} \begin{tabular}{|l|c c c|c c c|} \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{**Global MWE**} & \multicolumn{3}{c|}{**Unseen MWE**} \\ & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline M-BERT & **91.34** & 88.46 & **89.88** & **49.90** & 48.12 & 48.99 \\ M-BERT + LI & 90.78 & 88.85 & 89.81 & 45.06 & 45.15 & 45.10 \\ M-BERT + Adv & 89.14 & **90.13** & 89.63 & 46.27 & **56.44** & **50.85** \\ M-BERT + LI + Adv & 89.95 & 88.78 & 89.36 & 45.44 & 50.30 & 47.74 \\ \hline XLM-RoBERTa & **91.23** & 92.53 & **91.87** & 52.92 & **64.55** & 58.16 \\ XLM-RoBERTa + LI & 91.12 & 92.02 & 91.02 & 52.11 & 61.19 & 56.28 \\ XLM-RoBERTa + Adv & 89.45 & **92.87** & 91.12 & 54.91 & 63.96 & 59.09 \\ XLM-RoBERTa + Adv + LI & 90.49 & 92.61 & 91.53 & **55.01** & 64.47 & **59.36** \\ \hline \end{tabular} \end{table} Table 2: The results of the multilingual models trained on the multilingual combined dataset and evaluated on the Romanian set. LI means lateral inhibition, and Adv means multilingual adversarial training. on monolingual data (i.e., around 2.7% on unseen MWEs), which was the main target of the competition, being behind RoBERT with only 1.2%. ## 5 Conclusions The detection and processing of MWEs play an important role in various areas of NLP. This paper made notable improvements in unseen Romanian MWE identification by employing a lateral inhibition layer and adversarial training to multilingual large language models like XLM-RoBERTa. This way, we were able to improve the results of XLM-RoBERTa. In addition, we achieved SOTA results on this task with a simple fine-tuning of RoBERT that involved a better hyperparameter search and text preprocessing pipeline, respectively. Future work considers an analysis of the language-independent embeddings produced in the multilingual training, together with more experiments on other languages, to validate the generalization of this approach. In addition, we intend to add these results in LiRo - the public benchmark for Romanian NLP models Dumitrescu et al. (2021).
2307.08351
Neural Modulation Fields for Conditional Cone Beam Neural Tomography
Conventional Computed Tomography (CT) methods require large numbers of noise-free projections for accurate density reconstructions, limiting their applicability to the more complex class of Cone Beam Geometry CT (CBCT) reconstruction. Recently, deep learning methods have been proposed to overcome these limitations, with methods based on neural fields (NF) showing strong performance, by approximating the reconstructed density through a continuous-in-space coordinate based neural network. Our focus is on improving such methods, however, unlike previous work, which requires training an NF from scratch for each new set of projections, we instead propose to leverage anatomical consistencies over different scans by training a single conditional NF on a dataset of projections. We propose a novel conditioning method where local modulations are modeled per patient as a field over the input domain through a Neural Modulation Field (NMF). The resulting Conditional Cone Beam Neural Tomography (CondCBNT) shows improved performance for both high and low numbers of available projections on noise-free and noisy data.
Samuele Papa, David M. Knigge, Riccardo Valperga, Nikita Moriakov, Miltos Kofinas, Jan-Jakob Sonke, Efstratios Gavves
2023-07-17T09:41:01Z
http://arxiv.org/abs/2307.08351v1
# Neural Modulation Fields for Conditional Cone Beam Neural Tomography ###### Abstract Conventional Computed Tomography (CT) methods require large numbers of noise-free projections for accurate density reconstructions, limiting their applicability to the more complex class of Cone Beam Geometry CT (CBCT) reconstruction. Recently, deep learning methods have been proposed to overcome these limitations, with methods based on neural fields (NF) showing strong performance, by approximating the reconstructed density through a continuous-in-space coordinate based neural network. Our focus is on improving such methods, however, unlike previous work, which requires training an NF from scratch for each new set of projections, we instead propose to leverage anatomical consistencies over different scans by training a single _conditional_ NF on a dataset of projections. We propose a novel conditioning method where _local_ modulations are modeled per patient as a field over the input domain through a Neural Modulation Field (NMF). The resulting Conditional Cone Beam Neural Tomography (CondCBNT) shows improved performance for both high and low numbers of available projections on noise-free and noisy data. Machine Learning, ICML, Deep Learning, ICML ## 1 Introduction In inverse problems, the goal is to infer a certain quantity of interest from indirect observations. They arise in many scientific fields, medical imaging (Louis, 1992), biology (Karwowski, 2009; Sridharan et al., 2022), and physics (Romanov, 2018; Collaboration, 2019). Unfortunately, many inverse problems are inherently _ill-posed_, i.e., there exist multiple solutions that agree with the measurements and these do not necessarily depend continuously on the data (Kabanikh, 2008). These issues warrant further study, and tools from machine learning and deep learning in particular have attracted a lot of attention recently. In this work, we focus on Computed Tomography (CT) (Oldendorf, 1978), a medical imaging technique for reconstructing material density1 inside a patient, using the mathematical and physical properties of X-ray scanners. In CT, several X-ray scans-or _projections_-of the patient are acquired from various angles using a _detector_. An important variant of CT is Cone Beam CT (CBCT), which uses flat panel detectors to scan a large fraction of the volume in a single rotation. Unfortunately, CBCT reconstruction is harder in comparison to classical (helical) CT. This is caused by the inherent mathematical difficulty of Radon Transform inversion in the three-dimensional setting (Tuy, 1983), physical limits of the detector, and characteristics of the measurement process such as noise. Traditional reconstruction methods include FDK (Feldkamp et al., 1984), and iterative reconstruction (Kaipio and Somersalo, 2005). FDK filters the projections and applies other simple corrections to properly account for the physical geometry of the acquisition system. Iterative methods use optimization to find the density that most closely resemble the measurements once projected using a forward operator. In addition, deep learning has seen increasing use in the field, with algorithms such as learned primal-dual (Adler and Oktem, 2018), invertible learned primal-dual (Rudzusika et al., 2021) and LIRE (Moriakov et al., 2022). Footnote 1: To be precise, we try to find the _attenuation coefficients_, but we may use density interchangeably, as they are strongly related under assumptions that hold in our setting. 
Recently, reconstruction methods that employ Neural Fields (NFs) have been proposed. _NFs are a class of neural architectures that parameterize a field \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{n}\), i.e. a quantity defined over spatial and/or temporal coordinates, using a neural network \(f_{\theta}\)_(see Xie et al. (2022) for a survey on NFs). In CT reconstruction, these architectures have been used to approximate the density directly over the volume space \(\mathbb{R}^{3}\)(Zang et al., 2021; Zha et al., 2022; Lin et al., 2023). Zha et al. (2022) proposed Neural Attenuation Fields (NAF), an approach to super measured attenuated photon counts at the detector. Despite showing promising results, this method requires training a NF from scratch for each volume, prohibiting transfer of learned features across volumes through weight sharing. Instead, Lin et al. (2023) propose encoding a set of projections into a latent space shared over all training volumes, and decoding this into a density modeled as a NF. However, encoding of all available projections is only feasible when a small number of them is used, as it would otherwise result in prohibitive compute and memory requirements. In this work, we instead aim to remove the need for an explicit decoder. We leverage the work of Park et al. (2019), who propose to learn latent codes for a dataset of 3D shapes using _auto-decoding_, where randomly initialized latent codes are optimized during training. Dupont et al. (2022) expand on by using these learned latent codes as modulations for a shared NF. Bauer et al. (2023) show that the use of a single global code per signal limits reconstruction quality, and instead use a spatially structured grid of codes. Their approach greatly increases reconstruction quality, but requires interpolating a grid of modulations, increasing computational requirements for signals over higher-dimensional domains. We introduce the **Neural Modulation Field** (NMF) which models a continuous field of modulations over the signal domain. We propose the **Conditional Cone Beam Neural Tomography** (CondCBNT) framework, which incorporates this _local conditioning function_ to speed up reconstruction, while still processing all available projections, relieving restrictions on projection counts used in the reconstruction process. In doing so, we show considerable improvements in scenarios with both sufficient or limited projections, as well as in the presence of both noisy and noise-free data. ## 2 Method Beer-Lambert's law relates the attenuation of electromagnetic radiation such as visible light or X-rays to the properties of the material it is traveling through (Sweinehart, 1962). Let \(\mathbf{r}:[T_{0},T_{1}]\longrightarrow\mathbb{R}^{3}\) be the straight path taken by radiation through the medium. The radiation intensity \(I(\mathbf{r}(T_{1}))\) at position \(\mathbf{r}(T_{1})\) is the line integral \[I(\mathbf{r}(T_{1}))=I_{0}\exp\Bigg{[}-\int_{T_{0}}^{T_{1}}\mu(\mathbf{r}(t)) \left|\mathbf{r}^{\prime}(t)\right|dt\Bigg{]}, \tag{1}\] where \(\mu:\mathbb{R}^{3}\longrightarrow\mathbb{R}^{+}\) is the attenuation coefficient of the medium and \(I_{0}=I(\mathbf{r}(T_{0}))\) is the initial intensity. 
The integral in (1) can be approximated by the sum \[I(\mathbf{r}(T_{1}))\approx I_{0}\exp\Bigg{[}-\sum_{c=1}^{N}\mu(\mathbf{r}(t_{ c}))\left|\mathbf{r}^{\prime}(t_{c})\right|\Delta t\Bigg{]}, \tag{2}\] where \(t_{c}\in[T_{0},T_{1}]\) and \(\left|\mathbf{r}^{\prime}(t_{c})\right|\Delta t=\Delta\mathbf{r}_{c}=\left| \mathbf{r}(t_{c+1})-\mathbf{r}(t_{c})\right|\). Given a set of 2D CBCT projections \(v_{\alpha}\in\mathbb{R}^{H\times W}\) with \(H,W\) the height and width of the sensor and \(\alpha\) the angle under which the projection was taken, we are trying to estimate density values along rays cast from source to sensor. Each ray is the straight path \(\mathbf{r}\) which connects the source to pixels in the detector. For simplicity, we bound the patient volume with a box and assume zero attenuation outside the box. Therefore, for every path, we compute the sum in (2) with only those \(\mathbf{r}(t_{c})\) that are contained in the bounding box. By taking the logarithm we can avoid the computationally tedious exponential and use \(\log I(\mathbf{r}(T_{1}))\approx-\sum_{c=1}^{N}\mu(\mathbf{r}(t_{c}))\Delta \mathbf{r}_{c}+\log I_{0}\) and discard the constant that depends on the initial intensity, which we assume is the same for all projections. We use a neural field Figure 1: We propose _Conditional Cone Beam Neural Tomography_ (CondCBNT), a framework for reconstructing Cone Beam Computed Tomography volumes using neural fields. An integral is taken over values sampled from a neural field \(f_{\theta}\) at coordinates \(\mathbf{r}(t)\) along a ray cast from source to sensor. The coordinates are encoded into a multiresolution hash-encoding \(h(\mathbf{r}(t))\)(Müller et al., 2022), and passed through \(L\) linear layers. To leverage consistencies over anatomies of different patients, we propose to model the density for a specific patient \(p_{i}\) using a shared neural field \(f_{\theta}\), whose activations \(\boldsymbol{a}^{l}\) are modulated by a patient-specific _Neural Modulation Field_ (NMF) \(\varphi_{i}\). This conditioning function learns a field of \(\boldsymbol{\gamma},\boldsymbol{\beta}\) FiLM modulations (Dumoulin et al., 2018) over the input space \(\mathbb{R}^{3}\) for a patient \(p_{i}\). The integral \(-\sum_{c=1}^{N}f_{\theta}(\mathbf{r}(t_{c}))\Delta\mathbf{r}_{c}\) is supervised at the sensor using the corresponding observed projection value. \(f_{\theta}:\mathbb{R}^{3}\longrightarrow\mathbb{R}^{+}\) to approximate the density \(\mu\) such that the intensity \(I(\mathbf{r}(T_{1}))\) coincides with the intensity recorded by the detector at the position \(\mathbf{r}(T_{1})\): \[\log I(\mathbf{r}(T_{1}))\approx-\sum_{c=1}^{N}f_{\theta}(\mathbf{r}(t_{c})) \Delta\mathbf{r}_{c}. \tag{3}\] Coordinate embedding.Tancik et al. (2020) showed that ReLU MLPs suffer from spectral bias, limiting their capacity to model high frequency functions on low-dimensional domains. As a solution, they note that it is possible to embed coordinates \(\mathbf{r}(t_{c})\in\mathbb{R}^{3}\) into a higher-dimensional space \(\mathbb{R}^{e}\) with \(e\gg 3\) before passing them through the MLP. We choose to follow Muller et al. (2022) and use the _multiresolution hash-encoding_, denoted \(h(\mathbf{r}(t_{i}))\), as it empirically shows fastest convergence in our experiments. See Appx. A for a full description of this embedding. 
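As an illustration of the forward model in Eqs. (2)–(3), the following minimal NumPy sketch samples points along a single source-to-pixel ray, evaluates a density at those points, and accumulates the discrete line integral. The toy density, the bounding box and all numerical choices are placeholders standing in for the hash-encoded neural field \(f_{\theta}\), not the actual implementation.

```python
import numpy as np

def ray_points(source, pixel, n_samples=128, box_min=-0.2, box_max=0.2):
    """Sample points r(t_c) on the straight source-to-pixel path, keeping only those
    inside the bounding box (zero attenuation is assumed outside of it)."""
    t = np.linspace(0.0, 1.0, n_samples)
    pts = source[None, :] + t[:, None] * (pixel - source)[None, :]   # (n_samples, 3)
    inside = np.all((pts >= box_min) & (pts <= box_max), axis=1)
    seg = np.linalg.norm(pixel - source) / (n_samples - 1)           # |r'(t_c)| Δt = Δr_c
    return pts[inside], seg

def predicted_log_intensity(density_fn, source, pixel):
    """Discrete line integral of Eq. (3): log I ≈ -Σ_c μ(r(t_c)) Δr_c (constant log I0 dropped)."""
    pts, seg = ray_points(source, pixel)
    return -np.sum(density_fn(pts)) * seg

# Toy density standing in for f_θ(h(r(t))): a uniform ball of attenuation 1 and radius 0.1.
density = lambda p: (np.linalg.norm(p, axis=1) < 0.1).astype(float)

src = np.array([-1.0, 0.0, 0.0])   # X-ray source position
pix = np.array([1.0, 0.0, 0.0])    # one detector pixel position
print(predicted_log_intensity(density, src, pix))  # ≈ -0.2, i.e. minus the chord length through the ball
```

In training, this predicted value would be compared against the logarithm of the observed projection value at the corresponding detector pixel.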
Conditioning with Neural Modulation Fields.Conditioning in neural fields consists of modulating the weights \(\theta\) or activations \(\mathbf{a}\) of a NF \(f_{\theta}\) with a conditioning variable \(\mathbf{z}\) to vary the NF's output (Xie et al., 2022), a method often used to encode different samples \(x_{i}\) from a single dataset \(X\) through a set of latents \(\{\mathbf{z}_{i}|x_{i}\in X\}\). Intuitively, in the setting of CT reconstruction, we could fairly assume the densities for patients \(p_{i}\in P\) share a lot of anatomical structure. A conditional NF that is tasked with reconstructing a dataset of multiple volumes would be able to leverage this consistency in anatomical information in its reconstruction (e.g. inferring from noisy or missing data), with patient-specific characteristics being refined with the conditioning variable \(\mathbf{z}_{i}\). To this end, we could in principle use the aforementioned auto-decoding approach with a _global_ conditioning latent \(\mathbf{z}_{i}\). However, global conditioning has been shown to result in reconstructions with limited detail (Dupont et al., 2022; Bauer et al., 2023). This limitation is significant because patient-specific fine-grained details in scans contain information crucial for medical purposes. We instead opt for _local_ conditioning, where the conditioning variable \(\mathbf{z}_{i}\) depends on the input coordinate \(\mathbf{r}(t)\). In previous works, this is done through interpolation of a trainable discrete data structure, e.g. a grid of latent codes (Shaham et al., 2021; Yu et al., 2021; Bauer et al., 2023). Instead, to further increase expressivity of the resulting modulation and forego modelling choices such as code grid resolution and interpolation method, we propose to abstract the learning of modulations away from a discrete data structure and model the modulations themselves as a continuous field through a patient-specific _Neural Modulation Field_ (NMF) we denote \(\varphi_{i}\). During training, parameters \(\theta_{i}\) of the patient-specific NMFs \(\varphi_{\theta_{i}}\) are optimized alongside the weights of the shared NF \(f_{\theta}\), during inference - for a novel set of projections - only the parameters for \(\theta_{i}\) are optimized. For the activation modulation, we use feature-wise linear modulations (FiLM) (Dumoulin et al., 2018), such that activations \(\mathbf{a}^{l}\) at a layer \(l\) with weights \(\mathbf{W}^{l}\) and bias \(\mathbf{b}^{l}\) are transformed with patient-specific _local_ scaling and shifting modulations \(\mathbf{\gamma}_{i},\mathbf{\beta}_{i}\), as follows: \[\mathbf{a}^{l}_{i}=\mathrm{ReLU}((\mathbf{W}^{l}\mathbf{a}^{l-1}_{i}+\mathbf{b}^{l})\odot\mathbf{ \gamma}_{i}+\mathbf{\beta}_{i}), \tag{4}\] where \(\mathbf{\gamma}_{i},\mathbf{\beta}_{i}\) are obtained from the NMF \(\varphi_{\theta_{i}}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{\dim(\mathbf{\gamma})+ \dim(\mathbf{\beta})}\). For specific architectural choices of the NMF and shared NF, see Appx. C. We term the resulting model _Conditional Cone Beam Neural Tomography_ (Cond-CBNT). See Fig. 1 for an overview of the framework. Dataset.The dataset used is derived from the LIDC-IDRI (Armato III et al., 2015). This is a collection of diagnostic lung cancer screening thoracic CT scans. A random selection of \(250\) cases was chosen and the CT scan resampled to \(2\)mm resolution. Then, each volume is projected using \(256\times 256\) pixel, \(2\)mm resolution detectors. 
Angles equally spaced between \(0^{\circ}\) and \(205^{\circ}\) are used. \(400\) projections are created, first without any noise, then with Poisson noise, used to simulate measurement noise with \(5\times 10^{5}\) photons. A subset of \(50\) equally-spaced projections is obtained from both. The \(250\) volumes are split into \(200/25/25\) for training, validation, and testing. The resulting dataset will be made publicly available upon acceptance. Metrics. For quantitative evaluation we rely on the _Peak Signal to Noise Ratio_ (**PSNR**), a classical measure of signal quality, and the _Structural Similarity Index Measure_ (**SSIM**), which captures the perceptive similarity between two images by analyzing small local chunks (Wang et al., 2004). Historically, both metrics have been defined for images, but we compute them over full volumes. Finally, we track the GPU memory used and the time required to reconstruct a volume. Baselines. FDK reconstruction (Feldkamp et al., 1984) was performed using the Operator Discretization Library (Adler et al., 2017). As an iterative reconstruction baseline, we implemented Landweber iteration with Total Variation regularization (Kaipio and Somersalo, 2005), where parameters such as step size, iteration count and the amount of regularization were chosen via grid search on the validation set. As a deep learning reconstruction baseline, we use the LIRE-32(L) architecture from Moriakov et al. (2022), which is a dedicated lightweight, memory-efficient variant of the learned primal-dual method from Adler and Oktem (2018) for CBCT reconstruction. From the NF class of models, we compare with Zha et al. (2022); we do not compare with Lin et al. (2023) due to their prohibitive computational costs. ## 3 Experiments Hyperparameter search for NAF, CondCBNT, and the Iterative method was carried out on the validation set. With noisy projections, early stopping was used to avoid overfitting the noise. Instead, with noise-free projections, we decided to stop after about 10 minutes of training. Although more time would have improved performance further, it would not have provided any additional insights. It is worth noting that individual volume optimization was not conducted, so as to reflect the constraints of a realistic scenario. During training, we followed Lin et al. (2023) and directly supervised the neural field with density values, as we observed this greatly improved stability. During inference on validation and test sets, we kept the shared NF fixed and only optimized the randomly initialized NMF weights for each unseen scan (see Appx. C). We first evaluated the model on the test set using 50 and 400 noise-free projections respectively, results shown in Tab. 1 right. CondCBNT greatly improves reconstruction quality both in terms of PSNR and SSIM, compared to classical methods and NAF. Next, we validated the model on 50 and 400 noisy projections, results for which are shown in Tab. 1 left. Again, we see considerable improvements in our method over all baseline approaches. LIRE-L is the exception, achieving a performance slightly better than CondCBNT with significantly faster reconstruction speed at the cost of an increased memory footprint. Qualitative assessment in the noisy case is possible from Fig. 3, where it is evident that NAF overfits the noise. The iterative method over-smooths the reconstruction and exhibits blocky artifacts. The FDK reconstruction suffers from artifacts caused by the detector size, noise, and the low number of projections. Table 1: Mean \(\pm\) standard deviation of PSNR, SSIM, reconstruction time and GPU memory over the test set, for 50 and 400 projections on noisy and noise-free data, for FDK (Feldkamp et al., 1984), Iterative (Kaipio & Somersalo, 2005), LIRE-L (Moriakov et al., 2022), NAF (Zha et al., 2022), and CondCBNT (ours). LIRE-L slightly outperforms CondCBNT but requires more GPU memory. Our method excels with less memory and comparable runtime. Figure 3: Ground truth and reconstructions using all the methods applied to noisy projections. Top 50, bottom 400 projections. Grayscale with density in \([0-0.04]\). Our method does not overfit the noise and maintains tissue contrast. High-res in Appx. D. Figure 2: Using noisy projections, the percentage of the best PSNR (\(\uparrow\)) that a model can reach over the number of steps required to achieve it. CondCBNT converges significantly faster. 
LIRE-L and CondCBNT both reconstruct the volume with better soft-tissue contrast and without overfitting the noise. Comparing convergence speed from Tab. 1 is hard because of diverging implementation choices and final performance reached. Therefore, we normalized performance by maximum PSNR reached after optimization. Additionally, given that dataset and batch size were the same, we decided to compare using the number of iterations instead of wall-clock time (Fig 2). This shows how CondCBNT quickly reaches a satisfying performance with both noisy and noise-free projections. Especially interesting is that, in the 400 projection case, CondCBNT was optimized for only half of a full epoch and still managed to outperform NAF and be within \(1\) standard deviation of LIRE-L. Since our method does not require training the whole model from scratch for a newly obtained set of projections, the model converges considerably faster. ## 4 Conclusion We improve noise resistance of neural field (NF)-based CBCT reconstruction methods by sharing a conditional \begin{table} \begin{tabular}{l l c c c c c c c} \hline \hline \multirow{2}{*}{P.} & \multicolumn{4}{c}{Noisy} & \multicolumn{4}{c}{Noise-free} \\ \cline{2-9} & Method & PSNR (\(\uparrow\)) & SSIM (\(\uparrow\)) & Time & PSNR (\(\uparrow\)) & SSIM (\(\downarrow\)) & Time & Mem. (\(\downarrow\)) \\ \hline 50 & FLK & 14.54 \(\pm\) 2.90 & 20.07 \(\pm\) 0.8 & 16.06 \(\pm\) 3.22 & 2.43 \(\pm\) 0.9 & 0.8 & 100 \\ & Inferite & 25.36 \(\pm\) 2.11 & 2.08 \(\pm\) 0.8 & 7.27 & 21.38 \(\pm\) 0.7 & 1.48 & 0.8 & 300 \\ & LIRE-L & 29.84 \(\pm\) 2.07 & 3.83 \(\pm\) 0.8 & 3.9 & & & & 2.18 \\ & NAF & 22.83 \(\pm\) 2.54 & 8.30 \(\pm\) 16 & 141.26 \(\pm\) 2.52 & 2.72 \(\pm\) 0.08 & 582 & 18 \\ & **ConfBCNT** & 28.31 \(\pm\) 1.22 & 8.05 \(\pm\) 0.5 & 124 & 30.21 \(\pm\) 1.42 & 3.86 \(\pm\) 0.65 & 647 & 96 \\ \hline 400 & FLK & 16.43 \(\pm\) 3.38 & 4.55 \(\pm\) 2.2 & 7.161 \(\pm\) 3.47 & _5.85 \(\pm\) 0.9_ & 7 & 100 \\ & Iterative & 23.88 \(\pm\) 3.27 & 7.28 \(\pm\) 1.31 & _7.41_ & 31.40 \(\pm\) 6.22 & 6.17 & 487 & 174 & 600 \\ & LIRE-L & 30.00 \(\pm\) 2.52 & 8.05 \(\pm\) 0.5 & 12.8 & & & & - & 44 \\ & NAF & 25.93 \(\pm\) 2.45 & 7.55 \(\pm\) 0.08 & 275 & 25.04 \(\pm\) 2.91 & _37.7_ \(\pm\) 0.08 & 580 & 205 \\ & **ConfBCNT** & 29.89 \(\pm\) 1.39 & 8.05 \(\pm\) 0.65 & 763 & 30.63 \(\pm\) 1.43 & 3.84 \(\pm\) 0.04 & 595 & 96 \\ \hline \hline \end{tabular} \end{table} Table 1: Mean \(\pm\) standard deviation of metrics over test set for FDK (Feldkamp et al., 1984), Iterative (Kaipio & Somersalo, 2005), LIRE-L (Moriakov et al., 2022), NAF (Zha et al., 2022), and CondCBNT (ours). LIRE-L slightly outperforms CondCBNT but requires more GPU memory. Our method excels with less memory and comparable runtime. Figure 3: Ground truth and reconstructions using all the methods applied to noisy projections. Top 50, bottom 400 projections. Grayscale with density in \([0-0.04]\). Our method does not overfit the noise and maintains tissue contrast. High-res in Appx. D. Figure 2: Using noisy projections, the percentage of the best PSNR (\(\uparrow\)) that a model can reach over the number of steps required to achieve it. CondCBNT converges significantly faster. NF over scans taken from different patients. We propose learning a continuous, local conditioning function expressed through a sample-specific _Neural Modulation Field_ which modulates activations in the conditional NF to express volume-specific details. 
_Conditional Cone-Beam Neural Tomography_ (CondCBNT) represents an efficient improvement over previous approaches, in terms of GPU memory scalability and reconstruction quality on both noise-free and noisy data and with varying numbers of available projections.
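To make the conditioning mechanism of Eq. (4) concrete, the following PyTorch sketch shows a shared field whose per-layer activations are FiLM-modulated by a patient-specific Neural Modulation Field. All widths, depths and the plain linear coordinate lift (standing in for the multiresolution hash-encoding) are illustrative assumptions, not the architecture actually used (see Appx. C of the paper).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NMF(nn.Module):
    """Patient-specific Neural Modulation Field φ_i: coordinates in R^3 -> per-layer
    FiLM modulations (γ, β) of Eq. (4). Sizes are illustrative."""
    def __init__(self, width=64, n_layers=4, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * n_layers * width))
        self.n_layers, self.width = n_layers, width

    def forward(self, x):                               # x: (N, 3)
        gb = self.net(x).view(-1, self.n_layers, 2, self.width)
        return gb[:, :, 0], gb[:, :, 1]                 # γ, β: each (N, n_layers, width)

class ConditionalField(nn.Module):
    """Shared NF f_θ; a plain linear lift stands in for the hash-encoding h(r(t))."""
    def __init__(self, width=64, n_layers=4):
        super().__init__()
        self.encode = nn.Linear(3, width)
        self.layers = nn.ModuleList([nn.Linear(width, width) for _ in range(n_layers)])
        self.head = nn.Linear(width, 1)

    def forward(self, x, nmf):
        gamma, beta = nmf(x)                            # local modulations from the NMF
        a = self.encode(x)
        for l, layer in enumerate(self.layers):
            a = torch.relu(layer(a) * gamma[:, l] + beta[:, l])   # Eq. (4)
        return F.softplus(self.head(a))                 # non-negative density

coords = torch.rand(8, 3)                    # points r(t_c) sampled along rays
density = ConditionalField()(coords, NMF())  # (8, 1) densities for one patient's NMF
```

In this setup one such NMF would be optimized per patient alongside the shared weights during training, while at inference only the NMF of the new scan is fitted.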
2303.15191
Strain effects on magnetic compensation and spin reorientation transition of Co/Gd synthetic ferrimagnets
Synthetic ferrimagnets are an attractive materials class for spintronics as they provide access to all-optical switching of magnetization and, at the same time, allow for ultrafast domain wall motion at angular momentum compensation. In this work, we systematically study the effects of strain on the perpendicular magnetic anisotropy and magnetization compensation of Co/Gd and Co/Gd/Co/Gd synthetic ferrimagnets. Firstly, the spin reorientation transition of a bilayer system is investigated in wedge type samples, where we report an increase in the perpendicular magnetic anisotropy in the presence of in-plane strain. Using a model for magnetostatics and spin reorientation transition in this type of system, we confirm that the observed changes in anisotropy field are mainly due to the Co magnetoelastic anisotropy. Secondly, the magnetization compensation of a quadlayer is studied. We find that magnetization compensation of this synthetic ferrimagnetic system is not altered by external strain. This confirms the resilience of this material system against strain that may be induced during the integration process, making Co/Gd ferrimagnets suitable candidates for spintronics applications.
Giovanni Masciocchi, Thomas J. Kools, Pingzhi Li, Adrien A. D. Petrillo, Bert Koopmans, Reinoud Lavrijsen, Andreas Kehlberger, Mathias Kläui
2023-03-27T13:26:34Z
http://arxiv.org/abs/2303.15191v4
Strain effects on magnetic compensation and spin reorientation transition of Co/Gd synthetic ferrimagnets ###### Abstract Synthetic ferrimagnets are an attractive materials class for spintronics as they provide access to all-optical switching of magnetization and, at the same time, allow for ultrafast domain wall motion at angular momentum compensation. In this work, we systematically study the effects of strain on the perpendicular magnetic anisotropy and magnetization compensation of Co/Gd and Co/Gd/Co/Gd synthetic ferrimagnets. Firstly, the spin reorientation transition of a bilayer system is investigated in wedge type samples, where we report an increase in the perpendicular magnetic anisotropy in the presence of in-plane strain. Using a model for magnetostatics and spin reorientation transition in this type of system, we confirm that the observed changes in anisotropy field are mainly due to the Co magnetoelastic anisotropy. Secondly, the magnetization compensation of a quadlayer is studied. We find that magnetization compensation of this synthetic ferrimagnetic system is not altered by external strain. This confirms the resilience of this material system against strain that may be induced during the integration process, making Co/Gd ferrimagnets suitable candidates for spintronics applications. ## I Introduction Recent advances in spintronics have opened new possibilities for electronic applications beyond the CMOS standard. New concepts of high density and ultrafast non-volatile data storage have been proposed in magnetic memories [1; 2]. Throughout the years, magnetic memories have evolved [3; 4] exploiting different geometries [5] and new material platforms such as ferrimagnets [6] have been used to improve storage density [7], reading and writing speed [8] and energy efficiency [9; 10]. At the same time, single-pulse optical-switching (AOS) of magnetization has reduced the switching speed of the magnetization below ps timescale [11; 12; 13; 14]. This bears promise for a new generation of ultrafast data buffering, in a single chip that integrates photonics with spintronics [15; 16; 17; 18]. Ferimagnets are a class of magnets with unbalanced antiparallel-aligned sublattice moments. The compensation of the two inequivalent sublattices, combines the advantages of both antiferromagnets (antiparallel alignment of magnetic moments) and ferromagnets (finite Zeeman coupling and spin polarization) [16; 20]. Moreover, the drastic contrast between the two sublattices in non-adiabatic dynamics, could potentially accommodate AOS by a femtosecond laser pulse [12; 16]. Single-pulse AOS is typically observed in rare earth-transition metal (RE-TM) ferrimagnetic alloys like GdFeCo [20] or in multilayer synthetic ferrimagnet, such as Co/Gd and [Co/Tb]\({}_{n}\)[21; 22]. In particular, the one based on multilayer of Co/Gd is a good candidate for integrated opto-spintronics devices as it shows AOS - without the constrains on the composition as imposed by alloy system [23; 24] - and at the same time exhibits magnetic and angular momentum compensation, allowing ultrafast domain wall motion [25; 26]. For instance, the integration of Co/Gd synthetic ferrimagnets in an optically switchable magnetic tunnel junction has been recently reported [27]. When it comes to technological implementation, strain induced effects must be considered, which could be incurred from processing steps such as packaging and layer deposition [28]. 
Intrinsic stresses and strain could affect the magnetic anisotropy via changes to the spin-orbit coupling (SOC) [29] or to the magnetization compensation of ferrimagnets especially in RE-TM alloys [30; 31]. However, in spite of being omnipresent in applications [32; 33; 34], the effect of strain has not yet been explored in these materials. In this work, we present a systematic study of the effects of strain on Co/Gd synthetic ferrimagnets. By the application of external strain, using substrate bending, we investigate the impact of strain on the perpendicular magnetic anisotropy (PMA) and the magnetization compensation of [Co/Gd] and [Co/Gd]\({}_{2}\) multilayers, respectively. Using wedge samples in a bilayer system of Co/Gd and polar magneto-optic Kerr effect (pMOKE) measurements, we confirm that the PMA is increased by in-plane tensile strain and a negative magnetostriction is reported. By including the contribution of the strain-anisotropy for this system in a model for the magnetostatics, we show that the effects of strain on the magnetization are mainly due to the modification of the spin-orbit coupling within the magnetic layer and at the the Pt/Co interface that increases the magnetic anisotropy via magnetoelastic coupling. Additionally, we find that the magnetization compensation point is not affected significantly by strain, as the magnetoelastic coupling affects the anisotropy rather than the magnetization of the two sublattices. Our study explores the mechanisms that underlie the influence of strain on the magnetic anisotropy of Co/Gd ferrimagnets and contributes to a better understanding of the magnetoelastic effects of ferrimagnetic multilayers. These results could be employed for the optimization and development of spintronics devices, as well as for potential applications in fields such as magnetic memory and sensing. ## II Methods and sample fabrication The samples were grown on a 1.5 \(\mu\)m thick, thermally oxidized SiOx on top of a 625 \(\mu\)m thick Si substrate by DC magnetron sputtering in a chamber with a typical base pressure of \(5\times 10^{-9}\) mBar. To obtain a variable thickness (wedge) along the sample surface, a shutter in the close proximity of the sample is gradually closed during deposition. This allows to study the compensation and spin reorientation transition (SRT) within a single sample. Two types of samples are realized. Firstly, a bilayer of Ta(4 nm)/Pt(4)/ Co(0-2)/Gd(t\({}_{Gd}\))/TaN(4) with a constant Gd layer on top of a Co wedge is considered to study the SRT. In addition, a quadlayer of Ta(4)/Pt(4)/Co(0.6)/Gd(0-2)/Co(0.6)/Gd(1.5)/TaN(4), this time with a Gd wedge, is grown to study the magnetization compensation. The magnetic properties of these wedge samples were investigated by pMOKE, where we are only sensitive to the out-of-plane (OOP) component of the Co magnetization at a wavelength of 658 nm. According to Fig. 1 (a), the surface of the sample is scanned along the y-direction using a focused laser spot with a spot-size of \(\simeq\)250 \(\mu\)m diameter. Accordingly, the local magnetic properties and hysteresis loops can be measured as a function of layer thickness, with a negligible thickness gradient \(<\) 0.025 nm within the used laser spot. All the measurements are performed at room temperature. To apply in-plane tensile strain to our multilayer, the substrate is mechanically bent using a three-point method [35]. 
A square sample of 1 by 1 cm is vertically constrained on two sides and pushed uniformly from below by a cylinder that has an off-centered rotation axis. The device generates a tensile strain in the plane of the sample when the cylinder is rotated. As previously reported, the tensile strain is uniaxial along \(x\) and uniform in the measured area of the sample. The in-plane strain magnitude is 0.1% and has been measured with a strain gauge (RS PRO). More details about the strain generating device can be found in section S2 of the supplementary information. ## III Results and discussion ### Spin reorientation transition in Co/Gd bilayer The use of magnetic materials for high density data storage requires magnetic systems that are OOP magnetized [36; 37]. In thin films, an OOP magnetic easy axis can be obtained by magnetocrystalline anisotropy induced at the interface with a heavy metal [38; 39]. In addition to that, strain has been shown to affect the magnetic easy axis direction in systems with PMA [40]. To understand the effect of external strain on Co/Gd systems with PMA, we investigate bilayer samples consisting of Ta(4 nm)/Pt(4)/ Co(0-2)/Gd(t\({}_{Gd}\))/TaN(4). Specifically, the Co thickness is varied between 0 and 2 nm over a few mm along the \(y\) direction, whereas \(t_{Gd}\) is constant (as in Fig. 1 (a)). In this system, the balance between the interfacial anisotropy energy (magnetocrystalline anisotropy energy at the Pt/Co interface) and the demagnetization energy determines the effective magnetic anisotropy. In such a system, the demagnetization energy increases with the thickness of the Co magnetic layer, and consequently, the magnetization will go from out-of-plane (OOP) to in-plane (IP). To probe the magnetization of our wedge sample, we record hysteresis loops from the pMOKE signal. We repeat the measurement moving the laser spot along the wedge in the y direction. Firstly, a sample where t\({}_{Gd}\)=0 is considered. This measurement can be seen in Figs. 1 (b) and (c). Fig. 1 (b) reports the magnetic response of the Ta(4 nm)/Pt(4)/Co(0-2)/TaN(4) sample to an OOP magnetic field for different t\({}_{Co}\). The effective anisotropy \(K_{eff}\) was estimated [38] by recording hysteresis loops with the magnetic field applied OOP and IP; the corresponding anisotropy energy per unit area is \(K_{s}=1.7\) mJ/m\({}^{2}\). For \(t_{Co}=1.35\) nm the square-shaped loop indicates PMA with \(K_{eff}\)= 1.5(2)\(\times 10^{5}\) J/m\({}^{3}\). A value of \(M_{Co}=1.3\) MA/m was used in the calculation. As the thickness of Co is increased (moving the laser spot along the wedge direction y), the remanence and squareness of the hysteresis loop decrease together with the PMA of the system. For \(t_{Co}=2.00\) nm, the sample is IP magnetized and \(K_{eff}=\) -0.8(2)\(\times 10^{5}\) J/m\({}^{3}\) is negative. The OOP to IP transition occurs at \(t_{Co}=1.85(2)\) nm in this system. To investigate the effects of externally applied in-plane strain, we repeat the measurement while the sample is mechanically bent. The magnetization is coupled to the external strain and can be described by the expression for the anisotropy energy [35]: \[K_{ME}=-\frac{3}{2}\lambda_{s}Y\epsilon, \tag{1}\] where \(\lambda_{s}\) is the saturation magnetostriction, \(Y\) is the Young's modulus and \(\epsilon\) is the strain. If the strain in the film is non-zero, the magneto-elastic coupling of Co contributes in principle to the effective anisotropy. 
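As a rough numerical illustration of Eq. (1), the snippet below evaluates the magnetoelastic anisotropy density and converts it to an areal value; the Young's modulus and the thickness used here are assumed, illustrative inputs and are not meant to reproduce the areal value quoted below.

```python
# Magnetoelastic anisotropy, Eq. (1): K_ME = -(3/2) * lambda_s * Y * epsilon  (volume density).
lambda_s = -10e-6     # saturation magnetostriction, negative as found for this stack
Y        = 200e9      # Young's modulus in Pa -- an assumed, illustrative value
epsilon  = 1e-3       # 0.1% in-plane tensile strain, as applied in the experiment
t_Co     = 1.85e-9    # Co thickness in m (assumed, near the unstrained SRT)

K_ME = -1.5 * lambda_s * Y * epsilon          # = 3.0e3 J/m^3 with these inputs
print(K_ME, K_ME * t_Co)                      # volume density (J/m^3) and areal density (J/m^2)
# K_ME > 0 for negative magnetostriction under tensile strain, i.e. it adds to the PMA.
```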
Accordingly, the total anisotropy \(K_{eff}\) of the magnetic stack is expected to change in the presence of external strain. Fig. 1 (c) shows the OOP hysteresis loops of Ta(4 nm)/Pt(4)/Co(1.85)/TaN(4) sample before (blue) and after (red) the application of \(\epsilon_{xx}=0.1\%\). We observe that the anisotropy field is decreased after the application of in-plane strain. This happens because, in this system, the strain-induced magnetoelastic anisotropy \(K_{ME}=0.02\)\(mJ/m^{2}\) is positive, as we expect from a material with negative magnetostriction like Co [40; 41]. More details about the calculations of magnetoelastic anisotropy can be found in section S2 of the supplementary information. Accordingly, the PMA is increased by the applied strain, i.e. the system is expected to be OOP magnetized for thicker Co if compared to samples without strain. After this preliminary study on Pt/Co systems, we focused our attention on the magnetostriction of Co/Gd multilayers. In Co-Gd alloys the magnetostriction has been reported to be strongly dependent on the composition [42; 29] due to the structural modification occurring with different atomic content. In contrast to this case, the effects of magnetostriction of a mul tilayer, are expected to be dependent on the magnetoelastic coupling of the individual layers [43]. To study the magnetostriction of a Co/Gd multilayer, a constant layer of Gd on top of the Co wedge is added. To perform thickness dependent studies, a thickness \(t_{Gd}=1\) nm and 3 nm is considered. In the bilayer system, the magnetization in the Gd layers is mainly induced at the interface with the Co layer, and couples anti-parallel the Co magnetization [21]. Accordingly, \(t_{Co}\) required to reach SRT is expected to change with increasing \(t_{Gd}\)[44]. To compare the SRT of Ta(4 nm)/Pt(4)/Co(0-2)/Gd(\(t_{Gd}\))/TaN(4) samples with different \(t_{Gd}\) we performed remanent intensity scan along our Co wedge, in addition to hysteresis loop measurements. After the sample is saturated with an OOP magnetic field of 1T, we determine the thickness-dependent remanence from the pMOKE signal without external magnetic field. The remanent intensity scans are reported in Fig. 1 (d). As the pMOKE signal is mainly sensitive to the OOP component of Co magnetization, the normalized remanent intensity will drop to zero at the SRT, when the magnetization rotates IP. The SRT can be observed in Fig. 1 (d) in samples with different thicknesses of Gd before and after the application of strain. As previously reported [44] the critical thickness \(t_{Co}=t_{c}\) at which SRT occurs, changes significantly in the presence of a Gd layer. For all the considered samples, the in-plane strain shifts the OOP to IP transition towards larger Co thickness. This suggests that the effective magnetostriction of the Co/Gd bilayer is negative and its value \(\lambda_{s}=-10(5)\times 10^{-6}\) is not significantly altered by the presence of the Gd layer. To obtain a quantitative understanding of the shape of the spin reorientation boundary, we employ an analytical model [44] describing the magnetostatic free energy of the anisotropy, which is zero at the SRT boundary. 
The first constituent energies of the model are the demagnetization energies of the Co layer \[E_{d,Co}=\frac{1}{2}\mu_{0}\int_{0}^{x}M_{Co}^{2}dq=\frac{1}{2}\mu_{0}M_{Co}^{ 2}y \tag{2}\] and of the Gd layer \[\begin{split} E_{d,Gd}&=\frac{1}{2}\mu_{0}\int_{0}^ {x}M_{Gd}^{2}exp(-2q/\lambda_{Gd})\,dq=\\ &\frac{1}{4}\mu_{0}M_{Gd}^{2}\lambda_{Gd}\left(1-\exp\left(\frac {-2x}{\lambda_{Gd}}\right)\right)\end{split} \tag{3}\] where \(\lambda_{Gd}\) is the characteristic decay length of the Gd magnetization, which is induced at the Co/Gd interface, \(M_{Co}\) is the magnetization of the Co layer, \(M_{Gd}\) is the effective Gd magnetization at the interface between Co and Gd and \(x\) and \(y\) are, respectively, the Gd and Co thicknesses in the diagram of Fig.2 (a). The plot axes in Fig.2 (a) have been inverted for a better comparison with the other figures. The magnetocrystalline anisotropy is included with the term Figure 1: (a) Sample sketch, red arrow indicates the direction of the applied strain. (b) Out of plane hysteresis loops of a Pt/Co/TaN stack for different Co thicknesses. (c) OOP hysteresis loops of Pt/Co(1.85 nm)/TaN before (blue) and after (red) application of 0.1% in-plane strain. (d) MOKE intensity scan at remanence (no applied field) of Pt/Co/Gd/TaN films along the Co wedge. \[E_{K}=K_{s}-\Delta K\left(1-\exp\left(\frac{-2x}{\lambda_{K}}\right)\right), \tag{4}\] and it is also considered to decay with a characteristic decay length \(\lambda_{K}\) and magnitude \(\Delta K\). The second term in Eq. 4 phenomenologically addressed the experimentally observed decay in the effective anisotropy, which may be caused by sputter induced disordering of the Co [45]. Using a numerical fit to the experimentally determined SRT, the parameters \(\lambda_{K}\), \(\lambda_{Gd}\) and \(\Delta K\) for our Co/Gd bilayer are determined. All the other parameters were either experimentally measured or taken from literature and are reported in Table S.1, section S1 of the supplementary information. In addition to the anisotropy term, and additional energy term \(E_{mix}\) is included in the model. \(E_{mix}\) takes into account the mixing at the magnetic layer interfaces where the local net magnetization is zero. More details about the expression for this term and the determination of the fitting parameters can be found in the supplementary information and in the work of Kools et al. [44]. In this model, the expression of the total free energy density per unit area is, considering all the terms mentioned so far: \[E_{tot}=-E_{K}-E_{mix}+E_{d,Co}+E_{d,Gd}. \tag{5}\] The magnetocrystalline anisotropy energy per unit area \(K_{s}\), due to the Pt/Co interface is assumed constant. Eq. 5, describing the total energy of a Ta(4nm)/Pt(4)/Co(\(t_{Co}\))/Gd(\(t_{Gd}\))/TaN(4) sample, can be solved for y (t\({}_{Co}\)) by imposing \(E_{tot}=0\) (spin reorientation transition). The solution for the SRT obtained with the model described above is reported in Fig. 2 (a) with a blue solid line in a phase diagram where \(t_{Gd}\) (x) and \(t_{Co}\) (y) are continuously varied from 0 to 3 nm and from 0 to 2 nm, respectively. Together with the calculations, the SRT measured experimentally without externally applied strain is reported with blue diamonds in Fig. 2 (a). The experimental data, follow well the general trend of the calculations. Discrepancies between model and experimental values for \(t_{Gd}=0\), might be due to additional mixing between the layers. 
To include the effects of strain, a magnetoelastic anisotropy \(K_{ME}\) is added to Eq. 5 that becomes \[E_{tot}=-E_{K}-E_{mix}-K_{ME}+E_{d,Co}+E_{d,Gd}. \tag{6}\] In our case \(K_{ME}=0.02\) mJ/m\({}^{2}\) corresponds to the value of magnetoelastic anisotropy induced with 0.1% externally applied in-plane strain in our experiments. As showed in Fig. 1 (d), we do not observe significant changes to \(K_{ME}\) with increasing \(t_{Gd}\). Again considering the SRT-boundary to be at \(E_{tot}=0\), the solution of Eq. 6 (that includes the magnetoelastic term) is reported in Fig. 2 (a) with an orange solid line. As expected from a material with negative magnetostriction, \(K_{ME}\) sums to \(K_{s}\) and the PMA is enhanced by in-plane strain. The SRT calculated including \(K_{ME}\) to Eq. 6 is consequently shifted to larger values of \(t_{Co}\). This trend is in agreement with the experimentally determined SRT when and external strain \(\epsilon_{xx}=0.1\%\) is applied (orange squares in Fig.2 (a)). Another way to visualize the SRT is solving Eq. 6 for fixed values of \(t_{Gd}\) and obtaining the critical thickness of \(t_{Co}\) such that \(E_{tot}=0\). Then, the SRT can be represented as a step function in the diagram of Fig. 2 (b), analogue to the MOKE remanence scan shown in Fig. 1 (d). The values of Gd thicknesses considered are \(t_{Gd}=0\), 1 and 3 nm and are plotted in Fig. 2 (b) with solid lines in black, blue and orange, in order. Solid lines consider \(K_{ME}=0\) mJ/m\({}^{2}\). Dashed lines consider instead \(K_{ME}=0.02\) mJ/m\({}^{2}\) in Fig. 2 (b). The information contained here can be correlated to the experimental remanent intensity scan in Fig. 1 (d). Comparing Fig. 2 (b) with Figure 2: (a) 2D phase diagram of the SRT of the a Ta(4nm)/Pt(4)/Co(\(t_{Co}\))/Gd(\(t_{Gd}\))/TaN(4) stack as a function of \(t_{Gd}\) (x) and \(t_{Co}\) (y). The axes have been inverted for a better comparison with other figures. Blue diamonds and red squares correspond to the experimental data. reported without and with strain applied, respectively. The solid lines indicate the calculated values using the model for the magnetoelastic and Eq. 6. A magnetoelastic anisotropy \(K_{ME}=0\) and 0.02 mJ/m\({}^{2}\) is considered, respectively, for the blue and orange curve. (b) Spin reorientation transition of a Ta(4)/Pt(4)/Co(\(t_{Co}\))/Gd(\(t_{Gd}\))/TaN(4) sample calculated for values of t\({}_{Gd}\)=0, 1 and 3 nm and plotted as a function of \(t_{Co}\). The SRT is represented here by a step function. Solid and dashed lines consider \(K_{ME}=0\) and 0.02 mJ/m\({}^{2}\), respectively. Fig. 1 (d), a similar behavior can be observed. Firstly we can note that the model predicts the SRT to shift when the thickness of the Gd layer is \(t_{Gd}>0\). Secondly, we observe a similar shift of the SRT point in Fig. 2 (b) and Fig. 1 (d) due to the effect of magnetoelastic anisotropy and of the external strain, respectively. As we expect from a material with negative magnetostriction, \(K_{s}\) adds to \(K_{ME}\), therefore the PMA is increased and the Co/Gd bilayer stays OOP magnetized for thicker Co (corresponding to larger \(E_{d,Co}\)). We confirm that the major effect of strain on the Ta(4 nm)/Pt(4)/ Co(0-2)/Gd(\(t_{Gd}\))/TaN(4) sample is the alteration of the PMA. Moreover, the estimated effective magnetostriction of the stack - \(\lambda_{s}=-10(5)\times 10^{-6}\) - is not significantly altered by the presence of the Gd layer in the thickness range considered. 
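The shift of the SRT boundary can be made explicit with a small numerical sketch of Eq. (6): with the \(E_{mix}\) term omitted for brevity, only \(E_{d,Co}\) of Eq. (2) depends on \(t_{Co}\), so the boundary follows in closed form. The values of \(M_{Gd}\), \(\lambda_{Gd}\), \(\Delta K\) and \(\lambda_{K}\) below are placeholders; the fitted values are given in the supplementary information.

```python
import numpy as np

mu0    = 4e-7 * np.pi
M_Co   = 1.3e6        # A/m, from the text
K_s    = 1.7e-3       # J/m^2, Pt/Co interfacial anisotropy, from the text
M_Gd   = 0.4e6        # A/m, effective interfacial Gd magnetization (placeholder)
lam_Gd = 0.5e-9       # m, decay length of the induced Gd magnetization (placeholder)
dK     = 0.3e-3       # J/m^2, anisotropy reduction amplitude (placeholder)
lam_K  = 0.5e-9       # m, decay length of that reduction (placeholder)

def t_Co_SRT(t_Gd, K_ME=0.0):
    """Co thickness at the SRT, from E_tot = 0 in Eq. (6) with E_mix neglected."""
    E_K   = K_s - dK * (1.0 - np.exp(-2.0 * t_Gd / lam_K))                        # Eq. (4)
    E_dGd = 0.25 * mu0 * M_Gd**2 * lam_Gd * (1.0 - np.exp(-2.0 * t_Gd / lam_Gd))  # Eq. (3)
    return (E_K + K_ME - E_dGd) / (0.5 * mu0 * M_Co**2)                           # E_d,Co = (1/2) mu0 M_Co^2 t_Co

for t_Gd in (0.0, 1e-9, 3e-9):
    print(t_Gd, t_Co_SRT(t_Gd), t_Co_SRT(t_Gd, K_ME=2e-5))   # unstrained vs K_ME = 0.02 mJ/m^2
```

With a positive \(K_{ME}\) the boundary moves to larger \(t_{Co}\), which is the trend observed experimentally in Fig. 1 (d).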
In this section, we examined the impact of in-plane strain on the effective PMA of a Co/Gd ferrimagnetic bilayer. Our results suggest negative magnetostriction of the stack for the investigated thickness values. We employ a recent model for the magnetostatics of these type of systems, where we include the effects of strain purely as magnetoelastic anisotropy. Our experimental findings are in good agreement with the predictions made by this model, providing deeper understanding of the response of this material platform to external strain. ### Magnetization compensation in quadlayer systems In ferrimagnets, magnetization compensation can be achieved. This occurs when the net magnetization \(\bar{M_{tot}}=\bar{M_{Gd}}+\bar{M_{Co}}\) vanishes because the magnetization, coming from the two sub-lattices, is equal in magnitude and opposite in sign. In recent studies, changes to the saturation magnetization in the presence of strain were reported in epitaxial films [31] and rare earth free ferrimagnets [50]. To study the effects of strain on magnetization compensation of synthetic ferrimagnets, we consider a quadlayer sample [44] consisting of Ta(4 nm)/Pt(4)/Co(0.6)/Gd(0-2)/Co(0.6)/Gd(1.5)/TaN(4) as schematically drawn in Fig. 3 (a). In this case, the thickness of the bottom Gd layer is varied between 0 and 2 nm over a few mm, whereas all the other layers have constant thickness. The reason for this choice is that compared to the Co/Gd bilayer, the magnetic volume of the Co is doubled while the number of Co/Gd interfaces where magnetization is induced in the Gd through direct exchange with the Co, is tripled. In this way magnetization compensation can be more readily achieved. The growing thickness of Gd, increases the contribution of \(\bar{M_{Gd}}\) to \(\bar{M_{tot}}\). For this reason, some areas of the wedge sample will be Co-dominated (for \(t_{Gd}<t_{comp}\)) and other will be Gd-dominated (for \(t_{Gd}>t_{comp}\)) with \(\bar{M_{tot}}=0\) at \(t_{Gd}=t_{comp}\). Here, \(t_{comp}\) is the thickness where magnetization compensation is obtained. At magnetization compensation two effects are expected: a divergence of the coercivity and a sign change in the remanent pMOKE signal (Kerr rotation, normalized to its value in absence of Gd). The measurements for coercivity and intensity are reported in Figs. 3 (b) and (c), respectively. The coercivity data were extracted from hysteresis loops measured across the wedge direction (along y). The reason for the sign change in the pMOKE signal, is the alignment of the Gd magnetization along the field direction, in the Gd dominated regime. We report magnetization compensation in this quad-layer for \(t_{Gd}=1.25\) nm. In a similar fashion to what we have done investigating the PMA in the bilayer system, we repeat the experiment in the presence of \(\varepsilon_{xx}=\)0.1% in-plane strain. The results are reported in orange in Fig. 3 (b) and (c). Remarkably, the compensation point of the Co/Gd quadlayer is unchanged by the application of this externally applied strain. Figs. 3 (d) and (e) contain OOP hysteresis loops of Ta(4 nm)/Pt(4)/Co(0.6)/Gd(\(t_{Gd}\))/Co(0.6)/Gd(1.5)/TaN(4) Figure 3: (a) Layerstack consisting of a Co/Gd quadlayer used to obtain magnetization compensation. (b) Coercivity and (c) remanent pMOKE intensity scan as a function of \(t_{Gd}\). Measurements before (blue) and after (orange) application of in-plane strain are reported. (d) Hysteresis loops in the Co dominated and (e) Gd dominated state. 
Both curves with (orange) and without (blue) in-plane strain applied are shown. samples for \(t_{Gd}=1.15\) nm and \(t_{Gd}=1.35\) nm, respectively, and further show the effects of magnetization compensation. The sample is in this case OOP magnetized. As the thickness of Gd is increased, the magnetization of the sample goes from Co dominated (Fig. 3 (d)) to Gd dominated (Fig. 3 (e)). The inversion of hysteresis loops happens because for \(t_{Gd}>1.25\)\(nm\) the Co-magnetization aligns antiparallel to the field, leading to the change in sign of the pMOKE signal. When the measurement is repeated in the presence of \(\epsilon_{xx}=0.1\%\) strain (orange line), no significant changes to the remanent intensity or coercivity are reported, if compared to the unstrained case (blue line). This suggests that magnetization compensation can be achieved in these multilayer systems in the presence of external strain and, most importantly, that the magnetization compensation point is unaffected. To explain this, we can consider earlier studies about magnetostatics of these types of systems. As previously reported [44; 25], magnetization compensation is due to the balance in Co magnetization and the Gd magnetization, induced in the Gd at the Co/Gd interfaces. In-plane strain in multilayer samples with PMA modifies spin orbit coupling within one layer [46], thus altering the magnetocrystalline anisotropy energy of the system [47]. On the other hand the total magnetic moment per unit area \(\vec{M}_{tot}\) in synthetic ferrimagnets is obtained by integrating the magnetization of the Co and Gd sublattices over the respective layer thicknesses. Accordingly, in a multilayer in-plane strain is not affecting the induced magnetic moment from the Co onto the Gd, thus not altering magnetization compensation. ## IV Conclusions This work reveals the effect that external strain has on PMA and magnetization compensation of Co/Gd systems at room temperature. Growing wedge samples, where the thickness of one of the magnetic layers was varied, has allowed us to determine thickness dependent transition in the magnetostatics of this multilayer system. Deliberate in-plane strain was applied to the sample. In a bilayer Pt/Co/Gd system, we experimentally show that a sizable magnetoelastic coupling changes the SRT in the presence of strain. The contribution of the strain-anisotropy for this system has been included in a model for the magnetostatics, describing the experimental observations well if an effective negative magnetostriction is considered. In a Pt/Co/Gd/Co/Gd quadlayer we obtain magnetization compensation of the two sub-lattices by varying the thickness of the bottom Gd layer. Here, we find that the application of in-plane strain does not affect the magnetization compensation. The induced magnetic moment from the Co onto the Gd, being an interface effect in a multilayer system, is not altered by such mechanical deformation. To conclude, this work provides a broad understanding of the magnetoelastic properties of these multilayer systems. As PMA and magnetic compensation are maintained in the presence of externally applied strain, this material system is a good candidate for technological implementation of ferrimagnets. ## Supplementary Material See supplementary material for magnetostatics model for the spin reorientation transition and for more details about the setup used for application of strain. ###### Acknowledgements. 
This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 860060 "Magnetism and the effect of Electric Field" (MagnEFi), the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - TRR 173 - 268565370 (project A01 and B02) and the Austrian Research Promotion Agency (FFG). The authors acknowledge support by the Max-Planck Graduate Centre with Johannes Gutenberg University. ## Author Declarations ### Conflict of interest The authors have no conflicts to disclose. ## Data Sharing policy The data that support the findings of this study are available from the corresponding author upon reasonable request.
2305.11079
Renormalization of the gluon distribution function in the background field formalism
We derive the Leading Order DGLAP evolution of gluon distribution function in the target light cone gauge starting from its standard operator definition. The derivation is performed using the background field formalism employed in the Color Glass Condensate effective theory of small $x$ QCD. We adopt Mandelstam-Leibbrandt prescription to regulate in an unambiguous way the spurious singularity appearing in the light-cone gauge Feynman propagator. UV divergences are regulated via conventional dimensional regularization. The methods introduced in this paper represent the first steps in the construction of a unified framework for QCD evolution, which could address collinear physics as well as small $x$ physics and gluon saturation.
Tolga Altinoluk, Guillaume Beuf, Jamal Jalilian-Marian
2023-05-18T16:08:03Z
http://arxiv.org/abs/2305.11079v1
# Renormalization of the gluon distribution function ###### Abstract We derive the Leading Order DGLAP evolution of gluon distribution function in the target light cone gauge starting from its standard operator definition. The derivation is performed using the background field formalism employed in the Color Glass Condensate effective theory of small \(x\) QCD. We adopt Mandelstam-Leibbrandt prescription to regulate in an unambiguous way the spurious singularity appearing in the light-cone gauge Feynman propagator. UV divergences are regulated via conventional dimensional regularization. The methods introduced in this paper represent the first steps in the construction of a unified framework for QCD evolution, which could address collinear physics as well as small \(x\) physics and gluon saturation. ## 1 Introduction pQCD-based collinear factorization formalism has been extremely successful in describing production of high \(p_{t}\) particles in high energy collisions. If it can be proven for a class of processes it guarantees a clean separation of perturbative from non-perturbative dynamics up to power suppressed corrections. An essential ingredient in this approach is the evolution (scale dependence) of parton distribution functions which are calculable perturbatively in powers of \(\alpha_{s}\). This evolution arises from renormalization of the parton distribution functions which exhibit the usual divergences present in relativistic quantum field theories. Nevertheless collinear factorization is expected to break down at very high energy (small \(x\)) due to the large gluon density in a proton or nucleus wave function generated by the fast rise of gluon distribution function. At such high gluon densities the concept of a quasi-free parton as envisioned by Feynman is not very useful and it may be more appropriate to describe this high occupancy state via semi-classical methods. Color Glass Condensate (CGC) formalism is an effective theory of QCD at small \(x\) that uses classical color fields to describe such a high occupancy state (see [1; 2; 3] and references therein). In this formalism and in the context of dilute-dense collisions in the so-called hybrid approach, appropriate to the forward rapidity kinematics, one considers scattering of a projectile parton in a dilute proton on the dense system of gluons described as a classical color field. The typical momentum exchanged in such a scattering is of the order of the saturation scale \(Q_{s}\) which roughly defines the border between dense and dilute regions of the target wave function. Such a formalism however can not be used at high transverse momenta since pQCD evolution of the hard scale becomes significant and one must use the collinear factorization formalism. As the current and the proposed future colliders have a large phase space in \(p_{t}\) and/or large \(x\) it is imperative to try to combine the two approaches into one unified formalism that has both low \(p_{t}\) (small \(x\)) and high \(p_{t}\) (large \(x\)) dynamics built in. While there is an obvious need for and significant rewards for deriving such a unified formalism it is also a daunting task due to the complexities of the calculations involved as well as the fact that the underlying approximations and assumptions are vastly different. 
There is already some work done towards this goal [4; 5; 6] where one includes scattering of a projectile parton not only from the small \(x\) gluons of the target described by a classical color field, but also from the large \(x\) gluon field of the target. One loop corrections to this leading order result would then lead to an evolution equation which should reduce to the DGLAP [7; 8; 9] and JIMWLK [10; 11; 12; 13; 14; 15; 16; 17] evolution equations in the appropriate limits. The treatment introduced in [4; 5; 6] goes beyond the standard eikonal approximation that is frequently adopted in CGC computations. Indeed, there have been many efforts to include subeikonal corrections in CGC computations over the last decade. In Refs. [18; 19] a subset of subeikonal corrections to the gluon propagator is computed at next-to-next-to-eikonal order. The effects of these corrections on various observables in proton-proton and in proton-nucleus collisions are studied in [20; 21; 22; 23; 24]. Subeikonal corrections to the quark propagator and their applications to various DIS observables have also been studied at next-to-eikonal accuracy [25; 26; 27; 28]. In [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44] quark and gluon helicity evolutions have been computed at next-to-eikonal accuracy. Subeikonal corrections to both quark and gluon propagators have been studied in the high-energy operator product expansion (OPE) formalism, and applied to study the polarized structure function \(g_{1}\) at low-\(x\) in [45; 46]. Apart from the aforementioned direct studies of the subeikonal corrections, in [47; 48; 49; 50] the rapidity evolution of transverse momentum dependent parton distributions (TMDs) that interpolates between low and moderate energies has been studied. A similar idea has been pursued in [51; 52] for the unintegrated gluon distributions. Finally, effects of subeikonal corrections are studied in the context of orbital angular momentum in [53; 54]. The main goal of this paper is to take the first steps towards understanding how the scale evolution of the collinear parton distributions can be embedded in the CGC effective theory of a dense target. We start with the standard operator definition of the gluon distribution function in the target light cone gauge and use the background field formalism to calculate the one loop corrections to the tree level result. Even though the target light cone gauge is not the standard one used in the CGC, it is the standard gauge used in the collinear factorization framework where the parton model is manifest. Moreover, we use the Mandelstam-Leibbrandt (ML) prescription since it provides an unambiguous treatment of the spurious singularity appearing in the Feynman propagator in the light cone gauge [55; 56; 57]. As expected, we encounter UV divergences which are then absorbed into the gluon distribution function, leading to its scale dependence (evolution). Hence we show that the DGLAP evolution of the gluon distribution function corresponds to the standard UV renormalization of a composite operator constructed from the background fields of the CGC effective theory. Finally, we provide a summary of the results and discuss the outlook. ## 2 General definitions and setup ### Gluon propagators in light-cone gauge In the present study, we adopt the light-cone gauge \[n\!\cdot\!A(x)=\!0 \tag{1}\] for the gluon field, where \(n^{\mu}\) is a fixed light-like vector, meaning that \(n^{2}=0\). 
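For orientation, light-cone components are used throughout; the normalization assumed here (an assumption, chosen to be consistent with the mass-shell factor \(2k^{+}k^{-}-\mathbf{k}^{2}\) appearing in Eq. (21) below) is
\[a^{\pm}=\frac{a^{0}\pm a^{3}}{\sqrt{2}}\,,\qquad a\!\cdot\!b=a^{+}b^{-}+a^{-}b^{+}-\mathbf{a}\!\cdot\!\mathbf{b}\,,\qquad k^{2}=2k^{+}k^{-}-\mathbf{k}^{2}\,,\]
with \(n^{\mu}\) chosen such that \(n\!\cdot\!a=a^{-}\) (cf. Eq. (11) below) and, for the second light-like vector \(\bar{n}^{\mu}\) introduced shortly, \(\bar{n}\!\cdot\!a=a^{+}\).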
By convention, \(n^{\mu}\) is oriented towards the future (\(n^{0}>0\)). The Feynman propagator for a gluon in vacuum, defined as the time-ordered correlator \[\langle 0|\mathcal{T}\left\{A_{a}^{\mu}(x)A_{b}^{\nu}(y)\right\}|0\rangle=\left[\mathbb{1}\right]_{ab}\ G_{0,F}^{\mu\nu}(x,y)=\left[\mathbb{1}\right]_{ab}\ \int\frac{d^{D}k}{(2\pi)^{D}}\,e^{-ik\cdot(x-y)}\,\tilde{G}_{0,F}^{\mu\nu}(k)\,, \tag{2}\] is obtained in the light-cone gauge (1) as \[\tilde{G}_{0,F}^{\mu\nu}(k)=\frac{i}{(k^{2}+i0^{+})}\,\left\{-g^{\mu\nu}+\frac{(k^{\mu}n^{\nu}+n^{\mu}k^{\nu})}{[n\!\cdot\!k]}\right\} \tag{3}\] in momentum space. In addition to the usual \(k^{2}\) denominator, regularized by the \(+i0^{+}\) in the standard way for Feynman propagators, there is an extra denominator \([n\!\cdot\!k]\) which requires a regularization as well, in order to fully specify the propagator (3). This issue is related to the residual gauge freedom left after imposing the light-cone gauge condition (1). In early studies in light-cone gauge QCD (in particular in Ref. [58]), the extra denominator was regularized with the Cauchy principal value. However, such a regularization leads to various complications, such as preventing Wick rotations from being performed, and the loss of the power counting criterion for the convergence of Feynman integrals. A better regularization for that denominator, compatible with Wick rotations and power counting, is the Mandelstam-Leibbrandt (ML) prescription [55; 56], defined as \[\frac{1}{[n\!\cdot\!k]}\equiv\frac{(\bar{n}\!\cdot\!k)}{(n\!\cdot\!k)(\bar{n}\!\cdot\!k)+i0^{+}}=\frac{1}{((n\!\cdot\!k)+i0^{+})}\,\theta(\bar{n}\!\cdot\!k)+\frac{1}{((n\!\cdot\!k)-i0^{+})}\,\theta(-\bar{n}\!\cdot\!k)\,, \tag{4}\] where \(\bar{n}^{\mu}\) is an additional light-like vector, with \(\bar{n}^{2}=n^{2}=0\) and \(\bar{n}\cdot n=1\). In Ref. [57], the Hamiltonian quantization of QCD has been performed in the light-cone gauge, leading unambiguously to the ML prescription (4) for the Feynman propagator. Interestingly, the Feynman propagator (3) with the ML prescription can be rewritten as \[\tilde{G}^{\mu\nu}_{0,F}(k)=\frac{i}{(k^{2}+i0^{+})}\,\left\{-g^{\mu\nu}+\frac{2(\bar{n}\!\cdot\!k)}{\mathbf{k}^{2}}(k^{\mu}n^{\nu}+n^{\mu}k^{\nu})\right\}-\frac{i}{\left(2(n\!\cdot\!k)(\bar{n}\!\cdot\!k)+i0^{+}\right)}\,\frac{2(\bar{n}\!\cdot\!k)}{\mathbf{k}^{2}}\,(k^{\mu}n^{\nu}+n^{\mu}k^{\nu}) \tag{5}\] in which the second term can be interpreted as a ghost propagating along a light-like direction, resulting from the residual gauge freedom in the light-cone gauge [57]. In the present study, not only the free Feynman propagator is needed, but also the free Wightman propagator, defined as \[\langle 0|A^{\mu}_{a}(x)A^{\nu}_{b}(y)|0\rangle=\left[\mathbb{1}\right]_{ab}\,G^{\mu\nu}_{0,W}(x,y)=\left[\mathbb{1}\right]_{ab}\,\int\frac{d^{D}k}{(2\pi)^{D}}\,e^{-ik\cdot(x-y)}\,\tilde{G}^{\mu\nu}_{0,W}(k)\,. \tag{6}\] In momentum space, the Wightman propagator is related to the positive \(k^{0}\) energy contribution to the discontinuity (when flipping \(+i0^{+}\) to \(-i0^{+}\)) of the Feynman propagator, as \[\tilde{G}^{\mu\nu}_{0,W}(k)=\theta(k^{0})\;2\text{Re}\left(\tilde{G}^{\mu\nu}_{0,F}(k)\right)\,. \tag{7}\] Then, using eq. (5) and the Sochocki-Plemelj theorem, one obtains \[\tilde{G}^{\mu\nu}_{0,W}(k)=\theta(k^{0})\,2\pi\delta(k^{2})\,\left\{-g^{\mu\nu}+\frac{2(\bar{n}\!\cdot\!k)}{\mathbf{k}^{2}}(k^{\mu}n^{\nu}+n^{\mu}k^{\nu})\right\}-\theta(k^{0})\,2\pi\delta\left(2(n\!\cdot\!k)(\bar{n}\!\cdot\!k)\right)\,\frac{2(\bar{n}\!\cdot\!k)}{\mathbf{k}^{2}}\,(k^{\mu}n^{\nu}+n^{\mu}k^{\nu})\,. \tag{8}\]
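The separation into the two structures appearing in Eqs. (5) and (8) follows from a simple partial fraction identity: since \(k^{2}=2(n\!\cdot\!k)(\bar{n}\!\cdot\!k)-\mathbf{k}^{2}\), one has
\[\frac{1}{(k^{2}+i0^{+})}\,\frac{1}{[n\!\cdot\!k]}=\frac{2(\bar{n}\!\cdot\!k)}{(k^{2}+i0^{+})\left(2(n\!\cdot\!k)(\bar{n}\!\cdot\!k)+i0^{+}\right)}=\frac{2(\bar{n}\!\cdot\!k)}{\mathbf{k}^{2}}\left[\frac{1}{(k^{2}+i0^{+})}-\frac{1}{\left(2(n\!\cdot\!k)(\bar{n}\!\cdot\!k)+i0^{+}\right)}\right]\,,\]
which isolates the pole at \(k^{2}=0\) from the light-like ghost pole at \((n\!\cdot\!k)(\bar{n}\!\cdot\!k)=0\).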
### Operator definition of the gluon distribution The gauge invariant operator definition of the gluon distribution function of the target reads \[G(\mathrm{x})=\frac{1}{\mathrm{x}P^{-}}\int\frac{d\Delta x^{+}}{(2\pi)}\,e^{-i\mathrm{x}P^{-}\Delta x^{+}}\,\langle P|F_{a}^{-i}(\Delta x^{+},0;0)\left[U_{A}(\Delta x^{+},0)\right]_{ab}F_{b}^{-i}(0,0;0)|P\rangle\,, \tag{9}\] up to UV renormalization issues, where \(\mathrm{x}>0\) is the fraction of the momentum \(P^{-}\) of the target carried by the gluon. In Eq. (9) the two field strength operators are connected in color by the adjoint gauge link operator \[U_{A}(\Delta x^{+},0)=\,\mathcal{P}_{+}\exp\left\{-ig\int_{0}^{\Delta x^{+}}\!\!\!\!\!dz^{+}\,T^{\mathrm{c}}\,A_{c}^{-}(z^{+},z^{-}=0;\mathbf{z}=0)\right\}\,, \tag{10}\] where \(\mathcal{P}_{+}\) indicates path ordering along the \(z^{+}\) direction. In the present study, for simplicity, we will restrict ourselves to the target light-cone gauge \[n\!\cdot\!A(x)\equiv\!A^{-}(x)=0\,, \tag{11}\] with the same light-like vector \(n^{\mu}\) specifying both the gauge choice and the main component of the momentum of the target.2 With that gauge choice, the gauge link (10) reduces to the identity matrix, and Eq. (9) becomes Footnote 2: Alternatively, one could choose the projectile light-cone gauge, with still \(n\!\cdot\!A(x)=0\) but now \(\bar{n}\!\cdot\!P\) instead of \(n\!\cdot\!P\) as the main component of the boosted target momentum. The projectile light-cone gauge is the most commonly used in the CGC literature, but the gauge link in the definition of the gluon distribution survives in that gauge. We plan to consider the case of projectile light-cone gauge as a future study. \[G(\mathrm{x})=\frac{1}{\mathrm{x}P^{-}}\int\frac{d\Delta x^{+}}{(2\pi)}\,e^{-i\mathrm{x}P^{-}\Delta x^{+}}\,\langle P|F_{a}^{-i}(\Delta x^{+},0;0)F_{a}^{-i}(0,0;0)|P\rangle\,. \tag{12}\] Moreover, in the target light-cone gauge, the field strength tensor components with one upper \(-\) index simplify as \[F^{-\nu}(x)=\,\partial^{-}A^{\nu}(x)=\partial_{+}A^{\nu}(x)\,. \tag{13}\] As a remark, since the target state with the same momentum \(P^{\mu}\) is applied on the left and on the right of the operator, the phase factors generated when applying a translation to each field strength compensate each other, and one has \[\langle P|F_{a}^{-i}(\Delta x^{+},0;0)F_{a}^{-i}(0,0;0)|P\rangle=\,\langle P|F_{a}^{-i}(x^{+}\!+\!\Delta x^{+},0;0)F_{a}^{-i}(x^{+},0;0)|P\rangle=\langle P|F_{a}^{-i}(0,0;0)F_{a}^{-i}(-\Delta x^{+},0;0)|P\rangle\,. \tag{14}\] Hence, Eq. (12) can be equivalently written as \[G(\mathrm{x})=\frac{1}{\mathrm{x}P^{-}}\int\frac{d\Delta x^{+}}{(2\pi)}\,e^{-i\mathrm{x}P^{-}\Delta x^{+}}\,\langle P|F_{a}^{-i}(0,0;0)F_{a}^{-i}(-\Delta x^{+},0;0)|P\rangle\] \[=\frac{1}{(2\pi)\mathrm{x}P^{-}}\,\frac{1}{\left[\int dx^{+}\right]}\int dx^{+}\int dx^{\prime+}e^{-i\mathrm{x}P^{-}(x^{\prime+}-x^{+})}\langle P|F_{a}^{-i}(x^{\prime+},0;0)F_{a}^{-i}(x^{+},0;0)|P\rangle\,. \tag{15}\] The expression (15) has a clear interpretation if one adopts light-front quantization, with \(n\!\cdot\!x=x^{-}\) replacing \(x^{0}\) as evolution variable. 
Then, due to the sign of the phases in Eq. (15) (with \(\mathrm{x}P^{-}>0\)), the partial Fourier transform selects the annihilation operator piece of the rightmost field strength and discards its creation operator piece. By contrast, the partial Fourier transform selects the creation operator piece of the leftmost field strength, and discards its annihilation operator piece. Hence, the rightmost field strength in Eq. (15) removes a gluon with light-cone momentum \(\mathrm{x}P^{-}\) from the target, and the leftmost field strength adds it back. In particular, in light-front quantization along \(n\!\cdot\!x=x^{-}\), all states have a positive (or vanishing) light-cone momentum \(k^{-}\), so that it is not possible to remove a gluon with a momentum \(\mathrm{x}P^{-}\) larger than the total momentum \(P^{-}\) of the target. For that reason,
\[\mathrm{x}>1\;\Rightarrow\;G(\mathrm{x})=0 \tag{16}\]
so that the gluon distribution \(G(\mathrm{x})\) has support \(0\leq\mathrm{x}\leq 1\). Although that property has been shown within light-front quantization, it should be valid in general, for any quantization procedure. Finally, an important feature of the operator definition (12) of the gluon distribution is that the field strength operators are not time ordered (as in the Feynman propagator (2)) but are instead always in the same order, as in the Wightman propagator (6). In the Schwinger-Keldysh formalism, Wightman propagators can be interpreted as correlators between an operator in the amplitude and an operator in the complex conjugate amplitude, through the final-state cut, whereas Feynman propagators are correlators within the amplitude only. This is consistent with the fact that collinear factorization and parton distributions arise only at the cross section level, not at the amplitude level. Then, the rightmost field strength in the operator definition (12) should be interpreted as belonging to the amplitude, and the leftmost field strength as belonging to the complex conjugate amplitude. ### Expansion around a background field In order to calculate the evolution of the gluon distribution (12) induced by UV renormalization, we use the background field method.3 Footnote 3: The DGLAP evolution was derived in Ref. [60] in the target light-cone gauge using the ML prescription as well, but using the formalism developed in Ref. [58], involving on-shell partons instead of background fields. Namely, we split the gluon field into a background contribution and a fluctuation contribution, as
\[A^{\mu}(x)=\mathcal{A}^{\mu}(x)+\delta A^{\mu}(x) \tag{17}\]
at the gauge field level and
\[F^{\mu\nu}(x)=\mathcal{F}^{\mu\nu}(x)+\delta F^{\mu\nu}(x) \tag{18}\]
at the field strength level, and then we integrate over the fluctuation contribution to first order in the coupling \(\alpha_{s}\) at the level of the gluon distribution. Since we apply the target light-cone gauge (11) both for the background and for the fluctuation fields, \(\mathcal{A}^{-}(x)=0\) and \(\delta A^{-}(x)=0\), one obtains both \(\mathcal{F}^{-\nu}(x)=\partial_{+}\mathcal{A}^{\nu}(x)\) and \(\delta F^{-\nu}(x)=\partial_{+}(\delta A^{\nu})(x)\). First, by neglecting entirely the fluctuation field and keeping only the background field in the definition (12), one obtains what we call the bare contribution to the gluon distribution
\[G^{(0)}(\mathrm{x})=\frac{1}{\mathrm{x}P^{-}}\int\frac{d\Delta x^{+}}{(2\pi)}\,e^{-i\mathrm{x}P^{-}\Delta x^{+}}\,\langle P|\mathcal{F}_{a}^{-i}(\Delta x^{+},0;0)\mathcal{F}_{a}^{-i}(0,0;0)|P\rangle\,.
\tag{19}\] Corrections beyond that bare contribution are obtained by expanding the correlator of two field strengths in the target hadron state as \[\langle P|F_{a}^{-\mu}(x^{\prime})F_{a}^{-\nu}(x)|P\rangle- \langle P|\mathcal{F}_{a}^{-\mu}(x^{\prime})\mathcal{F}_{a}^{-\nu}(x)|P\rangle\] \[=\langle P|\mathcal{F}_{a}^{-\mu}(x^{\prime})\delta F_{a}^{-\nu} (x)|P\rangle+\langle P|\delta F_{a}^{-\mu}(x^{\prime})\mathcal{F}_{a}^{-\nu}( x)|P\rangle+\langle P|\delta F_{a}^{-\mu}(x^{\prime})\delta F_{a}^{-\nu}(x)|P\rangle\] \[=\partial_{x^{+}}\langle P|\mathcal{F}_{a}^{-\mu}(x^{\prime})\; \delta A_{a}^{\nu}(x)|P\rangle+\partial_{x^{\prime+}}\langle P|\delta A_{a}^{ \mu}(x^{\prime})\;\mathcal{F}_{a}^{-\nu}(x)|P\rangle+\partial_{x^{\prime+}} \partial_{x^{+}}\langle P|\delta A_{a}^{\mu}(x^{\prime})\;\delta A_{a}^{\nu}( x)|P\rangle\,. \tag{20}\] Each term in Eq. (20) can be expanded as a series both in the QCD coupling \(g\) and in the background field. First, let us first consider the contributions to the first two terms in Eq. (20) in which the fluctuation field \(\delta A_{a}^{\nu}(x)\) is calculated (to all orders in \(g\)) at vanishing background field. This corresponds to the total vacuum tadpole contribution to \(\delta A_{a}^{\nu}(x)\). By color symmetry, the color factor of any such tadpole diagram with one single adjoint index has to vanish. Hence, all non-trivial contributions to the first two terms in Eq. (20) are at least quadratic in the background field overall, with one power of the background field inserted inside the fluctuation \(\delta A_{a}^{\nu}(x)\), beyond the explicit \(\mathcal{F}_{a}^{-\mu}(x^{\prime})\) factor. Let us now consider the third term in Eq. (20), at vanishing background field. Then, this correlator of fluctuations fields is the Wightman propagator in vacuum, but including corrections to all orders in \(g\). When expanding in \(g\) as well, the zeroth order contribution amounts simply to insert the free Wightman propagator (8) into Eq. (15), as \[\frac{1}{(2\pi)\,{\rm x}P^{-}\left[\int dx^{+}\right]}\int dx^{+} \int dx^{\prime+}e^{-i{\rm x}P^{-}(x^{\prime+}-x^{+})}\partial_{x^{\prime+}} \partial_{x^{\prime+}}\ \left[\mathbb{I}\right]_{aa}\int\frac{d^{D}k}{(2\pi)^{D}}\,e^{-ik^{-}(x^{ \prime+}-x^{+})}\ \tilde{G}^{ii}_{0,W}(k)\] \[= \frac{(N_{c}^{2}\!-\!1)({\rm x}P^{-})^{2}}{(2\pi)\,{\rm x}P^{-}} \int\frac{d^{D}k}{(2\pi)^{D}}\,2\pi\delta(k^{-}\!+\!{\rm x}P^{-})\,(D\!-\!2)\, \theta(k^{+})\,2\pi\delta\left(2k^{+}k^{-}\!-\!{\bf k}^{2}\right)=0\,. \tag{21}\] Indeed, the free Wightman propagator (8) imposes that \(k^{-}\geq 0\) for a particle present at the final state cut, whereas both the Fourier transforms with respect to \(x^{+}\) and \(x^{\prime+}\) impose that \(k^{-}<0\), since \({\rm x}P^{-}>0\). This argument extends to all orders in \(g\), for contributions independent of the background field. Indeed, at higher order in \(g\), one typically has several particles at the final state cut, each described by a free Wightman propagator forcing its momentum to obey \(k_{n}^{-}\geq 0\), whereas the Fourier transforms in \(x^{+}\) and \(x^{\prime+}\) forces the sum of these light-cone momenta to be strictly negative. In order to escape this argument, and obtain a non vanishing contribution to the gluon distribution from the third term in Eq. (20), one has to include background field insertions, in order to break the momentum conservation between the endpoints \(x\) and \(x^{\prime}\) of the correlator and the final state cut. 
More precisely, at least one background field insertion is required on each side of the final-state cut. For similar reasons, it is clear that in the first two terms in Eq. (20), contributions with all background field insertions on the same side of the cut will vanish at the level of the gluon distribution (15). So far, we have shown that all non-zero contributions to the gluon distribution (15) in the expansion (17) around the background field and at any order in \(g\) are at least quadratic in the background field, with at least one background field on each side of the cut, as expected for a parton distribution. In the following of the present study, we focus on the corrections which are exactly quadratic in the background field and of order \(g^{2}\), in order to check that we recover the standard LO DGLAP evolution of the gluon distribution from its UV renormalization. At that order, from Eq. (20), we have \[G({\rm x},\mu^{2})=G^{(0)}({\rm x})+G({\rm x},\mu^{2})\bigg{|}_{\rm virt.ampl. }^{\rm NLO}+G({\rm x},\mu^{2})\bigg{|}_{\rm virt.cc.ampl}^{\rm NLO}+G({\rm x },\mu^{2})\bigg{|}_{\rm real}^{\rm NLO}+O({\cal F}^{3})+O(\alpha_{s}^{2})\,, \tag{22}\] with the three NLO corrections beyond the bare term (19) coming respectively from the three terms in Eq. (20) and are represented diagrammatically on Fig. 1, and \(O({\cal F}^{3})\) represents the corrections of higher order in the background field. We use conventional dimensional regularization (CDR) to deal with the UV divergences, and the gluon distribution becomes dependent on the CDR scale \(\mu\). The first NLO contribution in Eq. (22) amounts to correct the background field on the amplitude side by inserting Figure 1: The three NLO corrections to the gluon distribution in the pure glue sector, see Eq. (22). In each diagram, the dashed line represent the final state cut. the 1-loop vacuum polarization tensor, as \[\delta A^{\mu}_{a}(x)\Big{|}_{\text{vac. pol.}}=\int\frac{d^{D}q}{(2\pi)^{D}}\;e^{iq \cdot x}\,\tilde{G}^{\nu\mu}_{0,F}(q)\;i\Pi^{ba}_{\rho\nu}(q)\;\int d^{D}y\,e^{- iy\cdot q}\,\mathcal{A}^{\rho}_{b}(y)\,, \tag{23}\] so that \[\delta F^{-\mu}_{a}(x)\Big{|}_{\text{vac. pol.}}=\partial_{x^{+}}\delta A^{\mu} _{a}(x)\Big{|}_{\text{vac. pol.}}=\int\frac{d^{D}q}{(2\pi)^{D}}\;e^{iq\cdot x} \,\tilde{G}^{\nu\mu}_{0,F}(q)\;i\Pi^{ba}_{\rho\nu}(q)\;\int d^{D}y\,e^{-iy\cdot q }\,\mathcal{F}^{-\rho}_{b}(y)\,, \tag{24}\] in the target light-cone gauge. At the level of the gluon distribution, this correction corresponds to \[G(\text{x},\mu^{2})\Big{|}_{\text{virtual};\,\text{ampl.}}^{\text {NLO}}= \,\frac{1}{(2\pi)\text{x}P^{-}}\;\frac{1}{\left[\int dx^{+}\right]} \int dx^{+}\int dx^{\prime+}e^{-i\text{x}P^{-}(x^{\prime+}-x^{+})}\int\frac{d ^{D}q}{(2\pi)^{D}}\,e^{iq-x^{+}}\,\tilde{G}^{\nu i}_{0,F}(q)\;i\Pi^{ba}_{\rho \nu}(q)\] \[\times\;\int d^{D}y\,e^{-iy\cdot q}\,\langle P|\mathcal{F}^{-i}_{ a}({x^{\prime}}^{+},0;0)\mathcal{F}^{-\rho}_{b}(y)|P\rangle\] \[= \,\int\frac{d^{D}q}{(2\pi)^{D}}\,\frac{2\pi\delta(q^{-}\!+\!\text {x}P^{-})}{(2\pi)\text{x}P^{-}}\;\tilde{G}^{\nu i}_{0,F}(q)\;i\Pi^{ba}_{\rho \nu}(q)\int d^{D}\Delta y\,e^{iq\cdot\Delta y}\,\langle P|\mathcal{F}^{-i}_{ a}(0)\mathcal{F}^{-\rho}_{b}(-\Delta y)|P\rangle \tag{25}\] using the invariance by translation of the matrix element (14) at zero momentum transfer. The contribution of the second diagram on Fig. 1, with the vacuum polarization tensor inserted on the leftmost field strength, or equivalently on the complex conjugate amplitude side is analog to the contribution (25). 
The calculation of the one-loop vacuum polarization tensor is presented in Section 4, as well as the evaluation of the contribution (25) to the gluon distribution at NLO. The third NLO contribution in Eq. (22), represented by the third diagram on Fig. 1, can be written as
\[G(\text{x},\mu^{2})\Big{|}_{\text{real}}^{\text{NLO}}=\frac{(\text{x}P^{-})^{2}}{(2\pi)\,\text{x}P^{-}\left[\int dx^{+}\right]}\int dx^{+}\int dx^{\prime+}e^{-i\text{x}P^{-}(x^{\prime+}-x^{+})}\delta^{ii^{\prime}}\left[\mathbb{1}\right]_{aa^{\prime}}\;\langle P|\,\delta A^{i^{\prime}}_{a^{\prime}}({x^{\prime+}},0;0)\delta A^{i}_{a}(x^{+},0;0)|P\rangle\Big{|}_{\mathcal{AA}}\,, \tag{26}\]
in which the correlator is the contribution to the gluon Wightman propagator in the target state with two insertions of the target background field strength, one on each side of the final state cut. The contribution (26) to the gluon distribution at NLO is evaluated in Section 3. ## 3 Real contributions We now calculate the real diagram shown as the third diagram on Fig. 1, corresponding to Eq. (26), and show how it contributes to the DGLAP evolution of the gluon distribution function defined above. We use the standard Feynman rules of QCD and write our expressions in \(D\) dimensions with the dimensionful parameter \(\mu\) included, since we use dimensional regularization to regulate the divergences that are expected to arise. The main building block in Eq. (26) is the Wightman propagator with two insertions of the gluon background field of the target, one on each side of the cut. It can be written as
\[\langle P|\delta A^{i^{\prime}}_{a^{\prime}}(x^{\prime})\delta A^{i}_{a}(x)|P\rangle\bigg{|}_{\mathcal{AA}}=\int\frac{d^{D}q}{(2\pi)^{D}}\int\frac{d^{D}k}{(2\pi)^{D}}\int\frac{d^{D}q^{\prime}}{(2\pi)^{D}}\int d^{D}y\int d^{D}y^{\prime}\;e^{-iq^{\prime}\cdot x^{\prime}}\,e^{iq\cdot x}\,e^{iy\cdot(k-q)}\,e^{-iy^{\prime}\cdot(k-q^{\prime})}\]
\[\times\;\tilde{G}^{i^{\prime}\nu^{\prime}}_{0,\bar{F}}(q^{\prime})\,(-1)V_{3g\nu^{\prime}\sigma^{\prime}\rho^{\prime}}(-q^{\prime},k,q^{\prime}\!-\!k)\;\tilde{G}^{\sigma^{\prime}\sigma}_{0,W}(k)V_{3g\nu\sigma\rho}(q,-k,k\!-\!q)\;\tilde{G}^{\nu i}_{0,F}(q)\;\langle P|\mathcal{A}^{\rho^{\prime}}_{c^{\prime}}(y^{\prime})\,\mathcal{A}^{\rho}_{c}(y)|P\rangle \tag{27}\]
Note that only the propagator of momentum \(q\) is entirely on the amplitude side, and is thus a free Feynman propagator given by Eq. (3). By contrast, the propagator of momentum \(k\) crosses the final state cut, and is thus the free Wightman propagator given by Eq. (8). Finally, the propagator of momentum \(q^{\prime}\) is entirely on the complex conjugate amplitude side of the cut. It is thus an anti-time-ordered propagator. In momentum space, this is simply the complex conjugate of the corresponding time-ordered Feynman propagator,
\[\tilde{G}^{\mu\nu}_{0,\bar{F}}(p)=\left(\tilde{G}^{\mu\nu}_{0,F}(p)\right)^{*}\,. \tag{28}\]
The real diagram from Fig. 1 contains two standard three-gluon vertices. However, one is on each side of the final state cut. The one on the complex conjugate amplitude side comes with an extra \((-1)\) factor, following the standard Schwinger-Keldysh formalism. We can rewrite the expression (27) in terms of gluon field strength tensors by multiplying and dividing by factors of \((k^{-}\!-\!q^{-})\) and \((k^{-}\!-\!q^{\prime-})\).
One can then convert these momenta into derivatives with respect to \(y^{+}\) and \(y^{\prime}{}^{+}\) acting on the fields \(\mathcal{A}^{e}_{c}(y)\) and \(\mathcal{A}^{\rho^{\prime}}_{c^{\prime}}(y^{\prime})\) after integration by parts, which in the light-cone gauge correspond to background field strengths. We also use the invariance by translation of the correlation function (see Eq. (14)) in order to write it as a function of the separation \(\Delta y\equiv y^{\prime}-y\) between the fields only. This allows us to to integrate over the "center of mass" coordinate \(y+y^{\prime}\) and use the resulting delta function to integrate over momentum \(q^{\prime}\) which sets it equal to \(q\). This leads to \[\langle P|\delta A^{i^{\prime}}_{a^{\prime}}(x^{\prime})\delta A ^{i}_{a}(x)|P\rangle\bigg{|}_{\mathcal{A}\mathcal{A}}=2g^{2}\,\mu^{2\epsilon} \,f^{abc}\,f^{a^{\prime}bc^{\prime}}\int\frac{d^{D}q}{(2\pi)^{D}}\,\int\frac{ d^{D}k}{(2\pi)^{D}}\,\tilde{G}^{i^{\prime}\nu^{\prime}}_{0,F}(q)\,\tilde{G}^{ \sigma^{\prime}\sigma}_{0,W}(k)\,\tilde{G}^{\nu i}_{0,F}(q)\] \[\times\Big{[}(q_{\rho}+k_{\rho})g_{\nu\sigma}+(q_{\nu}-2k_{\nu}) g_{\sigma\rho}+(k_{\sigma}-2q_{\sigma})g_{\nu\rho}\Big{]}\Big{[}(q_{\rho^{ \prime}}+k_{\rho^{\prime}})g_{\nu^{\prime}\sigma^{\prime}}+(q_{\nu^{\prime}}- 2k_{\nu^{\prime}})g_{\sigma^{\prime}\rho^{\prime}}+(k_{\sigma^{\prime}}-2q_{ \sigma^{\prime}})g_{\nu^{\prime}\rho^{\prime}}\Big{]}\] \[\times\frac{e^{-iq\cdot(x^{\prime}-x)}}{(k^{-}-q^{-})^{2}}\int d ^{D}\Delta y\,e^{-i(k-q)\cdot\Delta y}\,\langle P|\mathcal{F}^{-\rho^{\prime} }_{c^{\prime}}(\Delta y)\mathcal{F}^{-\rho}_{c}(0)|P\rangle\,, \tag{29}\] where \(D=4-2\epsilon\). Next, we insert the expression (29) into Eq. (26) and integrate over \(x^{+}\) and \(x^{\prime+}\). To proceed further we make the assumption that the field strength correlator does not depend on the \(\Delta y^{-}\) coordinate. Indeed, such dependence is a power suppressed effect in the high-energy limit [26] which should not be relevant in the derivation of the DGLAP evolution. At this stage, one obtains \[G(\mathrm{x},\mu^{2})\bigg{|}_{\mathrm{real}}^{\mathrm{NLO}}= \frac{g^{2}\,C_{A}\,\mu^{2\epsilon}}{(2\pi)\,\mathrm{x}P^{-}}\,\int\frac{d^{D}k }{(2\pi)^{D}}\,\left(\frac{\mathrm{x}P^{-}}{k^{-}\!+\!\mathrm{x}P^{-}}\right)^ {2}\int d\Delta y^{+}\int d^{D-2}\Delta\mathbf{y}\langle P|\mathcal{F}^{- \rho^{\prime}}_{a}(\Delta y^{+},0;\Delta\mathbf{y})\mathcal{F}^{-\rho}_{a}(0,0 ;0)|P\rangle\] \[\times\int\frac{d^{D}q}{(2\pi)^{D}}\,2\pi\delta(q^{+}\!-\!k^{+}) \,2\pi\delta(q^{-}\!+\!\mathrm{x}P^{-})\,e^{-i(k^{-}\!+\!\mathrm{x}P^{-})\Delta y ^{+}}\,e^{i(\mathbf{k}-\mathbf{q})\cdot\boldsymbol{\Delta y}}\frac{\tilde{G}^{ \sigma^{\prime}\sigma}_{0,W}(k)}{\big{[}-2\mathrm{x}P^{-}k^{+}-\mathbf{q}^{2} +i0^{+}\big{]}\big{[}-2\mathrm{x}P^{-}k^{+}-\mathbf{q}^{2}-i0^{+}\big{]}}\] \[\times\bigg{\{}-g_{\sigma}{}^{i}(q_{\rho}\!+\!k_{\rho})-g_{\rho} {}^{i}(k_{\sigma}\!-\!2q_{\sigma})+2g_{\sigma\rho}\Big{[}\mathbf{k}^{i}\!-\! \frac{k^{-}}{q^{-}}\mathbf{q}^{i}\Big{]}\bigg{\}}\bigg{\{}-g_{\sigma}{}^{i}(q_{ \rho^{\prime}}\!+\!k_{\rho^{\prime}})-g_{\rho^{\prime}}{}^{i}(k_{\sigma^{\prime} }\!-\!2q_{\sigma^{\prime}})+2g_{\sigma^{\prime}\rho^{\prime}}\Big{[}\mathbf{k}^ {i}\!-\!\frac{k^{-}}{q^{-}}\mathbf{q}^{i}\Big{]}\bigg{\}} \tag{30}\] Due to the specific form of the cut propagator in Eq. (8), it is more convenient to consider the two parts of the cut propagator separately, labeled as metric and gauge parts. 
We first consider the gauge part, simplifying the Lorentz algebra and using the delta functions to perform the \(q^{+},q^{-}\) integrals leads to \[G(\mathrm{x},\mu^{2})\bigg{|}_{\mathrm{real;gauge\,art.}}^{\mathrm{ NLO}}=\frac{g^{2}\,C_{A}\,\mu^{2\epsilon}}{(2\pi)\,\mathrm{x}P^{-}}\,\int d \Delta y^{+}\int d^{D-2}\Delta\mathbf{y}\ \langle P|\mathcal{F}_{a}^{-\rho^{\prime}}(\Delta y^{+},0; \Delta\mathbf{y})\mathcal{F}_{a}^{-\rho}(0,0;0)|P\rangle\] \[\times\int\frac{d^{D}k}{(2\pi)^{D}}\,e^{-i(k^{-}+\mathrm{x}P^{-}) \Delta y^{+}}\,\left(\frac{\mathrm{x}P^{-}}{k^{-}+\mathrm{x}P^{-}}\right)^{2} \,\int\frac{d^{D-2}\mathbf{q}}{(2\pi)^{D-2}}\,\frac{e^{i(\mathbf{k}-\mathbf{q} )\cdot\Delta\mathbf{y}}}{(2\mathrm{x}P^{-}k^{+}+\mathbf{q}^{2})^{2}}\,\frac{ \theta(k^{+})}{\mathbf{k}^{2}}\,2\pi\Big{[}\delta\Big{(}k^{-}-\frac{\mathbf{ k}^{2}}{2k^{+}}\Big{)}-\delta(k^{-})\Big{]}\] \[\times(k^{-}+2\mathrm{x}P^{-})\Big{\{}2g_{\rho}^{\ \ i}g_{\rho^{\prime}}^{\ Note that the \(\theta(z-{\rm x})\) is obtained from the property (16), due to the dependence on \({\rm x}P^{-}/z\) instead of \({\rm x}P^{-}\) in the phase. The UV divergent piece (with its accompanying \(\mu^{2\epsilon}\) factor) is independent of the internal transverse momentum \({\bf l}\) which then allows us to perform the integral over \({\bf l}\) resulting in a delta function of transverse distances. This forces the transverse separation \(\Delta{\bf y}\) between the two field strength tensors to go to zero as is the case in the standard definition of the gluon distribution function. It is important to notice that we have not made this assumption but that it is the result of the calculation. Defining the \(+\) prescription as \[\int_{\rm x}^{1}dz\,\frac{f(z)}{(1-z)_{+}}\equiv\int_{0}^{1}dz\,\frac{[f(z) \theta(z\!-\!{\rm x})-f(1)]}{(1-z)} \tag{37}\] as usual, the contribution of the gauge dependent piece of the cut propagator the evolution of gluon distribution function can then be written as \[G({\rm x},\mu^{2})\bigg{|}^{\rm NLO}_{\rm real;gauge\,art\,;\,UV}=\frac{\alpha _{s}\,C_{A}}{\pi}\,\left\{\frac{\mu^{2\epsilon}}{\epsilon}\int_{\rm x}^{1} \frac{dz}{z}\frac{1}{2}\bigg{[}\frac{z(1+z)}{(1-z)_{+}}+1-z^{2}\bigg{]}G^{(0) }\Big{(}\frac{{\rm x}}{z}\Big{)}+O(\epsilon^{0})\right\} \tag{38}\] where the factors common with the tree level definition in Eq. (19) are factored out. Eq. (38) represents the UV contribution of the gauge dependent part of the cut propagator (8) in the real NLO correction to the gluon distribution function. We now consider the contribution of the metric part of the free Wightman propagator, Eq. (8) to the real NLO diagram. Most of the intermediate steps are identical to the gauge dependent part that was shown in great detail. 
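As an aside, the \(+\) prescription (37) can be checked numerically from its subtracted form; the Python sketch below does so for the simple choice \(f(z)=z\) and \(\mathrm{x}=0.3\) (both arbitrary, used only for illustration), for which the closed-form answer is \(\ln(1-\mathrm{x})-(1-\mathrm{x})\).

```python
import numpy as np
from scipy.integrate import quad

# Plus prescription of Eq. (37):
#   int_x^1 dz f(z)/(1-z)_+  :=  int_0^1 dz [f(z)*theta(z - x) - f(1)] / (1 - z).
# Illustrative check with f(z) = z and x = 0.3 (both arbitrary choices);
# the closed-form result is log(1 - x) - (1 - x).
f = lambda z: z
x = 0.3

integrand = lambda z: ((f(z) if z >= x else 0.0) - f(1.0)) / (1.0 - z)
numeric, _ = quad(integrand, 0.0, 1.0, points=[x], limit=500)
exact = np.log(1.0 - x) - (1.0 - x)
print(numeric, exact)  # both ~ -1.0567
```

The subtraction removes the would-be divergence at \(z\to 1\), so the integrand stays bounded and the regularized integral is finite.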
After doing some of the momentum integrations using the delta functions we get \[G({\rm x},\mu^{2})\bigg{|}^{\rm NLO}_{\rm real;\,metric} =\frac{g^{2}\,C_{A}\,\mu^{2\epsilon}}{(2\pi)\,{\rm x}P^{-}}\int d \Delta y^{+}\int d^{2-2\epsilon}\Delta{\bf y}\,\langle P|{\cal F}_{a}^{-\rho ^{\prime}}(\Delta y^{+},0;\Delta{\bf y}){\cal F}_{a}^{-\rho}(0,0;0)|P\rangle\] \[\times\int\frac{d^{4-2\epsilon}\,k}{(2\pi)^{4-2\epsilon}}\,2\pi \delta\Big{(}k^{+}-\frac{{\bf k}^{2}}{2k^{-}}\Big{)}\,\frac{\theta(k^{-})}{2 k^{-}}\bigg{(}\frac{{\rm x}P^{-}}{k^{-}+{\rm x}P^{-}}\bigg{)}^{2}\,e^{-i(k^{-}+{ \rm x}P^{-})}\Delta y^{+}\int\frac{d^{2-2\epsilon}\,{\bf q}}{(2\pi)^{2-2 \epsilon}}\frac{e^{i({\bf k}-{\bf q})\cdot\Delta{\bf y}}}{[2{\rm x}P^{-}k^{+ }+{\bf q}^{2}]^{2}}\] \[\times\bigg{\{}g_{\rho}^{\ i}g_{\rho^{\prime}}^{\ i}\bigg{[}4 \Big{(}{\bf k}+\frac{k^{-}}{{\rm x}P^{-}}{\bf q}\Big{)}^{2}+2\Big{(}2{\rm x}P ^{-}k^{+}+{\bf q}^{2}\Big{)}+2({\bf k}-{\bf q})^{2}\bigg{]}+2(1-\epsilon)(k^{ -}-{\rm x}P^{-})^{2}g_{\rho}^{\ +}g_{\rho^{\prime}}^{\ +}\] \[\qquad+\big{(}g_{\rho}^{\ i}g_{\rho^{\prime}}^{\ j}+g_{\rho}^{\ j }g_{\rho^{\prime}}^{\ j}\Big{)}\bigg{[}\bigg{(}{\bf q}^{i}+{\bf k}^{i}-2 \Big{(}{\bf k}^{i}+\frac{k^{-}}{{\rm x}P^{-}}{\bf q}^{i}\Big{)}\bigg{)}(2{\bf k }^{j}-{\bf q}^{j})-\epsilon({\bf k}^{i}+{\bf q}^{i})({\bf k}^{j}+{\bf q}^{j}) \bigg{]}\] \[\qquad+\big{(}g_{\rho}^{\ i}g_{\rho^{\prime}}^{\ i}+g_{\rho}^{\ +}g_{\rho^{\prime}}^{\ j}\Big{)}\bigg{[}2(2k^{-}{\rm x}P^{-})\Big{(}{\bf k}^{i }+\frac{k^{-}}{{\rm x}P^{-}}{\bf q}^{i}\Big{)}-(k^{-}-{\rm x}P^{-})\Big{(}3{\bf k }^{i}-2\epsilon({\bf q}^{i}+{\bf k}^{i})\Big{)}\bigg{]}\bigg{\}} \tag{39}\] In this case, we can define the momentum fraction \(z\) as \[z=\frac{{\rm x}P^{-}}{k^{-}+{\rm x}P^{-}} \tag{40}\] for simplicity and perform changes of variables from \(k^{-}\) to \(z\) and then from transverse momenta \(({\bf k},{\bf q})\) to \(({\bf K},{\bf l})\) as before, to get \[G({\rm x},\mu^{2})\bigg{|}_{\rm real;\,metric}^{\rm NLO} =\frac{\alpha_{s}\,C_{A}}{(2\pi)\,{\rm x}P^{-}}\int_{\rm x}^{1}dz \,\int d\Delta y^{+}\,e^{-i\frac{{\rm x}P^{-}}{z}\Delta y^{+}}\int d^{2-2\epsilon }\Delta{\bf y}\,\langle P|{\cal F}_{a}^{-\rho^{\prime}}(\Delta y^{+},0;\Delta{ \bf y}){\cal F}_{a}^{-\rho}(0,0;0)|P\rangle\] \[\times\int\frac{d^{2-2\epsilon}}{(2\pi)^{2-2\epsilon}}\,e^{i \Lambda{\bf y}}\,\mu^{2\epsilon}\int\frac{d^{2-2\epsilon}{\bf K}}{(2\pi)^{2-2 \epsilon}}\,\frac{z(1-z)}{[{\bf K}^{2}+z(1-z){\bf l}^{2}]^{2}}\] \[\times\bigg{\{}g_{\rho}^{\ i}g_{\rho^{i}}\bigg{[}4\frac{{\bf K}^{ 2}}{z^{2}}+\frac{2}{(1-z)}\Big{(}{\bf K}^{2}+z(1-z){\bf l}^{2}\Big{)}+2{\bf l}^ {2}\bigg{]}+2(1-\epsilon)\frac{(1-2z)^{2}}{z^{2}}({\rm x}P^{-})^{2}g_{\rho}^{ \ +}g_{\rho^{\prime}}^{\ +}\] \[\qquad+\big{(}g_{\rho}^{\ i}g_{\rho^{j}}^{\ +}+g_{\rho}^{\ j}g_{\rho^{j}} \big{)}\Big{[}-2\frac{(1-z)}{z}{\bf K}^{i}{\bf K}^{j}+(1-2z)(2-z){\bf l}^{i}{ \bf l}^{j}-4\epsilon\,{\bf K}^{i}{\bf K}^{j}-\epsilon(1-2z)^{2}{\bf l}^{i}{ \bf l}^{j}\Big{]}\] \[\qquad+\big{(}g_{\rho}^{\ i}g_{\rho^{\prime}}^{\ +}+g_{\rho}^{\ +}g_{\rho}^{\ j} \big{)}\frac{{\rm x}P^{-}}{z}(-1)(1-2z){\bf l}^{i}\big{[}3(1-z)-2\epsilon(1-2 z)\big{]}\bigg{\}} \tag{41}\] The transverse momentum integration over \({\bf K}\) can be again carried out using dimensional regularization techniques and gives the UV divergent piece of the integral as \[G({\rm x},\mu^{2})\bigg{|}_{\rm real;\,metric;\,UV}^{\rm NLO} =\frac{\alpha_{s}\,C_{A}}{(2\pi)\,{\rm x}P^{-}}\int_{\rm x}^{1}dz \,\int d\Delta y^{+}\,e^{-i\frac{{\rm x}P^{-}}{z}\Delta y^{+}}\int d^{2-2 
\epsilon}\Delta{\bf y}\,\langle P|{\cal F}_{a}^{-\rho^{\prime}}(\Delta y^{+},0;\Delta{\bf y}){\cal F}_{a}^{-\rho}(0,0;0)|P\rangle\] \[\times\int\frac{d^{2-2\epsilon}{\bf l}}{(2\pi)^{2-2\epsilon}}\,e ^{i\Lambda{\bf y}}\,\frac{1}{4\pi}z(1-z)\frac{\Gamma(1+\epsilon)}{\epsilon} \bigg{[}\frac{z(1-z){\bf l}^{2}}{4\pi\mu^{2}}\bigg{]}^{-\epsilon}\] \[\times\bigg{\{}g_{\rho}^{\ i}g_{\rho^{j}}\bigg{[}\frac{4}{z^{2}}+ \frac{2}{(1-z)}\bigg{]}+\big{(}g_{\rho}^{\ i}g_{\rho^{j}}^{\ j}+g_{\rho}^{\ j}g_{\rho^{j}} \big{)}\frac{\delta^{ij}}{2(1-\epsilon)}\bigg{[}-2\frac{(1-z)}{z}-4\epsilon \bigg{]}\bigg{\}} \tag{42}\] As before the divergent piece of the result is independent of the intrinsic transverse momentum \({\bf l}\) which allows one to perform the \({\bf l}\) integral forcing \(\Delta{\bf y}\to 0\). The UV divergent piece of the contributions of the metric part of the cut propagator can then be written as \[G({\rm x},\mu^{2})\bigg{|}_{\rm real;\,metric;\,UV}^{\rm NLO}=\frac{\alpha_{s} \,C_{A}}{\pi}\,\left\{\frac{\mu^{2\epsilon}}{\epsilon}\int_{\rm x}^{1}\frac{ dz}{z}\bigg{[}\frac{(1-z)}{z}+\frac{z}{2}-\frac{(1-z)^{2}}{2}\bigg{]}G^{(0)} \Big{(}\frac{{\rm x}}{z}\Big{)}+O(\epsilon^{0})\right\} \tag{43}\] Adding this to the UV divergent piece of the contributions of the gauge dependent part as given in Eq. (38) the total UV contribution by the real diagram is \[G({\rm x},\mu^{2})\bigg{|}_{\rm real;\,UV}^{\rm NLO}=\frac{\alpha_{s}\,C_{A}}{ \pi}\,\left\{\frac{\mu^{2\epsilon}}{\epsilon}\int_{\rm x}^{1}\frac{dz}{z}\bigg{[} \frac{z}{(1-z)_{+}}+\frac{(1-z)}{z}+z(1-z)\bigg{]}G^{(0)}\Big{(}\frac{{\rm x}} {z}\Big{)}+O(\epsilon^{0})\right\}\,. \tag{44}\] On the one hand, the \(1/\epsilon\) UV pole contribution can be compensated by adjusting the bare gluon distribution \(G^{(0)}({\rm x})\), as part of the renormalization of the gluon distribution as a composite operator. On the other hand, one can obtain the contribution to the DGLAP evolution of the gluon distribution from the real diagram by taking a derivative of Eq. (44) with respect to \(\mu^{2}\), multiply by \(\mu^{2}\) and then the limit \(\epsilon\to 0\). Finally, it is possible to replace the bare distribution in the right hand side of the equation by the full one (up to terms of higher orders in \(\alpha_{s}\)), following Eq. (22). One thus obtains \[\mu^{2}\partial_{\mu^{2}}G({\rm x},\mu^{2})\bigg{|}_{\rm real}=\frac{\alpha_{s} \,C_{A}}{\pi}\,\int_{\rm x}^{1}\frac{dz}{z}\bigg{[}\frac{z}{(1-z)_{+}}+\frac{(1 -z)}{z}+z(1-z)\bigg{]}G\left(\frac{{\rm x}}{z},\mu^{2}\right)+O(\alpha_{s}^{2})\,. \tag{45}\] This is the contribution of the real diagram to the LO DGLAP evolution of the gluon distribution function as defined as Eq. (12). Contrary to a widespread belief, the \(+\) prescription regularizing the would-be divergence at \(z\to 1\) is obtained by direct calculation from the real diagram alone, and is entirely unrelated to the virtual diagrams. It is instead a consequence of the \(\delta(k^{-})\) term present in the Wightman propagator (8), which is required by consistency with the ML prescription (4) for the Feynman propagator (see Ref. [60]). ## 4 Virtual contributions In this section, the calculation of the gluon vacuum polarization tensor in light-cone gauge is recalled, and the corresponding virtual NLO corrections to the gluon distribution are obtained. ### Gluon vacuum polarization tensor at NLO in light-cone gauge #### 4.1.1 Tadpole diagram The tadpole diagram (on the left of Fig. 
2) might a priori contribute to the gluon vacuum polarization tensor. It writes \[i\Pi^{ba}_{\rho\nu}(q)\bigg{|}_{\text{tad.}}=\frac{1}{2}\int\frac{d^{D}k}{(2\pi )^{D}}\;V_{4g\nu\rho\sigma\lambda}\;\delta^{cd}\;\tilde{G}^{\sigma\lambda}_{0,F}(k)\,, \tag{46}\] where the \(1/2\) is the symmetry factor associated with the tadpole loop, \(V_{4g\nu\rho\sigma\lambda}\;\) is the QCD four-gluon vertex, and we use the Feynman propagator in light-cone gauge (3) with the ML prescription (4). Performing the numerator algebra, one arrives at \[i\Pi^{ba}_{\rho\nu}(q)\bigg{|}_{\text{tad.}}=[\mathbb{1}]_{ba}\;\;i\,g^{2}\,C_ {A}\,\mu^{4\!-\!D}\int\frac{d^{D}k}{(2\pi)^{D}}\;\frac{i}{(k^{2}+i0^{+})}\, \left[(D\!-\!3)g_{\rho\nu}+\frac{(n_{\rho}k_{\nu}+k_{\rho}n_{\nu})}{[n\!\cdot\! k]}\right]\,. \tag{47}\] In dimensional regularization, one has the vanishing of the standard scaleless integral \[\mu^{4\!-\!D}\int\frac{d^{D}k}{(2\pi)^{D}}\;\;\frac{i}{(k^{2}+i0^{+})}=0\,, \tag{48}\] so that the first term in Eq. (47) vanishes. By contrast, the second term in Eq. (47) involves a vector integral with the denominator \([n\!\cdot\!k]\). Such integral in the ML prescription is a priori a linear combination of the two available vectors \(n_{\nu}\) and \(\bar{n}_{\nu}\), by Lorentz symmetry. Moreover, integrals with the ML prescription preserve homogeneity with respect to \(n_{\nu}\) and with respect to \(\bar{n}_{\nu}\). Hence, in this case, the only possible contribution is of the form \[\mu^{4\!-\!D}\int\frac{d^{D}k}{(2\pi)^{D}}\;\frac{i}{(k^{2}+i0^{+})}\;\frac{k_ {\nu}}{[n\!\cdot\!k]}=C\,\frac{\bar{n}_{\nu}}{\bar{n}\cdot n}\,, \tag{49}\] Figure 2: The two diagrams contributing to the vacuum polarization tensor at one loop in the pure glue sector. with a constant coefficient \(C\). Multiplying the relation (49) with \(n^{\nu}\), one finds \[C= n^{\nu}\,\mu^{4\!-\!D}\int\frac{d^{D}k}{(2\pi)^{D}}\ \frac{i}{(k^{2}+i0^{+})}\ \frac{k_{\nu}}{[n\!\cdot\!k]}=\mu^{4\!-\!D}\int\frac{d^{D}k}{(2\pi)^{D}}\ \frac{i}{(k^{2}+i0^{+})}=0\,, \tag{50}\] thanks to Eq. (48). Hence, with the joint use of CDR and the ML prescription, the vector integral appearing in Eq. (47) vanishes identically as well.4 Hence, the total contribution from the tadpole diagram on the left of Fig. 2 vanishes : Footnote 4: This result can be checked explicitly by realizing that both poles in \(n\!\cdot\!k\) always lie on the same side of the real axis, thanks to the ML prescription. \[i\Pi^{ba}_{\mu\nu}(q)\bigg{|}_{\text{tad.}}= \,0\,. \tag{51}\] #### 4.1.2 Bubble diagram contribution to the gluon vacuum polarization tensor The only one-loop contribution to the gluon vacuum polarization tensor is thus the so-called bubble diagram on the right of Fig. 2. 
Its expression is given by \[i\Pi^{ba}_{\mu\nu}(q)\bigg{|}_{\text{bub.}}=\frac{1}{2}\int\frac{d^{D}k}{(2 \pi)^{D}}\ V_{3g\,_{\nu\sigma\lambda}^{\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, All in all, one finds the expression \[i\Pi^{ba}_{\rho\nu}(q)\bigg{|}_{\text{bub.}} =\left[\mathbb{1}\right]_{ba}\,i\,g^{2}\,C_{A}\bigg{\{}\Big{[}q_{ \rho}q_{\nu}\!-\!q^{2}g_{\rho\nu}\Big{]}\Big{[}\frac{11}{3}+\frac{1}{9}\epsilon+ O(\epsilon^{2})\Big{]}\mathcal{B}_{0}(q)\] \[\quad-2(n\!\cdot\!q)\bigg{[}g_{\rho\sigma}\!-\!\frac{n_{\rho}q_{ \sigma}}{(n\!\cdot\!q)}\bigg{]}\bigg{[}g_{\mu\nu}\!-\!\frac{q_{\mu}n_{\nu}}{(n \!\cdot\!q)}\bigg{]}\bigg{[}q^{\sigma}\,\mathcal{I}^{\mu}(q)+q^{\mu}\,\mathcal{ I}^{\sigma}(q)\bigg{]}\] \[\quad+4(n\!\cdot\!q)\bigg{[}\left(q_{\rho}\!-\!\frac{q^{2}\,n_{ \rho}}{(n\!\cdot\!q)}\right)\left(q_{\nu}\!-\!\frac{q^{2}\,n_{\nu}}{(n\!\cdot \!q)}\right)-\left(q_{\rho}q_{\nu}\!-\!q^{2}g_{\rho\nu}\right)\bigg{]} \mathcal{I}_{0}(q)\bigg{\}} \tag{58}\] with the notations \[\mathcal{I}_{0}(q)\equiv \,\mu^{4\!-\!D}\int\frac{d^{D}k}{(2\pi)^{D}}\,\frac{i}{(k^{2}+i0^ {+})\left((k\!-\!q)^{2}+i0^{+}\right)}\,\frac{1}{[n\!\cdot\!k]} \tag{59}\] \[\mathcal{I}^{\mu}(q)\equiv \,\mu^{4\!-\!D}\int\frac{d^{D}k}{(2\pi)^{D}}\,\frac{i}{(k^{2}+i0^ {+})\left((k\!-\!q)^{2}+i0^{+}\right)}\,\frac{k^{\mu}}{[n\!\cdot\!k]}\,. \tag{60}\] As a remark, the one-loop contribution (58) is manifestly transverse \[q^{\rho}i\Pi^{ba}_{\rho\nu}(q)\bigg{|}_{\text{bub.}}=0\,, \tag{61}\] as required from Ward identity [56; 62]. Obviously, one has the relation \(n_{\mu}\mathcal{I}^{\mu}(q)=\mathcal{B}_{0}(q)\). With the ML prescription (4), one can calculate \(\mathcal{I}_{0}(q)\) and the other components of \(\mathcal{I}^{\mu}(q)\), which are finite at \(D=4\) and thus will not be needed in the present study. Hence, the vector integral \(\mathcal{I}^{\mu}(q)\) obeys \[\mathcal{I}^{\mu}(q)=\frac{\bar{n}^{\mu}}{\bar{n}\cdot n}\,\mathcal{B}_{0}(q) +\text{finite}+O(\epsilon)\,, \tag{62}\] and the UV divergent part of the expression (58) can be isolated as \[i\Pi^{ba}_{\rho\nu}(q)\bigg{|}_{\text{bub.}} =\left[\mathbb{1}\right]_{ba}\,i\,g^{2}\,C_{A}\bigg{\{}\frac{11}{3} \Big{[}q_{\rho}q_{\nu}\!-\!q^{2}g_{\rho\nu}\Big{]}\mathcal{B}_{0}(q)-2\frac{(n \!\cdot\!q)}{(n\!\cdot\!n)}\bigg{[}q_{\rho}\!-\!\frac{q^{2}\,n_{\rho}}{(n\! 
\cdot\!q)}\bigg{]}\bigg{[}\bar{n}_{\nu}\!-\!\frac{(\bar{n}\!\cdot\!q)n_{\nu}}{( n\!\cdot\!q)}\bigg{]}\mathcal{B}_{0}(q)\] \[\quad-2\frac{(n\!\cdot\!q)}{(\bar{n}\!\cdot\!n)}\bigg{[}\bar{n}_{ \rho}\!-\!\frac{(\bar{n}\!\cdot\!q)n_{\rho}}{(n\!\cdot\!q)}\bigg{]}\Big{[}q_{ \nu}\!-\!\frac{q^{2}\,n_{\nu}}{(n\!\cdot\!q)}\bigg{]}\mathcal{B}_{0}(q)+ \text{finite}+O(\epsilon)\bigg{\}} \tag{63}\] which is indeed the result found in Ref. [56]. #### 4.1.3 Gluon field renormalization and counterterms The first UV divergent contribution in Eq. (63) can be compensated by the standard counterterm for gluon field renormalization, which is both local and Lorentz covariant. For example in the \(\overline{MS}\) scheme, such counterterm corresponds to a contribution \[i\Pi^{ba}_{\rho\nu}(q)\bigg{|}_{\text{std. c.t.}}=\left[\mathbb{1}\right]_{ba}\, \frac{i\,g^{2}\,C_{A}}{(4\pi)^{2}}\frac{11}{3}\Big{[}q_{\rho}q_{\nu}\!-\!q^{2}g _{\rho\nu}\Big{]}\bigg{[}\frac{1}{\epsilon}-\gamma_{E}+\log(4\pi)+O(\epsilon) \bigg{]} \tag{64}\] to the vacuum polarization tensor. The other UV divergent terms in Eq. (63) are not Lorentz covariant due to the vectors \(n^{\mu}\) and \(\bar{n}^{\mu}\), and are non-local. Despite these unusual features, QCD in the light-cone gauge with the ML prescription was shown to be renormalizable in Ref. [62], with a small number of renormalization constant to adjust. Moreover, it was shown in Ref. [62] that only the standard counterterms can contribute to physical observables. We will check in the following that the non-standard UV divergent terms in Eq. (63) drops at the gluon distribution level, so that the corresponding counterterms would drop as well. ### Virtual corrections at the parton distribution level It remains now to insert the one-loop result (58) for the vacuum polarization tensor into the expression (25) for the virtual correction to the gluon distribution. Noting that, in the target light-cone gauge, \[n_{\nu}\tilde{G}_{0,F}^{\nu i}(q)= \,0 \tag{65}\] \[q_{\nu}\tilde{G}_{0,F}^{\nu i}(q)= \frac{i}{[n\!\cdot\!k]}\;n^{i}=0\] (66) \[n_{\rho}\,\mathcal{F}_{a}^{-\rho}= \,\mathcal{F}_{a}^{--}=0\,, \tag{67}\] and that \(\partial_{\rho}\,\mathcal{F}_{a}^{-\rho}\) vanishes up to higher order terms in the coupling \(g\), due to the equation of motion of the background field, one finds \[G(x,\mu^{2})\bigg{|}_{\text{virtual;\,ampl.}}^{\text{NLO}} = \int\frac{d^{D}q}{(2\pi)^{D}}\,\frac{2\pi\delta(q^{-}\!+\!xP^{-})} {(2\pi)xP^{-}}\,\frac{i}{(q^{2}\!+\!i\epsilon)}\,\left[-g^{\nu i}+\frac{n^{\nu }\mathbf{q}^{i}}{[n\!\cdot\!q]}\right]\,\int d^{D}\Delta y\,e^{iq\cdot\Delta y} \,\langle P|\mathcal{F}_{a}^{-i}(0)\mathcal{F}_{a}^{-\rho}(-\Delta y)|P\rangle \tag{68}\] \[\times\;\,i\,g^{2}\,C_{A}\bigg{\{}-q^{2}g_{\rho\nu}\Big{[}\frac{1 1}{3}+\frac{1}{9}\epsilon+O(\epsilon^{2})\Big{]}\mathcal{B}_{0}(q)+4(n\!\cdot \!q)q^{2}g_{\rho\nu}\mathcal{I}_{0}(q)\bigg{\}}\] \[= g^{2}\,C_{A}\int\frac{d^{D}q}{(2\pi)^{D}}\,\frac{2\pi\delta(q^{- }\!+\!xP^{-})}{(2\pi)xP^{-}}\;\,\int d^{D}\Delta y\,e^{iq\cdot\Delta y}\, \langle P|\mathcal{F}_{a}^{-i}(0)\mathcal{F}_{a}^{-i}(-\Delta y)|P\rangle\] \[\times\;\bigg{\{}-\Big{[}\frac{11}{3}+\frac{1}{9}\epsilon+O( \epsilon^{2})\Big{]}\mathcal{B}_{0}(q)+4(n\!\cdot\!q)\mathcal{I}_{0}(q) \bigg{\}}\] On the one hand, the \(1/\epsilon\) UV pole is subtracted by including the contribution of the \(\overline{MS}\) counterterm (64). On the other hand, one can extract the \(\mu\) dependence induced by the UV divergence. 
From the expression (55) of \(\mathcal{B}_{0}(q)\) and the fact that \(\mathcal{I}_{0}(q)\) is UV finite, one gets \[\mu^{2}\partial_{\mu^{2}}\mathcal{B}_{0}(q)= \,-\,\frac{1}{(4\pi)^{2}}\,+O(\epsilon) \tag{69}\] \[\mu^{2}\partial_{\mu^{2}}\mathcal{I}_{0}(q)= \,O(\epsilon)\,, \tag{70}\] and thus \[\mu^{2}\partial_{\mu^{2}}G(x,\mu^{2})\bigg{|}_{\text{virtual;\,ampl.}}^{ \text{NLO}} = \frac{11}{3}\,\frac{g^{2}\,C_{A}}{(4\pi)^{2}}\int d^{D}\Delta y\, \langle P|\mathcal{F}_{a}^{-i}(0)\mathcal{F}_{a}^{-i}(-\Delta y)|P\rangle\int \frac{d^{D}q}{(2\pi)^{D}}\,\frac{2\pi\delta(q^{-}\!+\!xP^{-})}{(2\pi)xP^{-}} \,\,e^{iq\cdot\Delta y}\,+O(\epsilon) \tag{71}\] \[= \frac{11}{3}\,\frac{\alpha_{s}\,C_{A}}{4\pi}\int d\Delta y^{+}\, \frac{e^{-ixP^{-}\Delta y^{+}}}{(2\pi)xP^{-}}\,\langle P|\mathcal{F}_{a}^{-i} (\Delta y^{+},0:0)\mathcal{F}_{a}^{-i}(0)|P\rangle\;+O(\epsilon)\] \[= \frac{11}{12}\,\frac{\alpha_{s}\,C_{A}}{\pi}\,\,G^{(0)}(x)+O( \epsilon)\] \[= \frac{11}{12}\,\frac{\alpha_{s}\,C_{A}}{\pi}\,\,G(x,\mu^{2})+O( \epsilon)+O(\alpha_{s}^{2})\,,\] using the relation (22) to trade the bare gluon distribution in favor of the full one, up to higher order contributions in \(\alpha_{s}\). The second diagram on Fig. (1), with the gluon vacuum polarization tensor inserted on the complex conjugate amplitude side would give the same contribution to the DGLAP equation as Eq. (71). All in all, including the contributions the three diagrams from Fig. (1), one finds from Eqs. (45) and (71) \[\mu^{2}\partial_{\mu^{2}}G(\text{x},\mu^{2})=\frac{\alpha_{s}\,C_{A}}{\pi}\, \int_{\text{x}}^{1}\frac{dz}{z}\bigg{[}\frac{z}{(1-z)_{+}}+\frac{(1-z)}{z}+z(1- z)+\frac{11}{6}\,\delta(1-z)\bigg{]}G\left(\frac{\text{x}}{z},\mu^{2}\right)+O( \alpha_{s}^{2})\,, \tag{72}\] which is indeed the well-known LO DGLAP for the gluon distribution in pure Yang-Mills theory. Inclusion of quark effects could be done in the same way. In particular, the gluon Wightman propagator (8) would contribute to the real correction in the quark-to-quark channel, and provide the appropriate \(+\) prescription regulating the would-be \(z\to 1\) divergence. ## 5 Conclusion We have derived the DGLAP evolution equation for the gluon distribution function using the background field methods of the Color Glass Condensate formalism. Starting with the operator definition of the gluon distribution function in "target" light cone gauge (\(A^{-}=0\) for a left moving proton) in the presence of a background field the DGLAP evolution is cast as the standard UV renormalization of a composite operator constructed from quantum fields, decomposed into background field and fluctuations around it. One loop corrections generate UV divergences for the transverse components of this operator and are regulated via standard techniques using conventional dimensional regularization in \(\overline{MS}\) scheme. While the results presented here are well-known, there are several aspects of this calculation which will be very useful in the pursuit of a unified framework for QCD evolution. With the method developed in this paper we make the connections between the collinear factorization and the CGC effective theory more clear. Having understood how the scale evolution of gluon distribution function arise in the background field formalism in the target light cone gauge, we will repeat the calculation in the "projectile" light cone gauge (\(A^{+}=0\) for a left moving proton) where CGC formalism is most conveniently applied. 
This will require fully taking into account the gauge link between the field strength tensors in the UV renormalization of the composite operator in the background field formalism. Moreover, this setup and the methods developed here can also be adapted to the case of transverse momentum distribution functions (TMDs). We are planning to study the evolution of the TMDs in the presence of the background fields to further investigate the relation between the CGC formalism and the TMD framework. ## Acknowledgements TA is supported in part by the National Science Centre (Poland) under the research grant no. 2018/31/D/ST2/00666 (SONATA 14). GB is supported in part by the National Science Centre (Poland) under the research grant no. 2020/38/E/ST2/00122 (SONATA BIS 10). J.J-M is supported by the DOE Office of Nuclear Physics through Grant No. DE-SC0002307.
2307.11600
Optical pumping enhancement of a free-induction-decay magnetometer
Spin preparation prior to a free-induction-decay (FID) measurement can be adversely affected by transverse bias fields, particularly in the geophysical field range. A strategy that enhances the spin polarization accumulated before readout is demonstrated, by synchronizing optical pumping with a magnetic field pulse that supersedes any transverse fields by over two order of magnitude. The pulsed magnetic field is generated along the optical pumping axis using a compact electromagnetic coil pair encompassing a micro-electromechanical systems (MEMS) vapor cell. The coils also resistively heat the cesium (Cs) vapor to the optimal atomic density without spurious magnetic field contributions as they are rapidly demagnetized to approximately zero field during spin readout. The demagnetization process is analyzed electronically, and directly with a FID measurement, to confirm that the residual magnetic field is minimal during detection. The sensitivity performance of this technique is compared to existing optical pumping modalities across a wide magnetic field range. A noise floor sensitivity of $238\,\mathrm{fT/\surd{Hz}}$ was achieved in a field of approximately $\mathrm{50\,\mu{T}}$, in close agreement with the Cram\'{e}r-Rao lower bound (CRLB) predicted noise density of $258\,\mathrm{fT/\surd{Hz}}$.
Dominic Hunter, Marcin S. Mrozowski, Allan McWilliam, Stuart J. Ingleby, Terry E. Dyer, Paul F. Griffin, Erling Riis
2023-07-21T14:10:58Z
http://arxiv.org/abs/2307.11600v2
# Optical pumping enhancement of a free-induction-decay magnetometer ###### Abstract Spin preparation prior to a free-induction-decay (FID) measurement can be adversely affected by transverse bias fields, particularly in the geophysical field range. A strategy that enhances the spin polarization accumulated before readout is demonstrated, by synchronizing optical pumping with a magnetic field pulse that supersedes any transverse fields by over two order of magnitude. The pulsed magnetic field is generated along the optical pumping axis using a compact electromagnetic coil pair encompassing a micro-electromechanical systems (MEMS) vapor cell. The coils also resistively heat the cesium (Cs) vapor to the optimal atomic density without spurious magnetic field contributions as they are rapidly demagnetized to approximately zero field during spin readout. The demagnetization process is analyzed electronically, and directly with a FID measurement, to confirm that the residual magnetic field is minimal during detection. The sensitivity performance of this technique is compared to existing optical pumping modalities across a wide magnetic field range. A noise floor sensitivity of \(238\,\mathrm{f}\mathrm{T}\mathrm{/}\sqrt{\mathrm{Hz}}\) was achieved in a field of approximately \(50\,\mathrm{\mu T}\), in close agreement with the Cramer-Rao lower bound (CRLB) predicted noise density of \(258\,\mathrm{f}\mathrm{T}\mathrm{/}\sqrt{\mathrm{Hz}}\). pacs: 03.65.-b, 03.65.-b, 03.65.Lx, 03.65.-b, 03.65.Lx ## 1 Introduction Extensive efforts have been devoted toward developing optically pumped magnetometers (OPMs) that operate at zero field [1, 2, 3], exploiting the well-established spin-exchange relaxation-free (SERF) mechanism [4, 5, 6]. Such devices can achieve exceptionally high sensitivity [7], and are therefore well-suited to applications demanding the capability to resolve fT-level signals, e.g., magnetoencephalography (MEG) [8]. Sensors operating in the SERF regime are already commercially available at sensitivities below \(10\,\mathrm{f}\mathrm{T}\mathrm{/}\sqrt{\mathrm{Hz}}\) with a bandwidth of \(135\,\mathrm{Hz}\)[9]. Accordingly, they offer an attractive alternative to superconducting quantum interference devices (SQUIDs), particularly in MEG applications, as these compact and flexible devices can be easily integrated into custom mounting hardware [10]. However, the narrow magnetic resonances essential for high sensitivity operation in SERF devices also imposes limitations on both sensor bandwidth and dynamic range. This restricts the implementation of these sensors to low-field environments that are often conditioned using both passive and active field compensation techniques [11]. The exceptional sensitivity achievable with SERF operation is attributable to suppression of spin-exchange collisions that occur between alkali atoms when operating close to zero field with dense atomic ensembles. Several studies have been conducted that extend the utility of spin-exchange suppression, enabling \(\mathrm{f}\mathrm{T}\mathrm{/}\sqrt{\mathrm{Hz}}\) sensitivity operation in bias field's of several \(\mathrm{\mu T}\)[12, 13, 14]. Such devices provide a framework for unshielded sensing that could become a valuable resource in many research areas including geophysical [15], space science [16], GPS-denied navigation [17], and biomedical applications [18]. These sensors rely on the light-narrowing phenomenon [19, 20], which requires close to unity spin polarizations to be generated. 
Therefore, maintaining sufficient optical pumping efficiency is crucial in both maximizing signal-to-noise ratio (SNR) and lowering spin-exchange contributions to the transverse relaxation rate, \(\tau_{2}\). This study employs an OPM based on the free-induction-decay (FID) measurement protocol [21, 22, 23]. This modality exhibits a wide and tunable bandwidth given the flexibility in digital signal processing (DSP) that can be used to analyse the FID data [23, 24]. A notable benefit of FID-based sensors is their accuracy, as optical pumping and detection are separated temporally. Consequently, this allows the polarized spins to precess freely at the Larmor frequency without being perturbed by intense pumping light, thereby considerably lowering light-shift systematics compared to continuous-wave (cw) pumping schemes [25]. Moreover, these sensors are robust as they are commonly operated in a free-running mode and enable direct Larmor frequency extraction. Additionally, spin-exchange suppression can be exploited to extend the sensor dynamic range to bias fields exceeding the Earth's field through efficient spin preparation. The presence of strong magnetic fields can hinder the optical pumping dynamics as the atoms experience a torque that deflects the generated spin polarization from the beam propagation (optical pumping) axis. This issue can be circumvented by, for example, resonantly driving the atoms at the Larmor frequency by modulating either the amplitude [21] or frequency [26] of the pump light. Additionally, one can null the external magnetic field contributions during the spin preparation stage [27]. Although effective, both these techniques require prerequisite knowledge regarding the field of interest, and feedback to maintain optimal conditions. In this work, we exploit a simple and practical approach, hereby referred to as enhanced optical pumping (EOP), that facilitates efficient spin polarization buildup throughout a range of bias field conditions. This is achieved by applying a strong field, \(\vec{B}_{p}\), of several mT along the quantization axis during spin preparation to negate the detrimental effects of transverse field components. Additionally, the resistive heating produced by the coils generating \(\vec{B}_{p}\) is utilized to elevate the vapor cell temperature and reach an optimal atomic density. This provides an effective way of exploiting the FID sensor dead-time whilst simultaneously enhancing the optical pumping efficiency. The approximate optimum atomic density occurs when the ratio of \(\gamma_{2}\) and SNR is maximized [28]. However, delivering heating power to the vapor cell can often adversely affect the instrument's performance. For instance, the alkali atoms are most commonly heated by passing current through a resistive element in contact with the cell. Current noise flowing through the heating element is converted to magnetic fluctuations that can lift the sensor noise floor. Furthermore, magnetic resonance broadening and additional systematics can be induced by subsequent stray fields. Oscillating currents are often used at frequencies far exceeding the atomic bandwidth to alleviate these issues [29]. However, heating at MHz frequencies far beyond the Larmor precession rate is often necessary when applied to total-field OPMs [25], which is typically inefficient. 
There have been non-electrical heating methods adopted in the past such as optical heating [30] and hot air systems [31], although these techniques require a great deal of power and are often restricted to a laboratory setting. The heating strategy proposed in this work ensures no current is flowing through the heaters during spin readout, hence is immune to systematics and magnetic noise contributions that often contaminate other OPM technologies. This is made possible by demagnetization electronics that enable the current through the heating coil to be switched from 1.4 A to within 10 % of the MOSFET leakage current (\(<50\) pA) over a period of approximately 2500 ns. ## 2 Experimental Methodology ### FID magnetometer setup A simplified schematic of the experimental arrangement is illustrated in Fig. 1(a), showing a two-beam OPM operating in the FID regime. The sensor head consists of a 3 mm thick microelectromechanical systems (MEMS) Cs vapor cell with nitrogen (N\({}_{2}\)) buffer gas [32]. As performed in previous work [26], the collisional broadening and shift in the optical spectrum measured against a Cs reference cell determined the internal buffer gas pressure to be approximately 220 torr. Optical pumping and detection are performed using two co-propagating probe and pump beams tuned to the Cs \(D_{1}\) and \(D_{2}\) transitions, respectively. Both beam widths were set to a diameter (\(1/e^{2}\)) of 3.1 mm. While the 895 nm probe light remains linearly polarized, the 852 nm pump beam becomes circularly polarized after passing through a dual-wavelength multi-order waveplate. This enables optical pumping and detection to be optimized without sacrificing sensitivity, as opposed to launching the beams at a slight angle relative to one another [12]. A polarizing beamsplitter is used to combine both beams prior to traversing the waveplate and illuminating the vapor cell. Overlap between both beams within the interaction region is optimized based on the maximum FID signal amplitude. A high power (\(\leq 600\) mW) single-frequency laser is used to optically pump the alkali spins into a highly polarized state. The pump light is resonant with the \(F=3\longrightarrow F^{\prime}\) hyperfine transition of the Cs MEMS cell. The transition is collisionally broadened to a full width at half maximum (FWHM) linewidth of 3.7 GHz and shifted by \(-1.6\) GHz due to the N\({}_{2}\) buffer gas. Pumping on this transition provides efficient recycling of atoms from the \(F=3\) ground state so they can subsequently contribute to the signal [33]. Light narrowing is exploited to partially suppress spin exchange as most of the atomic population is transferred to the \(F=4\) ground state when pumped optimally, and thus cannot exchange spin due to angular momentum conservation [20]. The probe laser produces a cw beam that is 20 GHz blue-detuned from the \(F=4\to F^{\prime}=3\) Cs transition. This mitigates broadening of the magnetic resonance during detection, by reducing residual optical pumping whilst maintaining an appreciable light-atom interaction strength to maximize signal amplitude. The Glan-Thompson polarizer purifies the probe beam polarization and converts polarization noise, e.g., arising from the fiber, into amplitude noise. A non-polarizing beamsplitter separates the light equally between a monitor photodiode and the vapor cell. 
The probe power (\(\approx 450\) \(\mu\)W) prior to the vapor cell was actively stabilized, to within 0.4 %, using an analog PID controller (SRS SIM960) that adjusts the RF power supplied to an acousto-optic modulator (AOM). The measurement bias field, \(\vec{B}_{m}\), strength and direction can be controlled by applying currents through a set of three-axis coils that encapsulate the cell. The experiments performed here used only a single transverse axis coil to produce a field with magnitude, \(B_{y}\), along the \(y\)-axis. This was driven by a custom current source with a \(\pm 75\) mA range and a noise level considerably below the noise floor of the sensor [34]. Accordingly, the sensor's dynamic range can be evaluated up to \(B_{y}\approx 50\) \(\mu\)T using this coil. The whole assembly is placed inside a three-layer \(\mu\)-metal shield that attenuates environmental magnetic variations, and ambient fields down to nT levels. ### Optical pumping schemes Figure 1(b) shows two optical modulation schemes that can be employed during the spin preparation period, \(T_{p}\), dedicated to optical pumping [26]. The first method uses single-pulse (SP) modulation where the maximum available light intensity interacts with the atoms throughout the pumping phase, before switching to approximately zero during the detection period, One can also optically pump the atoms by resonantly modulating the light intensity at the Larmor frequency \(\omega_{L}=\gamma|\vec{B}|\), where \(|\vec{B}|\) is the magnetic field magnitude and \(\gamma\) is the gyromagnetic factor dependent on the atomic species, i.e., \(\sim 3.5\,\mathrm{Hz/nT}\) for Cs. This technique is known as synchronous modulation [21]. The peak optical power is \(\sim 65\,\mathrm{mW}\) after the pump light has traversed beam-conditioning optics, an AOM, and a fiber-coupling stage. The AOM's extinction ratio ensures that \(<10\,\mu\mathrm{W}\) of pump light interacts with the atoms during the off state. A spectral bandpass filter is placed in front of the balanced photodetector to attenuate \(852\,\mathrm{nm}\) light whilst allowing \(895\,\mathrm{nm}\) light to pass with \(>90\,\%\) transmission, thus avoiding saturation and lowering optical noise contributed by the pump light. This work demonstrates an alternative optical pumping strategy, EOP, which is exploited by mounting the vapor cell between two compact printed circuit boards (PCBs) that have a square central aperture for optical access, as seen in Fig. 1(a). The PCB assembly serves two purposes: to generate a strong field, \(\vec{B}_{p}\), along the beam propagation axis; and maintaining an optimal atomic density within the vapor cell through resistive heating. The technique is illustrated in Fig. 1(b) with \(|\vec{B}_{p}|\) set to several mT and applied along the optical pumping (z-axis), synchronized with the optical pulse over the pump period. Subsequently, the PCB coils producing \(\vec{B}_{p}\) are demagnetized by rapidly lowering the current flow to zero such that only \(\vec{B}_{m}\) persists during spin readout. Details regarding the demagnetization circuitry are discussed in Section C. As the strength of \(\vec{B}_{p}\) supersedes \(\vec{B}_{m}\) by at least two orders of magnitude, the macroscopic spin magnetization is pinned to the \(z\)-axis. This negates the adverse impact of any transverse field components, including \(\vec{B}_{m}\), during spin preparation. The copper tracks on the PCB are printed with a square spiral pattern on both sides of the two-layer PCB. 
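For orientation, the Larmor frequency that the synchronous modulation has to track follows directly from \(\omega_{L}=\gamma|\vec{B}|\) with \(\gamma\sim 3.5\,\mathrm{Hz/nT}\) for Cs; the short Python sketch below evaluates it for a few illustrative bias fields (the field values are examples, not measured settings).

```python
# Larmor frequency f_L = gamma * |B| for Cs, with gamma ~ 3.5 Hz/nT as quoted above;
# this is the rate at which the pump intensity is modulated in the synchronous scheme.
GAMMA_CS_HZ_PER_NT = 3.5

for b_nT in (5.0e3, 10.0e3, 50.0e3):  # 5 uT, 10 uT, 50 uT (illustrative bias fields)
    f_L = GAMMA_CS_HZ_PER_NT * b_nT
    print(f"|B| = {b_nT/1e3:5.1f} uT  ->  f_L = {f_L/1e3:6.1f} kHz")
```

At the geophysical-range field of roughly \(50\,\mu\mathrm{T}\) this corresponds to a modulation rate of about \(175\,\mathrm{kHz}\).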
A via is used to electrically connect both layers. The compact bi-planar stack can thus generate stronger magnetic fields compared to a single layer PCB, with the field-to-current ratio theoretically modelled to be \(2.7\,\mu\mathrm{T/mA}\) at the center of the vapor cell. Additionally, using multiple layers enables more coil turns in a smaller footprint for increased heating efficiency. The vapor cell temperature can be controlled by either adjusting the duty cycle or peak current of the applied current pulse flowing through the PCB coils. The duty cycle also impacts the time dedicated to optically pumping the atoms using EOP as the optical and magnetic pulses are synchronized. The spin polarization was found to be close to saturation after pumping for \(T_{p}\approx 88\,\mu\mathrm{s}\). This is equivalent to a \(8.8\,\%\) duty cycle for the pump-probe cycle repetition rate, \(f_{d}=1\,\mathrm{kHz}\), which was kept consistent in the experiments performed here. A peak current of \(1.4\,\mathrm{A}\) heats the vapor cell to a temperature of \(88\,^{\circ}\mathrm{C}\) providing the optimal atomic density. \(T_{p}\) was kept consistent when employing EOP or SP modulation to provide a valid comparison between both techniques. Synchronously pumping the atoms required longer to reach a steady-state spin polarization compared to EOP, with \(T_{p}\) set to around \(286\,\mu\mathrm{s}\). This can be attributed to the light intensity being low for most of the pump phase as the optimal duty cycle of the square-wave modulation was approximately \(30\,\%\). As a result, \(T_{r}\) was shorter for synchronous operation since \(f_{d}\) was kept constant in each case. Additionally, gated heating at \(0.5\,\mathrm{Hz}\) was applied to the vapor cell when utilizing synchronous or SP optical pumping, with measurements conducted when no current was flowing through the PCB. This was to ensure that magnetic noise from the heater was not contributing to the noise floor of the sensor using these techniques. ### Demagnetization electronics Current flowing through the coils cannot appear or disappear instantaneously due to their inductive nature. The coil acts as an open circuit after being magnetized and subsequently switched-off. This sudden change in current causes back EMF to be generated with opposing polarity to the supply voltage until the current exponentially decays to zero. During this process, the induced voltage is high (\(10\,\mathrm{s}\) of kV) which can easily damage switching electronics. A diode can be added in parallel to the coil to clamp this back EMF to the forward bias voltage of the diode, and allow for safe demagnetization. However, this Figure 1: (a) Simplified schematic of a FID OPM utilizing a two-color pump-probe configuration: GT, Glan-Thompson polarizer; NPBS, non-polarizing beamsplitter; PD, photodiode; M, mirror; PBS, polarizing beamsplitter; DWP, dual-wavelength waveplate; HWP, half-wave plate; BFF, bandpass filter; WP, Wollaston prism; BPD, balanced photodetector; PCB, printed circuit board. (b) Depiction of various optical pumping techniques that can be employed. In the single-pulse scheme, the atoms are continuously pumped at peak optical power throughout the period, \(T_{p}\). Synchronous pumping modulates the light resonantly at the Larmor frequency. The EOP scheme applies an additional longitudinal field, \(\vec{B}_{p}\), of approximately \(3.7\,\mathrm{mT}\) along the \(z\)-axis, synchronized with the optical pulse. 
The vapor cell is positioned between two PCB coils to generate \(\vec{B}_{p}\), which are also used for resistive heating. Pump light interacting with the atoms is extinguished to \(<10\,\mu\mathrm{W}\) during the readout period, \(T_{r}\). The PCB coils are demagnetized to within \(10\,\%\) of the field produced by the MOSFET leakage current (\(|\vec{B}_{p}|\sim 135\,\mathrm{fT}\)) at \(2.5\,\mu\mathrm{s}\).

However, this approach is relatively slow as the forward bias voltage limits the maximum energy at which the coil can demagnetize. This process can be sped up drastically using a Zener diode by exploiting its avalanche breakdown mechanism to rapidly demagnetize the coil [35]. This clamps the back EMF to the avalanche breakdown voltage of the Zener, such that more energy is dissipated at a faster rate, leading to more rapid demagnetization. The circuit designed to rapidly magnetize and demagnetize the PCB coils is presented in Fig. 2. A P-channel MOSFET (\(Q_{1}\)) is used as a switch to control the current flow through the coil. \(Q_{1}\) is controlled with a function generator through a gate driver circuit that allows for a fast switching rate of the MOSFET, in addition to acting as a buffer. The demagnetization is managed by diodes \(D_{1}\) and \(D_{2}\). The transient-voltage-suppression (TVS) diode, \(D_{1}\), works similarly to a Zener diode although it enables more energy dissipation due to its larger-area p-n junctions [36]. The TVS breakdown voltage was selected to be close to the maximum permitted drain voltage of \(Q_{1}\), shortening the demagnetization time while preventing damage to \(Q_{1}\). The fast recovery rectifying diode, \(D_{2}\), has the sole purpose of preventing \(D_{1}\) from conducting during the magnetization process. \(R_{1}\) is a \(100\,\Omega\) low-inductance wirewound resistor (Vishay WSN) connected in parallel with the coil, and serves as the final demagnetization device after the induced voltage falls below the \(D_{2}\) forward voltage. \(R_{1}\) also serves as a damping element for the interwinding capacitance \(C_{c}\) of the coil. At the end of the demagnetization process, the current flowing through the coil is equivalent to the transistor drain leakage. The circuit is powered from a triple-output power supply. The strength of \(\vec{B}_{p}\) and the heating power are controlled by adjusting the supply voltage \(V_{S}\) using the \(2.5\,\mathrm{A}\) output. Two tests were conducted to analyse the demagnetization circuit's performance, covering both the transient and steady-state responses. These measurements have to be performed quickly (hundreds of ns) with very high dynamic range (A to pA), and hence are difficult to perform simultaneously. Accordingly, the transient response was measured separately using an oscilloscope (Micsig STO1152c), offering the necessary sample rate and current resolution (\(\approx 4\,\mathrm{mA}\)) to observe the avalanche breakdown clamping mechanism and the bulk of the overall response. Secondly, a precision source meter unit (SMU) evaluated the steady-state response, i.e., the leakage current through the drain, \(I_{DS}\), of \(Q_{1}\). For the transient response test, a \(250\,\mathrm{m\Omega}\) shunt resistor was inserted in series with the coil to monitor its current. The oscilloscope was connected to the shunt with a short (\(15\,\mathrm{cm}\)) coaxial cable to limit its capacitance. A \(1\,\mathrm{kHz}\) pulse with a duty cycle of \(10\,\%\) was used to test the demagnetization.
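The benefit of clamping at a higher voltage can be illustrated with a simple back-of-the-envelope estimate: an inductor carrying a current \(I_{0}\) and discharging against a roughly constant clamp voltage \(V_{\mathrm{clamp}}\) ramps down in approximately \(t\approx L\,I_{0}/V_{\mathrm{clamp}}\). The short sketch below compares a plain flyback diode with an avalanche clamp; the inductance and clamp voltages are illustrative assumptions, not the measured values of the circuit in Fig. 2.

```python
# Back-of-the-envelope demagnetization time of an inductor against a constant
# clamp voltage: t ~ L*I0/V_clamp (linear current ramp while the clamp conducts).
# The component values below are illustrative assumptions, not measured values.

L_coil = 10e-6       # coil-pair inductance [H] (assumed order of magnitude)
I0 = 1.4             # peak coil current [A]
V_flyback = 0.7      # forward voltage of a plain flyback diode [V] (assumed)
V_avalanche = 30.0   # avalanche/TVS clamp voltage [V] (assumed)

def demag_time(L, I0, V_clamp):
    """Time for the coil current to ramp to zero against a constant clamp voltage."""
    return L * I0 / V_clamp

print(f"plain flyback diode  : {demag_time(L_coil, I0, V_flyback)*1e6:5.1f} us")
print(f"avalanche (TVS) clamp: {demag_time(L_coil, I0, V_avalanche)*1e9:5.0f} ns")
```

With these assumed numbers the avalanche clamp is faster by well over an order of magnitude, which is the effect exploited by \(D_{1}\) in the measurements that follow.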
The measured transient response was then compared to simulated data using similar components, as exact macro-models were not available. \(V_{S}\) was set to \(5.8\,\mathrm{V}\) and the gate driver voltage was set to \(12\,\mathrm{V}\), to mimic typical experimental operating conditions. It can be seen in Fig. 3 that the \(90\,\%\) to \(10\,\%\) experimental (\(t_{f}=215\,\mathrm{ns}\)) and simulated (\(t_{f}=203\,\mathrm{ns}\)) fall times are in close agreement. The slight discrepancy most likely arises from the inability to match the exact macro-models, the high variation in the breakdown voltage of the TVS diode, and other parasitics present. Next, the leakage current flowing through the transistor's source and drain, \(I_{DS}\), was estimated during switch-off using a high-precision SMU due to the oscilloscope's limited vertical resolution. Only \(Q_{1}\) was tested in this instance. The SMU was configured to source the voltage \(V_{S}\) of \(5.8\,\mathrm{V}\) through \(Q_{1}\). The gate of \(Q_{1}\) was driven by a set of two \(9\,\mathrm{V}\) PP3 batteries connected in series, and then through a potentiometer to set the gate voltage to \(12\,\mathrm{V}\). \(Q_{1}\) was connected to the SMU through a triaxial cable (Balden 9922), to limit any leakage current from the cable assembly or instrument itself [37]. The leakage current of \(Q_{1}\) was found to be lower than \(50\,\mathrm{pA}\), which translates to approximately \(135\,\mathrm{fT}\) based on the theoretically predicted field-to-current ratio of the coils. The response settles to within \(10\,\%\) of this steady-state value after approximately \(2500\,\mathrm{ns}\) from the moment \(Q_{1}\) is switched off.

## 3 Results

### FID signal analysis

During \(T_{r}\), the polarized spins undergo Larmor precession in the presence of \(\vec{B}_{m}\). Consequently, the alkali vapor experiences a modulated birefringence, detectable through optical rotation in the linearly polarized probe beam that is monitored by a balanced polarimeter. The optical rotation angle is proportional to the projection of spin polarization along the \(x\)-axis given by, \[M_{x}(t)=M_{0}\sin(\omega_{L}t+\phi_{0})\,e^{-\gamma_{2}t}, \tag{1}\] where \(M_{0}\) is the spin polarisation generated through optical pumping and \(\phi_{0}\) is the initial phase. It can be seen that the signal exhibits a damped sinusoidal decay, where \(\omega_{L}\) corresponds to the precession experienced by the Cs atoms, and \(\gamma_{2}\) is dependent on the intrinsic properties of the vapor cell and operational systematics.

Figure 3: Transient response of the demagnetization circuit derived (a) experimentally and (b) theoretically. The oscillation in the experimental data is believed to be an artefact of the oscilloscope's input capacitance. The predicted response settles to within \(10\,\%\) of the final leakage current of the transistor, \(I_{DS}\), after approximately \(2500\,\mathrm{ns}\) from the moment \(Q_{1}\) begins to switch off.

Figure 2: Simplified schematic of the demagnetization circuitry used to produce \(\vec{B}_{p}\) for EOP, and to drive the resistive heating applied to the vapor cell. A pair of PCB coils (parasitics represented in light grey) were found to have an inductance \(L_{\mathrm{C}}\approx 10\,\mu\mathrm{H}\).
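As a minimal numerical illustration of Eq. (1), the sketch below synthesizes a noisy damped sinusoid and recovers the Larmor frequency and decay rate with a least-squares fit, mirroring the kind of post-acquisition fitting used later to extract \(A\) and \(\gamma_{2}\); all parameter values are placeholders rather than the experimental ones.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic FID trace following Eq. (1): a damped sinusoid plus white noise.
# All numbers below are placeholder values chosen for illustration only.
fs = 2.5e6                                   # sample rate [Hz]
t = np.arange(0, 0.8e-3, 1/fs)               # readout window [s]
f_L, gamma2, A0, phi0 = 175e3, 1.2e3, 7.0, 0.3
rng = np.random.default_rng(1)
trace = A0*np.sin(2*np.pi*f_L*t + phi0)*np.exp(-gamma2*t) + rng.normal(0, 0.05, t.size)

def fid(t, A, f, phi, g2):
    """Damped sinusoid of Eq. (1), frequency in Hz and decay rate in 1/s."""
    return A*np.sin(2*np.pi*f*t + phi)*np.exp(-g2*t)

# In practice the initial frequency guess would come from an FFT peak.
popt, _ = curve_fit(fid, t, trace, p0=[6.0, 175e3, 0.0, 1.0e3])
A_fit, f_fit, phi_fit, g2_fit = popt
print(f"Larmor frequency: {f_fit/1e3:.2f} kHz -> B = {f_fit/3.5/1e3:.2f} uT (Cs, ~3.5 Hz/nT)")
print(f"decay rate gamma2: {g2_fit:.0f} 1/s")
```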
The photodetector signal is digitized by a data acquisition (DAQ) system based on a Picoscope (model 5444D) operating with 15-bit voltage resolution at a sampling rate of \(125\,\mathrm{MHz}\). The discretized signal can be modelled as, \[S_{n}=A\sin\!\left(\omega_{L}\,n\,\Delta t+\phi_{0}\right)e^{-\gamma_{2}\,n\,\Delta t}+\epsilon_{n}, \tag{2}\] where \(A\) is the FID amplitude, \(n\) is the sample index, \(\Delta t\) is the time interval between adjacent samples, and \(\epsilon_{n}\) is the signal noise. Figure 4(a) depicts examples of FID traces captured by the polarimeter during readout. Each signal was obtained using the optical pumping strategies described previously in a field \(B_{y}\approx 50\,\mu\)T. The signal is automatically downsampled by the DAQ device, which averages 50 successive points, yielding a final sampling rate of \(1/\Delta t=2.5\,\)MHz. One can generate a FID signal train by optically pumping the atoms and measuring the subsequent decay of spin polarization over multiple cycles, as seen in the inset of Fig. 4(a). This method was used throughout these experiments to generate a magnetic field time series at a sampling rate, \(f_{d}=1\,\)kHz, resulting in a Nyquist-limited bandwidth of 500\(\,\)Hz [22]. The magnetometer noise budget was assessed by computing the ASD, as shown in Fig. 4(b) for each optical pumping technique. The ASDs were formulated using Welch's method [38] by averaging the discrete Fourier transforms (DFTs) calculated from 20 subsequent FID traces over the period, \(T_{r}\). A Hanning window was utilized to provide a more accurate determination of the baseline noise levels. Noise spectra were also collected with no pump light applied to the atoms and in the absence of probe light, to localize the dominant noise contributions in the system. These spectra were generated using 20 separate time domain traces as before, although over the full 1 ms window set by \(f_{d}\) as there is no optical pumping in these cases. Figure 4(b) shows that there is a close match in the baseline noise levels obtained from the FID spectra and the noise spectrum observed with no optical pumping, i.e., only probe light is present. This is to be expected as the pump beam is mostly extinguished by the AOM during readout, and is further attenuated by the bandpass filter prior to reaching the detector. The spectral peak for the synchronous case is slightly wider due to the shorter measurement window, as more time is dedicated to optical pumping. The noise density, \(\rho_{A}\), was estimated to be \(4\,\mu\)V\(/\sqrt{\text{Hz}}\), calculated as the average noise density across a 2 kHz range centred at the Larmor frequency \(\omega_{L}\approx 2\pi\times 175\,\)kHz. This noise level dictates the achievable SNR and consequently limits the precision of the Larmor frequency measurement based on the CRLB condition (see Section B). The magnetometer is mostly limited by photon shot-noise of the probe light at this frequency, as the technical noise inherent to the detector is less pronounced. This can be attributed to the effective suppression of common-mode noise sources facilitated by balanced photodetection, e.g., laser intensity or frequency fluctuations. The photon shot-noise density can be estimated based on the detected optical power, \(P_{det}\), as, \[\rho_{sn}=G\,\sqrt{2\,e\,P_{det}\,\mathcal{R}}, \tag{3}\] where \(e\) is the electron charge, \(\mathcal{R}\) is the detector responsivity, and \(G\) is the amplifier transimpedance gain [14].
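To make Eq. (3) concrete, the following sketch evaluates the shot-noise density for assumed detector parameters and checks it against a Welch-averaged ASD of simulated white noise, analogous to the spectra in Fig. 4(b); the responsivity and transimpedance gain are assumptions chosen only to land near the quoted \(\mu\mathrm{V}/\sqrt{\mathrm{Hz}}\) scale, not calibrated instrument values.

```python
import numpy as np
from scipy.signal import welch

# Assumed (illustrative) detection parameters -- not the calibrated values.
e = 1.602e-19            # electron charge [C]
P_det = 280e-6           # detected optical power [W] (roughly 63 % of the probe power)
R = 0.5                  # photodiode responsivity [A/W] (assumed)
G = 5e5                  # transimpedance gain [V/A] (assumed)

rho_sn = G * np.sqrt(2 * e * P_det * R)      # Eq. (3), in V/sqrt(Hz)
print(f"analytic shot-noise density: {rho_sn*1e6:.1f} uV/sqrt(Hz)")

# White noise with that density, analysed the same way as the measured spectra.
fs = 2.5e6
rng = np.random.default_rng(0)
v = rng.normal(0.0, rho_sn*np.sqrt(fs/2), size=40*2048)
f, psd = welch(v, fs=fs, window="hann", nperseg=2048)
idx = np.argmin(np.abs(f - 175e3))
print(f"Welch ASD near 175 kHz: {np.sqrt(psd[idx])*1e6:.1f} uV/sqrt(Hz)")
```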
With approximately 63% of the probe light reaching the detector, \(\rho_{sn}\) was calculated to be \(3.7\,\mu\)V\(/\sqrt{\text{Hz}}\). This is consistent with the noise density, \(\rho_{A}\), denoted in Fig. 4(b) when added in quadrature with the intrinsic noise of the detection system, which was measured to be \(1.5\,\mu\)V\(/\sqrt{\text{Hz}}\) at 175\(\,\)kHz.

### Sensitivity estimation

The sensitivity performance of a FID-based magnetometer can be assessed using the CRLB, which is a measure of the minimum statistical uncertainty of determining an unbiased estimator from a signal [39]. Assuming \(\epsilon_{n}\) is distributed as white Gaussian noise, the CRLB standard deviation for extracting \(\omega_{L}\) from a FID trace can be calculated as [40], \[\sigma_{CR}\geq\frac{\sqrt{12\,C}}{\gamma\left(A/\rho_{A}\right)T_{r}^{3/2}}, \tag{4}\] where \(A\) is the FID amplitude, and \(\rho_{A}\) is the noise spectral density at the Larmor frequency. \(T_{r}\) is the readout duration, and \(N=T_{r}/\Delta t\) is the number of samples in the FID trace. \(C\) is a correction factor accounting for spin depolarization at a rate, \(\gamma_{2}\), and is given by [41], \[C=\frac{N^{3}}{12}\frac{(1-z^{2})^{3}(1-z^{2N})}{z^{2}(1-z^{2N})^{2}-N^{2}z^{2N}(1-z^{2})^{2}}, \tag{5}\] where \(z=e^{-\gamma_{2}T_{r}/N}\). The correction factor has a lower bound of unity for an undamped sinusoid. \(\sigma_{CR}\) can be converted to a noise density \(\rho_{CR}=\sigma_{CR}/\sqrt{f_{d}/2}\) to provide a sensitivity metric in units of \(\text{T}/\sqrt{\text{Hz}}\). This assumes the magnetometer is white-noise limited such that the noise density is flat across all frequencies within the magnetometer bandwidth.

Figure 4: (a) Polarimeter-detected FID traces acquired using different optical pumping strategies including EOP (blue), synchronous (orange), and SP (yellow). (b) Amplitude-spectral-density (ASD) curves of the polarimeter output. Noise spectra were collected for each optical pumping regime, with no pump light applied (purple), and without probe light (grey). \(\rho_{A}\) is the average noise density, when no pump light was present, across a 2\(\,\)kHz range centred at the Larmor frequency \(\omega_{L}\approx 2\pi\times 175\,\)kHz.

An alternative definition of sensitivity is based on the sensor noise floor, \(\rho_{B}\), which can be obtained by monitoring magnetic field fluctuations recorded by the OPM over a set time interval. A magnetic field time series can be produced by extracting \(\omega_{L}\) from consecutive FID traces in a signal train over multiple cycles, as illustrated in the inset of Fig. 4(a). Subsequently, the ASD in \(\mathrm{fT}/\sqrt{\mathrm{Hz}}\) can be generated by applying the DFT to the time series data, as performed in previous instances. Figure 5 shows a set of magnetic field sensitivity spectra gathered using this approach for each of the aforementioned optical pumping strategies in a bias field, \(B_{y}\approx 50\,\mathrm{\mu T}\). The ASD curves in Fig. 5 were computed in a similar manner to before by averaging 20 non-overlapping 1 s time segments using Welch's method. The technical noise peaks observed between the frequencies \(20-60\,\mathrm{Hz}\) are related to environmental magnetic noise penetrating the three-layer \(\mu\)-metal shield encapsulating the vapor cell. Additional noise peaks at \(50\,\mathrm{Hz}\), and associated harmonics, originate from the current supply driving the coil producing the bias field.
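The noise-floor metric \(\rho_{B}\) can likewise be illustrated in a few lines: a synthetic magnetic field time series sampled at \(f_{d}\) is converted to an ASD with Welch's method and the white-noise floor is averaged over a band free of technical peaks, as done for Fig. 5. The target noise level below is an arbitrary placeholder, not a measured value.

```python
import numpy as np
from scipy.signal import welch

fd = 1e3                        # one field sample per FID cycle [Hz]
rho_target = 250e-15            # assumed white noise floor [T/sqrt(Hz)]
rng = np.random.default_rng(2)

# White field noise with one-sided density rho_target has a per-sample
# standard deviation of rho_target * sqrt(fd/2).
B = rng.normal(0.0, rho_target*np.sqrt(fd/2), size=int(20*fd))

# Average twenty non-overlapping 1 s segments and take the square root of the PSD.
f, psd = welch(B, fs=fd, window="hann", nperseg=int(fd), noverlap=0)
asd = np.sqrt(psd)
floor = asd[(f > 70) & (f < 500)].mean()
print(f"recovered noise floor: {floor*1e15:.0f} fT/sqrt(Hz)")
```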
The noise floor, \(\rho_{B}\), and associated uncertainty were determined by calculating the average and standard deviation of the noise density over a \(70-500\,\mathrm{Hz}\) frequency range, ignoring technical noise peaks. The raw FID traces shown in Fig. 4(a) were processed post-acquisition by fitting the data to the model given in Eq. 1, providing the amplitude, \(A\), and decay rate, \(\gamma_{2}\), required to calculate \(\rho_{CR}\). The corresponding sensitivity estimations are listed in Table 1 for each optical pumping technique along with other relevant experimental parameters. The error in \(\rho_{CR}\) is mostly attributed to the uncertainty in the estimation of \(\rho_{A}\). Evidently, these sensitivity estimations are closely correlated with \(\rho_{B}\) for each optical pumping strategy. This clearly shows that external magnetic noise contributions, e.g., produced by stray currents in the heater, are not a limiting factor. Considering the white-noise assumption in determining the CRLB, this further validates that the magnetometer is predominantly limited by photon shot-noise.

### Dynamic range characterization

Dynamic range represents the span of magnetic fields within which a magnetometer can reliably operate, and plays a crucial role in finite-field sensing. Some OPMs, e.g., SERF systems, can only function close to zero field, requiring well-conditioned magnetic field environments. In contrast, the FID-based approach enables operation over a wide range of bias fields, as observed in Fig. 6. This conveys the sensor's performance across the range \(B_{y}\approx 4-50\,\mathrm{\mu T}\) and provides a comparison of each optical pumping method discussed in this work. Clearly, SP optical pumping performs relatively poorly compared to synchronous modulation and EOP. This is particularly evident in Fig. 6(a), which shows a significant degradation in SNR as \(B_{y}\) is raised. This is directly related to a reduction in \(A\), as \(\rho_{A}\) is consistent for each pumping scheme as seen in Fig. 4(b). As mentioned previously, this degradation arises from transverse fields, e.g., \(\vec{B}_{m}\), applying a torque on the spin polarization during optical pumping, and is well described by the Bloch equation formalism [21]. Figure 6(b) shows the additional impact this has on \(\gamma_{2}\), which experiences a sharp rise as \(B_{y}\) is raised. This is to be expected as the effectiveness of spin-exchange suppression diminishes when the spins are prepared into a less polarized state. A more gradual reduction in \(A\) is observed at elevated bias fields when employing either synchronous modulation or EOP, although this is not immediately apparent from Fig. 6(a) as the SNR stays relatively consistent. This is because \(\rho_{A}\) also reduces as \(B_{y}\) is increased; a consequence of the technical noise present in the detection system being less prominent at higher Larmor frequencies (see Fig. 4(b)). The \(1/f\) noise dependence of \(\rho_{A}\) also explains the drop in SNR at lower \(B_{y}\) values. Slightly higher signal amplitudes were achieved with synchronous modulation at low bias fields owing to the longer pumping period used. It is anticipated that much of the loss in amplitude observed for both EOP and synchronous modulation at larger bias fields can be attributed to nonlinear Zeeman (NLZ) splitting. This effect simultaneously broadens and distorts the magnetic resonance, and is more prevalent at stronger field magnitudes [42].
In the case of EOP, this broadening mechanism will be the main contributor to the \(\approx 80\,\mathrm{Hz}\) deviation in \(\gamma_{2}\) shown in Fig. 6(b) over the range of fields tested. Further investigation is necessary to fully quantify the effects NLZ has on the spin dynamics. Broadening due to magnetic field gradients should also be considered, although the deviation in magnetic field expected across the beam width based on the coil geometry is \(\Delta B\approx 5.5\,\mathrm{nT}\). Gradient broadening is determined by the spread of precession frequencies throughout the cell, i.e., \(\gamma_{gr}\sim\gamma\,\Delta B\) [43], which is around \(19\,\mathrm{Hz}\) in this case.

\begin{table} \begin{tabular}{c c c c c} \hline **Parameter** & **EOP** & **Synchronous** & **SP** & **Units** \\ \hline \(\mathbf{B_{y}}\) & \(49.5\) & \(49.7\) & \(49.8\) & \(\mathrm{\mu T}\) \\ \(\mathbf{T_{p}}\) & \(88.4\) & \(286\) & \(88.4\) & \(\mathrm{\mu s}\) \\ \(\mathbf{A}\) & \(7.29\) & \(7.02\) & \(0.86\) & \(\mathrm{V}\) \\ \(\mathbf{\gamma_{2}}\) & \(1.23\) & \(1.32\) & \(1.98\) & \(\mathrm{kHz}\) \\ \(\rho_{B}\) & \(238\pm 28.4\) & \(368\pm 38.5\) & \(3630\pm 400\) & \(\mathrm{fT}/\sqrt{\mathrm{Hz}}\) \\ \(\rho_{CR}\) & \(258\pm 21.7\) & \(351\pm 29.5\) & \(3130\pm 263\) & \(\mathrm{fT}/\sqrt{\mathrm{Hz}}\) \\ \hline \end{tabular} \end{table} Table 1: List of experimental parameters for each implemented optical pumping strategy: EOP, synchronous, and SP. The amplitude, \(A\), decay rate, \(\gamma_{2}\), and magnetic field, \(B_{y}\), were determined from a damped sinusoidal fit. \(\rho_{B}\) and \(\rho_{CR}\) are the sensitivity estimations based on the noise floor (see Fig. 5) and CRLB (see Eq. 4) predicted noise densities, respectively.

Figure 5: Magnetic field sensitivity spectra acquired using enhanced (blue), synchronous (orange), and single-pulse (yellow) optical pumping. Each ASD was acquired with \(B_{y}\approx 50\,\mathrm{\mu T}\). The noise floors (dash-dotted lines) were calculated by averaging the spectra over a \(70-500\,\mathrm{Hz}\) frequency range, whilst avoiding technical noise peaks. The corresponding sensitivity estimations, \(\rho_{B}\), are listed in Table 1.

The integrated area of the magnetic resonance, in the frequency domain, is a reflection of the spin polarization gained through optical pumping. Thus, assuming this remains constant, one would expect a lower signal amplitude given the additional broadening that is induced by the NLZ effect. EOP demonstrated consistent spin polarization buildup across the full range of bias fields tested. This is not surprising given that the field strength of \(\vec{B}_{p}\) is at least two orders of magnitude higher than \(B_{y}\), thus the spin polarization will no longer experience a significant torque during pumping. In contrast, the steady-state spin polarization achieved with synchronous modulation reduced as a function of \(B_{y}\). NLZ splitting inevitably influences the optical pumping dynamics in this case, as the atoms become more difficult to resonantly address due to the nonlinearity in the magnetic sublevel structure. Naturally, this will result in a steeper amplitude reduction with respect to \(B_{y}\) compared to EOP, which does not rely on a resonant driving field. Furthermore, this accounts for the sharper rise in \(\gamma_{2}\) for synchronous pumping as spin-exchange suppression becomes less effective. Figures 6(c-e) provide a comparison of sensitivity performance for each of the optical pumping schemes investigated.
Two approaches to sensitivity estimation were used, in accordance with those described in Section B. The first metric is the CRLB noise density, \(\rho_{CR}\), denoted by the gray data points. The second method estimates the noise floor, \(\rho_{B}\), of the ASD computed from the magnetic field recordings, represented as colored markers. It can be readily seen that the magnetometer noise floor reaches the CRLB limit across the full range of bias fields for each pumping technique. This validates that the heating strategy used in EOP is not lifting the sensor noise level, which is to be expected since the coil is rapidly demagnetized over a short period of approximately 200 ns. It also verifies that the magnetic field fluctuations produced by the bias field are well below the noise floor of the sensor. The sensitivity dependence matches well with expectations in accordance with Eq. 4 when considering the SNR and \(\gamma_{2}\) achieved in each case. The best sensitivity was obtained using EOP, as seen in the inset of Fig. 6(c), due to the improved signal amplitudes and relaxation rates achieved with more efficient optical pumping, especially at large magnetic field strengths.

### Accuracy considerations

FID-based sensors have a distinct advantage in accuracy compared to other OPM configurations as they are inherently self-calibrating and do not suffer from light shifts caused by intense optical pumping. Despite this, systematics are still present when operating in geophysical magnetic fields due to magnetic resonance asymmetries [44]. For example, both hyperfine ground states possess slightly different gyromagnetic factors, and consequently their precession frequencies diverge depending on the magnetic field strength. Moreover, the magnetic resonance can be distorted owing to NLZ splitting of a single hyperfine manifold [42]. In both cases, the induced systematics depend on the atomic distribution among the Zeeman sublevels of both hyperfine levels, which is sensitive to both spin preparation and subsequent relaxation. The linearity of the magnetometer response with magnetic field was characterized for the EOP and SP modulation schemes. The average magnetic field was monitored over 1 s measurement intervals at various coil supply currents, as seen in Fig. 7(a). This shows the residuals obtained from a linear fit to the recorded magnetic field data. The magnetometer responds relatively linearly with bias field in the case of EOP, as the residuals fluctuate around zero with no perceptible trends. EOP reliably generates the same polarization state independent of the applied field strength. Consequently, the atomic population mainly resides in the \(F=4\) ground state such that the spins precess at a single predominant frequency, minimising systematic effects.

Figure 6: FID magnetometer dynamic range comparison for three optical pumping strategies: EOP (blue), synchronous (orange), and SP (yellow). The bias field, \(B_{y}\), was varied between \(4-50\,\mathrm{\mu T}\). (a) SNR based on the fitted FID amplitude, \(A\), and the noise density, \(\rho_{A}\), at the Larmor frequency of interest. (b) Transverse relaxation rate, \(\gamma_{2}\), calculated from a single decay period (\(1/e\)) measured by fitting each FID trace to the model in Eq. 1. The markers are larger than the associated error bars. (c-e) Noise floor, \(\rho_{B}\), estimated from the ASDs (colors), and the associated CRLB noise density, \(\rho_{CR}\) (grey).
The residuals for SP modulation convey a distinct quadratic behaviour, as a result of inconsistency in the atomic population distribution achieved through optical pumping in various bias field conditions. In this case, the Larmor frequency estimation will be weighted according to the atomic population occupying each hyperfine state. In the EOP scheme, rapid coil demagnetization is crucial in preventing spin precession readout from being affected by the magnetic field pulse. Previously, this was verified electronically by measuring the transient current response, which demonstrated a \(\sim 200\,\mathrm{ns}\) fall time and an eventual decay to around \(135\,\mathrm{fT}\) due to the leakage current of the MOSFET. In order to observe the transient magnetic response from the magnetometer, an alternative signal processing strategy was devised based on a Hilbert transform. This vastly extends the magnetometer bandwidth, enabling resolution of frequencies up to the Nyquist limit dictated by the sampling rate of the DAQ system [24]. Generation of the full signal phasor using a dual matched finite-impulse response filter to perform a Hilbert transform allows estimation of the signal phase at each DAQ clock cycle, and hence the instantaneous precession frequency [25]. Consistent variation over time is observed using the SP scheme, and to a lesser extent with the EOP scheme, and the trend can be clarified by averaging over 50 consecutive FID pulses (see Fig. 7(b)). Figure 7(b) shows the residual variation in instantaneous magnetic field over the course of a single FID cycle in a bias field \(B_{y}\approx 50\,\mathrm{\mu T}\). The observations for SP optical pumping suggest that the observed systematic variation of precession frequency during each FID pulse is not a consequence of demagnetization, as no current was flowing through the PCB coils using this technique. Instead, the decay of spin polarisation during the FID pulse, combined with the dependence of NLZ-induced heading error on this polarisation, is likely to be responsible for this behaviour. Ground-state spin relaxation creates a time-dependent spin distribution, which varies consistently during each FID pulse. At high bias fields, the splitting between the Zeeman sublevels is no longer linear, creating a dependence of the measured Larmor frequency on the magnetic sublevel populations, which vary during the FID pulse. One would expect the trend to be more distinctive for the SP case as the influence of spin-exchange processes results in more complex evolution of the ground-state spin distributions. Significantly, a lower systematic shift is observed in the EOP case, confirming that the pre-polarising coil is sufficiently de-energised at the measurement onset.

## 4 Conclusion and Outlook

In conclusion, a more efficient spin preparation scheme for FID atomic magnetometers was demonstrated, by applying mT-level magnetic field pulses synchronized with the amplitude modulation of the pump light. A PCB coil pair encapsulates the miniaturized Cs vapor cell to produce this strong magnetic field along the optical pumping axis, whilst simultaneously resistively heating the vapor to the optimal atomic density for maximum sensitivity performance. No magnetic noise is produced as the coils are only active when optically pumping the atoms, and are rapidly demagnetized to near zero prior to spin readout. Tests of the demagnetization circuit's transient response yielded a fall time of \(213\,\mathrm{ns}\), closely matching theoretical predictions.
Furthermore, a digital Hilbert transform applied to individual FID traces revealed no signature of the demagnetization process; this is noteworthy given that accuracy is one of the key benefits of the FID modality. The spin polarization prepared through EOP is consistent over a wide range of bias fields, which aids in maintaining the sensor accuracy. Systematics in the Larmor frequency measurement induced by heading errors remain stable under various bias fields. Heading error can be corrected analytically based on the degree of spin polarization [45]; however, this compensation becomes more straightforward under consistent optical pumping conditions. An optimal magnetic sensitivity of \(238\,\mathrm{fT}/\sqrt{\mathrm{Hz}}\) was achieved in a bias field of \(50\,\mathrm{\mu T}\) using EOP, providing an improvement over existing optical pumping schemes. The sensor noise floor closely matched CRLB noise density predictions, indicating that magnetic noise, e.g., arising from electrical heating or the bias field, is not a limiting factor. Therefore, sensitivity improvements can only be achieved by increasing the signal amplitude or extending the spin coherence time, as the magnetometer is mostly photon shot-noise limited, especially at higher Larmor frequencies. The optical rotation experienced by the probe beam could be doubled by placing a reflector after the vapor cell in a double-pass geometry. A compact device would also benefit from this geometry due to the reduced standoff distance between the vapor cell and the signal source of interest. Recent novel fabrication techniques [32] make it feasible to manufacture thicker MEMS cells that exhibit higher signal amplitudes and longer spin relaxation times. Furthermore, the customizable cell geometries improve the available optical access that can be interrogated. One could also lower the noise density, \(\rho_{A}\), slightly by implementing a faster low-noise DAQ system with higher bit-resolution, e.g., using field programmable gate arrays (FPGAs). The EOP strategy is ideally suited to sensor commercialization as it provides a scalable solution that limits the hardware and software requirements for robust operation. For example, sensors based on resonantly driven spin precession require electronic feedback loops to maintain optimal performance, which increases complexity. Furthermore, more elaborate cell heating methods are required to prevent stray magnetic fields from raising the sensor noise floor. The magnetometer's topology could be made more compact and portable by replacing the pump and probe lasers with a vertical-cavity surface-emitting laser (VCSEL). The pump light would be frequency modulated in this case, as opposed to utilizing amplitude modulation with an AOM [26]. The power consumption of the pre-polarizing field and cell heating in the current configuration is approximately 0.8 W. This level of power is readily available via USB-C [46].

Figure 7: (a) Linear fit residuals of magnetic field data recorded at various bias coil currents, using EOP (blue squares) and SP modulation (yellow circles). (b) Residual variation in the instantaneous magnetic field recorded over a FID cycle using EOP and SP modulation as noted in the legend. The data were processed using a Hilbert transform to calculate the spin precession phase at each DAQ clock cycle, and hence the instantaneous precession frequency. The plot shows the residual variation from a linear phase dependence, and is an average of 50 consecutive FID pulses.
The power draw can be further optimized by modifying the coil geometry to balance current draw against the induced field. One could envision a high-performance finite-field sensor with minimal optical components, including: at least one VCSEL for pumping and probing, a quarter-wave plate, a MEMS vapor cell, PCB coils for EOP and heating, and a balanced detector.

## Funding

Innovate UK (ISCF-42186).

## Acknowledgments

AM was supported by a Ph.D. studentship from the Defence Science and Technology Laboratory (Dstl).

## Disclosures

The authors declare that there are no conflicts of interest related to this article.

## Data availability

Data underlying the results presented in this manuscript are available in Ref. [47].
2305.10367
Truncated Partial-Wave Analysis for $η$-photoproduction observables via Bayesian Statistics
A truncated partial-wave analysis is performed for $\eta$-photoproduction using the polarization observables $\sigma_0, \Sigma, T, E, F$ and $G$. Different truncation orders are analyzed for six energy bins within the range of $E^{lab}_{\gamma} \in [750, 1250]$ MeV. Bayesian statistics is combined with truncated partial-wave analysis for the first time to investigate the structure of emerging ambiguities and their relevance in comparison to each other. Marginal distributions for the electromagnetic multipole parameters are presented together with predictions for polarization observables which have not yet been measured, in order to determine promising future measurements able to remove remaining mathematical ambiguities.
Philipp Kroenert, Yannick Wunderlich, Farah Afzal, Annika Thiel
2023-05-17T16:46:07Z
http://arxiv.org/abs/2305.10367v3
# Truncated Partial-Wave Analysis for \(\eta\)-photoproduction observables

###### Abstract

A truncated partial-wave analysis is performed for \(\eta\)-photoproduction using the polarization observables \(\sigma_{0},\Sigma,T,E,F\) and \(G\). Different truncation orders are analyzed for six energy bins within the range of \(E_{\gamma}^{lab}\in[750,1250]\) MeV. Bayesian statistics is combined with truncated partial-wave analysis for the first time to investigate the structure of emerging ambiguities and their relevance in comparison to each other. Marginal distributions for the electromagnetic multipole parameters are presented together with predictions for polarization observables which have not yet been measured, in order to determine promising future measurements able to remove remaining mathematical ambiguities.

## I Introduction

Baryon spectroscopy is an experimental technique to acquire a better understanding of the strong interaction and its fundamental theoretical description given by quantum chromodynamics. Particles, for example pions, real photons, or electrons [1], are brought into collision with a nucleon. With a sufficiently high centre-of-mass energy, the nucleon can be excited to a resonant state, which is classified as a distinct particle with certain intrinsic properties. Two well-established examples of baryon resonances are the Delta resonance \(\Delta(1232)3/2^{+}\) and the Roper resonance \(N(1440)1/2^{+}\)[2]. As such resonances are often formed and decay via the strong interaction, their proper lifetimes are rather short, of the order of \(10^{-24}\) s for the above examples. A direct detection of resonances with state-of-the-art detectors is not possible. Instead, the analysis of the final-state particles' angular distributions using partial-wave analysis allows one to draw conclusions about the formation of the resonance and its inherent properties such as total angular momentum, mass, decay width and parity. Up to the present day, single pseudoscalar meson photoproduction reactions are the experimentally most studied reactions in terms of baryon spectroscopy. A comprehensive overview can be found in the recently published review on light baryon spectroscopy by Thiel et al. [1]. The experimental data which are used as input to partial-wave analyses are called polarization observables. In single pseudoscalar meson photoproduction there are sixteen distinct ones, and multiple facilities worldwide [3; 4; 5; 6; 7] have contributed to a large database. In addition, multiple partial-wave analysis approaches [8; 9; 10; 11; 12; 13] exist for describing the data and extracting information about the resonant states. Such states can also be predicted in a purely mathematical manner via theory models based on quantum chromodynamics, such as quark models or Lattice quantum chromodynamics, see for example [14]. However, theory models predict significantly more states than are experimentally confirmed, predominantly in the higher-mass region, which is known as the missing-resonance problem [1]. Indeed, there is no final conclusion yet, which motivates further studies and the exploration of new approaches within this field of physics. The above-mentioned partial-wave approaches share a common feature, the use of an energy-dependent parametrization for the complex amplitudes [1], which makes the results model-dependent.
With the previous points in mind, the paper at hand focuses on single pseudoscalar-meson photoproduction and on a completely model-independent analysis approach, namely truncated partial-wave analysis [15; 16; 17; 18; 19]. Based on a selection of measured polarization observables, the relevant complex multipole parameters are to be determined, which in turn define the four complex spin-amplitudes of the reaction, and thus the matrix elements of the quantum field theoretical transition-operator \(\mathcal{T}_{fi}\), as shown by Chew, Goldberger, Low and Nambu [20]. The large database of polarization observables enables a proficient choice of observables in order to avoid mathematical ambiguities. The determination of such an appropriate selection of observables is, for the problem of the extraction of the full production amplitudes, based on the theory of complete experiment analysis [21; 22; 23; 24]. The above-mentioned results have been derived under the assumption of measurements without uncertainties [25; 18]. A truncated partial-wave analysis, i.e. the extraction of the complex photoproduction multipoles up to some truncation angular-momentum \(\ell_{\text{max}}\), was first studied by Omelaenko [26]. A detailed treatment of the subject can be found in Refs. [17; 18; 19]. As such, it is an indispensable step in each analysis of experimental data to check for possible ambiguities and to study their relevance compared to other solutions. This paper applies Bayesian statistics to truncated partial-wave analysis for the first time. Therefore, the results in this paper are given as distributions, as opposed to point estimates, allowing the uncertainty of an estimated multipole parameter to be quantified with an unprecedented level of detail, which is of particular importance. Through this approach it becomes possible to study the phase space in more detail and, by association, the structure of the above-mentioned ambiguities. It is even possible to discover a certain connectivity between different solutions, which hints at problematic ambiguities. The statistical model takes systematic uncertainties of the used data sets as well as correlations between data points into account. The marginal parameter distributions are compared to the maximum a posteriori estimates in order to classify the results. Finally, predictions for the rest of the sixteen polarization observables, which were not utilized as input within this analysis, are given, which are then used to deduce promising future measurements. The paper is structured as follows: a concise introduction to Bayesian statistics is given in Section II. An outline of truncated partial-wave analysis, hence the foundation of the employed model, is provided in Section III, followed by a discussion of the mathematical ambiguities. The employed data sets are introduced and discussed in Section IV, accompanied by the discussion of their systematic uncertainties and correlations between the used data points. Section V deals in detail with the statistical assumptions underlying the analysis presented in this paper, which form the final posterior distribution as described in Section VI. The focus of Section VII is on the applied analysis methods. Particularly, necessary adaptations and extensions of the standard methods, due to a multimodal posterior, are discussed. Finally, the results of the truncated partial-wave analysis examined via Bayesian statistics are presented in Section VIII.
## II Basics of Bayesian Statistics

The fundamental equation of Bayesian statistics is Bayes' theorem [27; 28]: \[p(\mathbf{\Theta}\mid\mathbf{y})=\frac{p(\mathbf{y}\mid\mathbf{\Theta})\,p(\mathbf{\Theta})}{\int p(\mathbf{y}\mid\mathbf{\Theta})\,p(\mathbf{\Theta})\,\mathrm{d}\mathbf{\Theta}}. \tag{1}\] Hereby, \(\mathbf{\Theta}\) denotes the parameters of the used model whereas \(\mathbf{y}\) stands for the employed data. The posterior distribution \(p(\mathbf{\Theta}\mid\mathbf{y})\) is in general a multidimensional probability distribution reflecting the probability of the model parameters given the data. It consists of the likelihood distribution \(p(\mathbf{y}\mid\mathbf{\Theta})\), comprising the data points and model predictions, and the prior distribution \(p(\mathbf{\Theta})\), which encodes the current knowledge about the parameters of the model before the data are taken into consideration. The denominator in Eq. (1) plays the role of a normalization factor and can be neglected within the computations of parameter estimation as it is constant for fixed \(\mathbf{y}\). The overall goal of each analysis is to scan the relevant regions of the posterior accurately. From this, the parameter distributions can then be extracted, i.e. their marginal distributions1. In general, the posterior is non-trivial and the integrals encountered in the derivation of the marginal distributions cannot be solved analytically. Instead, one can employ numerical methods, such as Markov chain Monte Carlo algorithms, in order to estimate the involved integrals. For instance, the Metropolis-Hastings [29; 30] or the Hamiltonian Monte Carlo [31; 32] algorithm can be used, of which the latter one is applied in this work. The convergence of the Markov chains2 can be monitored by convergence diagnostics such as the potential-scale-reduction statistic \(\widehat{R}\)[34], the Monte Carlo standard error [33] (which depends on the effective sample size [28]) and trace plots [35].

Footnote 1: The marginal distribution of \(\Theta_{1}\) with respect to the posterior distribution \(p(\Theta_{1},\Theta_{2}\mid\mathbf{y})\) is defined as \(p(\Theta_{1}\mid\mathbf{y})=\int\mathrm{d}\Theta_{2}\,p(\Theta_{1},\Theta_{2}\mid \mathbf{y})\). [28]

To check the plausibility of the model under consideration, a posterior predictive check can be performed [28]. Hereby, replicated data distributions \(\mathbf{y}^{\mathrm{rep}}\) are generated using the sampled parameter distributions as input for the posterior distribution, while at the same time treating the data points as unknown parameters. Different statistical models can be compared by some measure for the goodness of fit. In Bayesian statistics, such a measure is the predictive accuracy of the model [28]. It can be estimated for example by cross-validation or the widely applicable information criterion [36; 37]. Both methods are discussed in detail in relation to their applicability within this work in Section VII.5.

## III Truncated Partial-Wave Analysis

Within this section, the basic equations of truncated partial-wave analysis for single pseudoscalar-meson photoproduction are outlined. For an in-depth explanation, the reader is referred to Refs. [18; 19]. Polarization observables are the measurable quantities of interest in single pseudoscalar-meson photoproduction. They are used as experimental input for a truncated partial-wave analysis.
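Before turning to the photoproduction formalism, the sampling machinery summarized in Section II can be illustrated with a deliberately simple toy problem (this is not the analysis code used in this work, which relies on Hamiltonian Monte Carlo and a far more involved likelihood): a random-walk Metropolis-Hastings sampler explores a two-parameter Gaussian posterior, and a basic potential-scale-reduction statistic \(\widehat{R}\) is computed from several chains.

```python
import numpy as np

# Toy posterior: Gaussian likelihood for data y with unknown mean and log-width,
# combined with broad Gaussian priors. Purely illustrative.
rng = np.random.default_rng(0)
y = rng.normal(1.5, 0.7, size=50)

def log_posterior(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    log_like = np.sum(-0.5*((y - mu)/sigma)**2 - np.log(sigma))
    log_prior = -0.5*(mu/10.0)**2 - 0.5*(log_sigma/10.0)**2
    return log_like + log_prior

def metropolis(n_steps, start, step=0.1):
    chain = np.empty((n_steps, 2))
    theta, lp = np.array(start, float), log_posterior(start)
    for i in range(n_steps):
        prop = theta + rng.normal(0.0, step, size=2)
        lp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject step
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chains = np.stack([metropolis(5000, s) for s in ([0, 0], [3, 1], [-2, -1], [1, 2])])
chains = chains[:, 2500:, :]                          # discard warm-up samples

# Gelman-Rubin potential-scale-reduction statistic for each parameter.
n = chains.shape[1]
W = chains.var(axis=1, ddof=1).mean(axis=0)           # within-chain variance
B = chains.mean(axis=1).var(axis=0, ddof=1) * n       # between-chain variance
R_hat = np.sqrt(((n - 1)/n * W + B/n) / W)
print("posterior means:", chains.reshape(-1, 2).mean(axis=0), " R-hat:", R_hat)
```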
In total there are sixteen distinct polarization observables, which can be calculated by measuring differential cross sections under different polarization states. Three groups can be distinguished: the unpolarized differential cross section, three single-polarization observables and twelve double-polarization observables [38]. A comprehensive list of the required polarization states for each observable is given in Table 1, while a mathematical definition is given in Appendix A, Table 4. The theoretical prediction of a profile function3 of a polarization observable depends on the energy \(W\) as well as the scattering angle \(\theta\) in the center-of-mass frame. It can be expressed as an expansion into the basis of associated Legendre polynomials \(P_{k}^{\beta_{\alpha}}\)[19]: \[\check{\Omega}^{\alpha}_{\rm theo}(W,\theta)=\rho\sum_{k=\beta_{\alpha}}^{2\ell_{\rm max}+\beta_{\alpha}+\gamma_{\alpha}}\mathcal{A}^{\alpha}_{k}(W)\;P_{k}^{\beta_{\alpha}}(\cos\theta). \tag{2}\] Equation (2) includes a kinematic phase-space factor \(\rho\), angular expansion parameters \(\beta_{\alpha},\gamma_{\alpha}\), which are fixed parameters for each of the sixteen polarization observables of pseudoscalar-meson photoproduction, and energy dependent series coefficients \(\mathcal{A}^{\alpha}_{k}\): \[\mathcal{A}^{\alpha}_{k}(W)=\mathcal{M}^{\dagger}(W)\cdot\mathcal{C}^{\alpha}_{k}\cdot\mathcal{M}(W). \tag{3}\] Here, \(\mathcal{M}\) denotes the complex multipole vector, which contains all participating multipoles involved for the truncation order \(\ell_{\rm max}\). A valid choice for the definition of this vector, by means of electromagnetic multipoles [20], is: \[\mathcal{M}(W)=[\] \[E_{0+}(W),E_{1+}(W),M_{1+}(W),M_{1-}(W),\] \[E_{2+}(W),E_{2-}(W),M_{2+}(W),M_{2-}(W),\ldots,\] \[E_{\ell_{\rm max}+}(W),E_{\ell_{\rm max}-}(W),M_{\ell_{\rm max}+}(W),M_{\ell_{\rm max}-}(W)\] \[]. \tag{4}\] In addition, Eq. (3) contains a complex \(4\ell_{\rm max}\times 4\ell_{\rm max}\) matrix \(\mathcal{C}\) for each observable \(\alpha\) and each summand \(k\). Its general definition can be found in [18]4. From these matrices one can not only read off the contributing partial-waves but also their interferences with each other [19].

Footnote 4: An overall factor of \(1/2\) is missing in the formula for \(\mathcal{C}^{\alpha}_{k}\) in [18].

Equations (2) to (4) imply:

1. The statistical analysis is performed for a single energy at a time.
2. The polarization observable \(\Omega^{\alpha}(W,\theta)\) and the unpolarized differential cross-section \(\sigma_{0}(W,\theta)\) have to share the same energy- and angular-binning.
3. The observables \(\Omega^{\alpha}(W,\theta)\) used within the truncated partial-wave analysis have to share the same energy binning.
4. As \(\check{\Omega}^{\alpha}(W,\theta)\) is an observable, i.e. a real number, the matrices \(\mathcal{C}^{\alpha}_{k}\) are Hermitian.
5. The bilinear form of \(\mathcal{A}^{\alpha}_{k}\) gives rise to mathematical ambiguities, as certain transformations leave this quantity invariant.

The last point is discussed in more detail in the following.

### Ambiguities

The origin of the immanent mathematical ambiguities lies in the definition of the polarization observables.
For photoproduction, they can be written in general as a bilinear product of the form [21; 24; 39]: \[\check{\Omega}^{\alpha}(W,\theta)=\kappa\cdot b^{\dagger}(W,\theta)\;\Gamma^{\alpha}\;b(W,\theta), \tag{5}\] with a numerical prefactor \(\kappa\), a vector \(b\) of length \(N_{\rm A}\), containing the complex spin-amplitudes \(b_{i}\), and a matrix \(\Gamma^{\alpha}\) with dimensions \(N_{\rm A}\times N_{\rm A}\). Certain transformations \(T\) of the complex spin-amplitudes \(b_{i}(W,\theta)\stackrel{T}{\longrightarrow}\tilde{b}_{i}(W,\theta)\) leave the bilinear product and thus the observable invariant. Hence, when all observables in a subset \(\left\{\check{\Omega}^{\alpha_{1}},\ldots,\check{\Omega}^{\alpha_{n}}\right\}\) are invariant under the same transformation, an ambiguity emerges [18; 21], as the experimental distinction between \(b_{i}\) and \(\tilde{b}_{i}\) is not possible any more. Such an ambiguity can be resolved by including a further observable \(\check{\Omega}^{\alpha_{k}}\) into the subset, which is not invariant under the specific transformation [18; 21]. There exists one special case of an ambiguity which cannot be resolved by including any further observables, namely the simultaneous rotation of all transversity amplitudes by the same (possibly energy- and angle-dependent) phase: \(b_{i}(W,\theta)\stackrel{T}{\longrightarrow}e^{i\phi(W,\theta)}b_{i}(W,\theta)\) (see [21]).

\begin{table} \begin{tabular}{c c c} Observable & Beam & Direction of target-/recoil- \\ & polarization & nucleon polarization \\ \hline \(\sigma_{0}\) & unpolarized & — \\ \hline \(\Sigma\) & linear & — \\ T & unpolarized & y \\ P & unpolarized & y' \\ \hline H & linear & x \\ P & linear & y \\ G & linear & z \\ F & circular & x \\ E & circular & z \\ \hline \(O_{x^{\prime}}\) & linear & x' \\ T & linear & y' \\ \(O_{z^{\prime}}\) & linear & z' \\ \(C_{x^{\prime}}\) & circular & x' \\ \(C_{z^{\prime}}\) & circular & z' \\ \hline \(T_{x^{\prime}}\) & unpolarized & x, x' \\ \(L_{x^{\prime}}\) & unpolarized & z, x' \\ \(\Sigma\) & unpolarized & y, y' \\ \(T_{z^{\prime}}\) & unpolarized & x, z' \\ \(L_{z^{\prime}}\) & unpolarized & z, z' \\ \hline \(L_{x^{\prime}}\) & linear & x, x' \\ \(C_{x^{\prime}}\) & linear & y, x' \\ \(T_{x^{\prime}}\) & linear & z, x' \\ E & linear & x, y' \\ \(\sigma_{0}\) & linear & y, y' \\ F & linear & z, y' \\ \(L_{x^{\prime}}\) & linear & x, z' \\ \(C_{x^{\prime}}\) & linear & y, z' \\ \(T_{x^{\prime}}\) & linear & z, z' \\ \(O_{x^{\prime}}\) & circular & y, x' \\ G & circular & x, y' \\ H & circular & z, y' \\ \(O_{x^{\prime}}\) & circular & y, z' \\ \end{tabular} \end{table} Table 1: This table collects the polarization configurations (beam, target, recoil) which allow one to measure the sixteen polarization observables of pseudoscalar meson photoproduction. In the center-of-mass coordinate system, the unprimed coordinates are chosen as follows: \(\hat{z}\)-axis along the incident photon beam direction and \(\hat{y}\) perpendicular to the reaction plane \(\hat{x}-\hat{z}\). The primed coordinate system is a rotation of the unprimed one such that the final-state meson momentum points along the \(\hat{z}^{\prime}\)-axis. The table is redrawn from Ref. [38]. A mathematical definition of the observables can be found in Appendix A, Table 4.
However, this continuous ambiguity can be ignored for the special case of a truncated partial-wave analysis, since the angle-dependent part of the ambiguity is generally removed by the assumed truncation (see comments made in reference [40]), and the energy-dependent part is fixed by imposing certain phase-conventions for the multipoles. The formalism for the remaining relevant discrete ambiguities in a truncated partial-wave analysis is outlined briefly in the following. For more information about discrete as well as continuous ambiguities in the case of the complete experiment analysis, see the paper of Chiang and Tabakin [21]. As shown by Omelaenko [18; 26], in a truncated partial-wave analysis (truncated at some finite \(\ell_{\rm max}\geq 1\)) the complex spin-amplitudes can be expressed (up to kinematical prefactors) as a finite product of irreducible polynomials: \[b_{1}(W,\theta) \propto \prod_{k=1}^{2\ell_{\rm max}}\bigg(\tan\frac{\theta}{2}+\beta_{k}(W)\bigg), \tag{6}\] \[b_{2}(W,\theta) \propto \prod_{k=1}^{2\ell_{\rm max}}\bigg(\tan\frac{\theta}{2}-\beta_{k}(W)\bigg),\] (7) \[b_{3}(W,\theta) \propto \prod_{k=1}^{2\ell_{\rm max}}\bigg(\tan\frac{\theta}{2}+\alpha_{k}(W)\bigg),\] (8) \[b_{4}(W,\theta) \propto \prod_{k=1}^{2\ell_{\rm max}}\bigg(\tan\frac{\theta}{2}-\alpha_{k}(W)\bigg), \tag{9}\] with the complex roots \(\alpha_{k}(W)\) and \(\beta_{k}(W)\), which are in essence equivalent to multipoles. It can be shown [17; 26; 41] that the special case where \(\tan\frac{\theta}{2}=0\) implies a direct connection between the roots: \[\prod_{i=1}^{2\ell_{\rm max}}\alpha_{i}(W)=\prod_{j=1}^{2\ell_{\rm max}}\beta_{j}(W). \tag{10}\] All transformations \(T\) which correspond to a discrete ambiguity of the four group \(\mathcal{S}\) observables \(\left\{\sigma_{0},\tilde{\Sigma},\tilde{T},\tilde{P}\right\}\) must also satisfy Eq. (10), which allows one to rule out a major part of the maximally possible \(4^{2\ell_{\rm max}}\)[17] discrete ambiguity transformations from the beginning. The so-called 'double ambiguity' [17; 26], which corresponds to the simultaneous complex conjugation of all roots, automatically preserves the constraint in Eq. (10). Unfortunately, there can also occur so-called accidental ambiguities. These emerge when any discrete ambiguity other than the double ambiguity of all roots approximately fulfills Eq. (10) [17]. The accidental ambiguities as well as the double ambiguity can in principle be resolved by including further observables into the analysis apart from the four group \(\mathcal{S}\) observables. Candidates for observables capable of resolving the above-mentioned discrete ambiguities would be either \(\tilde{F}\), \(\tilde{G}\) or any of the \(\mathcal{BR}\)- and \(\mathcal{TR}\)-type observables. The accidental ambiguities cannot be avoided for analyses of real data due to their abundance (i.e. \(4^{2\ell_{\rm max}}-2\) possible candidates exist for such ambiguities), and they will show up as modes within the posterior distribution and thus in the marginal parameter distributions. In contrast to the discrete ambiguities described above, there can also exist so-called continuous ambiguities in the truncated partial-wave analysis (in addition to the above-mentioned simultaneous phase-rotation of all transversity amplitudes, which has been ruled out), which exist on continuously connected regions within the multipole parameter-space [18].
These ambiguities can occur in case too small a set of observables is analyzed, and they manifest as plateau-like structures (with possibly rounded edges) in the marginalized posterior-distributions, as opposed to the peak-like structures (or modes) originating from discrete ambiguities. The set of six observables analyzed in this work (see Section IV) is large enough to avoid such continuous ambiguities. For more information about discrete ambiguities in truncated partial-wave analysis, the paper by Omelaenko [26] and especially the subsequent work [17] are recommended. The proof of the completeness of the set of six observables analyzed in this work (Section IV) in the idealized case of an 'exact' truncated partial-wave analysis5 proceeds a little differently compared to the work by Omelaenko [26]. The proof is outlined in some detail in Appendix A.

Footnote 5: Accidental ambiguities can be disregarded for this rather academic scenario [18].

Summarizing, accidental discrete ambiguities will likely be present within truncated partial-wave analysis performed on real data, resulting in a multimodal likelihood and posterior distribution.

## IV Discussion of the used database

A review of the currently available database on polarization observables for the reaction \(\gamma p\to\eta p\) can be found in [1]. In order to cover the largest possible energy range and to resolve discrete mathematical ambiguities, the truncated partial-wave analysis is performed using the six polarization observables \(\sigma_{0}\)[42], \(\Sigma\)[43], \(T\)[44], \(E\)[45], \(F\)[44] and \(G\)[46]. This choice of observables indeed resolves the discrete ambiguities of truncated partial-wave analysis, as shown in Appendix A. An overview of the data is given in Table 2 and a visualization of the phase-space coverage of the individual data sets can be found in Appendix B, Fig. 16. The available energies for the truncated partial-wave analysis are determined by the observable with the lowest statistics [18; 38], which in this case is the observable \(G\). In total six energy bins are available, starting near the \(\eta p\) photoproduction threshold at \(E_{\gamma}^{\rm lab}=750\) MeV up to 1250 MeV, in 100 MeV steps. As truncated partial-wave analysis is a single-energy fit, the energy binning of each observable has to be shifted to that of \(G\). The procedure is described in [18]. The advantage of this method is that no new, i.e. experimentally unobserved, data points have to be constructed, for example via interpolation. However, none of the observables are given as profile functions, which are needed for the truncated partial-wave analysis, see Eq. (2). Thus, the angular distribution of \(\sigma_{0}\) has to be adjusted for each observable, in order to multiply both. This is not an issue, since the very precise MAMI \(\sigma_{0}\)-dataset [42] covers a large angular range \([-0.958,0.958]\) with a small step size \(\sim 0.083\) at all available energies. The data discussed in Section IV have not only statistical but also systematic uncertainties. The latter ones originate primarily from the determination of the polarization degree of the photon beam and the target nucleon, the dilution factor6 as well as the background subtraction procedure [42; 43; 44; 45; 46].

Footnote 6: The dilution factor is the ratio of polarizable free protons to all nuclei in the used target material.

In principle, each data point has its own systematic uncertainty.
However, there is no generally accepted method to model the systematic uncertainty for each data point separately. Instead, the contributions to the systematic uncertainty, which are constant over the whole angular range, are determined for each data set. Then, the same systematic uncertainty is used for each data point within a data set. The contributions split up into the "_general systematic uncertainty_" (\(\sigma_{0}\): 4% [42, p. 5]), the degree of photon beam polarization (F: 2% [44], E: 2.7% [45], G: 5% [46]) and the degree of target polarization (T,F: 4% [44], E: 2.8% [45], G: 2% [46]). The authors of the polarization observable \(\Sigma\) added the statistical and systematic uncertainties in quadrature for each data point [43]. Thus, their systematic uncertainty cannot be modeled separately within this paper. The individual systematic contributions within a data set are combined in a conservative way. A worst-case scenario approach is employed, based on the formulas used to calculate the polarization observables, as given in the papers. In comparison with the 'standard' procedure of adding the different contributions in quadrature, there are two main advantages: 1) The functional dependence is taken into account without the need to make an assumption about the distribution of the individual contributions. 2) The worst-case scenario covers the maximum/minimum impact of the systematic uncertainties, and everything in between. As an illustrative example, suppose an observable \(A\) which depends reciprocally on the degree of polarization of the photon beam \(p_{\gamma}\) and target \(p_{\rm t}\), each with their own relative systematic uncertainty \(\Delta_{\rm sys}^{P_{\gamma}}\) and \(\Delta_{\rm sys}^{P_{\rm t}}\), respectively. Then the combined, relative systematic uncertainty of \(A\) would be: \[\Delta_{\rm sys}^{A}={\rm max}\Big{(}\big{|}1-(1+\Delta_{\rm sys}^{P_{\gamma}})^{-1}\cdot(1+\Delta_{\rm sys}^{P_{\rm t}})^{-1}\big{|},\\ \big{|}1-(1-\Delta_{\rm sys}^{P_{\gamma}})^{-1}\cdot(1-\Delta_{\rm sys}^{P_{\rm t}})^{-1}\big{|}\Big{)}. \tag{11}\] With input taken from the references corresponding to the respective data sets [42; 43; 44; 45; 46], the outlined approach results in: \(\Delta_{\rm sys}^{\sigma_{0}}=4.0\%\), \(\Delta_{\rm sys}^{\rm G}=7.4\%\), \(\Delta_{\rm sys}^{\rm E}=5.7\%\), \(\Delta_{\rm sys}^{\rm T}=4.2\%\), \(\Delta_{\rm sys}^{\rm F}=6.3\%\). Due to the calculation of the profile functions, the systematic uncertainties of both involved data sets have to be combined as well: \[\Delta_{\rm sys}^{\hat{A}}={\rm max}\Big{(}\big{|}1-(1+\Delta_{\rm sys}^{A})\cdot(1+\Delta_{\rm sys}^{\sigma_{0}})\big{|},\\ \big{|}1-(1-\Delta_{\rm sys}^{A})\cdot(1-\Delta_{\rm sys}^{\sigma_{0}})\big{|}\Big{)}. \tag{12}\] Thus, the relative systematic uncertainties for the profile functions are: \(\Delta_{\rm sys}^{\sigma_{0}}=4.0\%\), \(\Delta_{\rm sys}^{\hat{G}}=11.7\%\), \(\Delta_{\rm sys}^{\hat{E}}=10.0\%\), \(\Delta_{\rm sys}^{\hat{T}}=8.3\%\), \(\Delta_{\rm sys}^{\hat{F}}=10.5\%\). The incorporation of the systematic uncertainties into the statistical model is described in more detail in Section VI. Furthermore, the calculation of the profile functions introduces correlations between the unpolarized differential cross-section and the profile functions, as well as among the profile functions themselves.
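The following short sketch (illustrative only, not the analysis code of this work) implements the worst-case combinations of Eqs. (11) and (12); assuming that only the listed polarization degrees enter each asymmetry and that the general 4% applies to \(\sigma_{0}\) alone, it reproduces the percentages quoted above.

```python
from math import prod

def worst_case_reciprocal(*deltas):
    # Eq. (11): the observable depends reciprocally on each listed polarization degree
    return max(abs(1 - 1 / prod(1 + d for d in deltas)),
               abs(1 - 1 / prod(1 - d for d in deltas)))

def worst_case_product(d_obs, d_sigma0):
    # Eq. (12): the profile function is the product of the observable and sigma_0
    return max(abs(1 - (1 + d_obs) * (1 + d_sigma0)),
               abs(1 - (1 - d_obs) * (1 - d_sigma0)))

d_sigma0 = 0.04
d_obs = {                                      # polarization-degree uncertainties quoted above
    "G": worst_case_reciprocal(0.05, 0.02),    # beam 5%, target 2%
    "E": worst_case_reciprocal(0.027, 0.028),  # beam 2.7%, target 2.8%
    "T": worst_case_reciprocal(0.04),          # target 4%
    "F": worst_case_reciprocal(0.02, 0.04),    # beam 2%, target 4%
}
for name, d in d_obs.items():
    print(f"{name}: {100 * d:.1f}%,  profile {name}: {100 * worst_case_product(d, d_sigma0):.1f}%")
# -> G: 7.4%, 11.7%;  E: 5.7%, 10.0%;  T: 4.2%, 8.3%;  F: 6.3%, 10.5%
```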
Since certain values of \(\sigma_{0}(W,\theta)\) were used to calculate \(\tilde{\Omega}^{\alpha}(W,\theta)\), correlations were introduced between certain data points of both observables. Moreover, the same value of \(\sigma_{0}(W,\theta)\) might be used to calculate data points of different profile functions. The relevance of these correlations can be estimated via the Pearson correlation coefficient [48], see Eqs. (C4) and (C5) in Appendix C. The measured values of the polarization observables are used as expectation values and the corresponding squared statistical uncertainties as the variances. An example for a correlation matrix is shown in Fig. 1. The correlations are quite small, with absolute values below \(\sim 0.17\), but typically in the order of \(10^{-2}-10^{-3}\). An exception is the significantly higher correlation between \(\sigma_{0}\) and \(\sigma_{0}\cdot E\), with minimal and maximal values of \(\sim 0.29\) and \(\sim 0.67\), respectively. This can be explained by the similar definition of the coefficients \(\mathcal{A}_{k}^{\alpha}(W)\) of \(\sigma_{0}\) and \(\sigma_{0}\cdot E\). Both having sensitivity to almost the exact same interference terms of multipoles, albeit with different strengths (see Ref. [19]). The magnitude of the correlation matrix elements as a function of the energy can be seen in Fig. 2. The corresponding covariance matrix, which is used to construct the likelihood distribution in Section VI.1, can be estimated via Eqs. (30) and (31) in Appendix C. ## V Underlying assumptions An enormous strength of Bayesian statistics is its clarity about the underlying assumptions and how these evolve into the used statistical model. In general one has \(N\) data-pairs \(\left(y,x\right)_{i}\), where the two components can be distinguished as follows: 1. The random variables \(\mathbf{y}=(y_{1},\ldots,y_{N})\) follow a certain distribution. In this context, these correspond to the values of the profile functions of the polarization observables \(\hat{\Omega}^{\alpha}(W,\theta)\). 2. The explanatory variables [28]\(\mathbf{x}=(x_{1},\ldots,x_{N})\) do not belong to any probability distribution. In this context, these are the angular values \(\cos(\theta_{i})\) at which the \(y_{i}\) were measured. The underlying distribution of \(\mathbf{y}\) is of upmost importance as it defines the shape of the likelihood function and, by association, the structure of the parameter space. It is therefore essential to examine the distribution from which \(\mathbf{y}\) originates and discuss the validity of the involved assumptions. Hereby, an understanding of the data-taking as well as the subsequent analysis, to extract values for the polarization observables, is mandatory. For this reason, special emphasis is placed on their discussion within this paper. The polarization observables used within this analysis, originate from measurements at multiple experimental facilities: ELSA [5], MAMI [49] and GRAAL [50]. The measured quantities are count rates, corresponding to differential cross-sections, from which then, one or multiple \begin{table} \begin{tabular}{c c c c c} Observable & Number of data points & \(E_{\gamma}^{\text{lab}}\) / MeV & \(\cos(\theta)\) & Facility & References \\ \hline \(\sigma_{0}\) & \(5736\) & \([723,1571]\) & \([-0.958,0.958]\) & MAMI & Kashevarov et al. [42] \\ \hline \(T,F\) & \(144\) & \([725,1350]\) & \([-0.917,0.917]\) & MAMI & Akondi et al. [44] \\ \hline \(\Sigma\) & \(140\) & \([761,1472]\) & \([-0.946,0.815]\) & GRAAL & Bartalini et al. 
[43] \\ \hline \(E\) & \(84\) & \([750,1350]\) & \([-0.917,0.917]\) & MAMI & Afzal et al. [45, 47] \\ \hline \(G\) & \(47\) & \([750,1250]\) & \([-0.889,0.667]\) & CBELSA/TAPS & Müller et al. [46] \\ \end{tabular} \end{table} Table 2: Information on the experimental data, given as dimensionless asymmetries, used for the truncated partial-wave analysis of \(\gamma p\to\eta p\). Energy and angular ranges are written as intervals. Figure 1: Example for a correlation matrix. The correlations between the data points of the unpolarized differential cross-section \(\sigma_{0}\) and the used profile functions, as well as the correlations between the profile functions themselves, are shown for \(E_{\gamma}^{\text{lab}}=750\) MeV. Each square represents a certain data point. The color encodes the correlation strength ranging from \(-1\) (darker colors) to \(+1\) (lighter colors). Figure 2: Unique correlation matrix element values as a function of the lab frame energy. The color encodes the correlation strength ranging from \(-1\) (darker colors) to \(+1\) (lighter colors). polarization observables can be extracted. The two most common methods are a 'binned chi-square fit' and an 'unbinned maximum-likelihood fit' [45]. For the first case, it is common to use an asymmetry of the form: \[A\propto\frac{N_{1}-N_{2}}{N_{1}+N_{2}}, \tag{13}\] where \(N_{1},N_{2}\) are normalized count rates of reconstructed \(\gamma p\to\eta p\) events for different polarization states [43; 44]. This has the advantage that systematic effects, for example from the reconstruction efficiency, cancel out. Notably, the distribution of this asymmetry is not explicitly addressed in any of the analyses concerning polarization observables which the authors have encountered up to this point. However, since the distribution of \(A\) determines the structure of the likelihood distribution, it is mandatory to study its proper form. The count rates \(N_{1},N_{2}\) are Poisson-distributed random variables. If the expectation value, typically denoted as \(\lambda\), is high enough, the distribution goes over to a Gaussian distribution. For the data used here, this should be a good assumption. The sum or difference of two independent Gaussian-distributed random variables, as present in Eq. (13), is again Gaussian distributed, which can be shown for example using characteristic functions. However, the ratio \(Z=X/Y\) of two, possibly correlated, Gaussian-distributed random variables is far more complicated. A general treatment can be found in Ref. [51]. Additionally, a closed-form expression is given in Eq. (11), Appendix D.1. Indeed, there exist Gaussian shapes for the asymmetry \(A\) in certain limits, but there is also the possibility of a bimodal distribution [51]. Therefore, the shape of the asymmetry \(A\) has to be checked for the absence of a bimodal structure. In order to use \(\chi^{2}\) as the likelihood function, the distribution should be well approximated by a Gaussian distribution. These checks can be performed by inserting the corresponding values for the expectation values \((\mu_{x},\mu_{y})\), standard deviations \((\sigma_{x},\sigma_{y})\) and correlation \((\rho)\) into the formula for \(Z\) and its transformation, see Ref. [51], or by using Eq. (11). An alternative approach, in which the utilization of such an asymmetry can be circumvented, is the already mentioned 'unbinned maximum-likelihood fit'. However, in contrast to the first method, the detector acceptance has to be taken into account [45], which is possible [52].
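As a quick illustration of the check described above (a Monte-Carlo sketch, not part of the published analyses), one can sample the ratio of two correlated Gaussians: once in the Gaussian count-rate limit of the asymmetry, where \(X=N_{1}-N_{2}\) and \(Y=N_{1}+N_{2}\), and once in a deliberately pathological regime where the denominator is compatible with zero. The count expectations below are arbitrary illustrative values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ratio_samples(mu_x, sig_x, mu_y, sig_y, rho, n=200_000):
    # Correlated Gaussian pair (X, Y) and their ratio Z = X / Y
    cov = [[sig_x**2, rho * sig_x * sig_y], [rho * sig_x * sig_y, sig_y**2]]
    x, y = rng.multivariate_normal([mu_x, mu_y], cov, size=n).T
    return x / y

# Gaussian-limit asymmetry: X = N1 - N2, Y = N1 + N2 with Poisson variances lam1 + lam2
lam1, lam2 = 2000, 1500                       # illustrative count expectations
sig = np.sqrt(lam1 + lam2)
rho = (lam1 - lam2) / (lam1 + lam2)           # Cov(N1-N2, N1+N2) = lam1 - lam2
z = ratio_samples(lam1 - lam2, sig, lam1 + lam2, sig, rho)
print(f"high counts : skew={stats.skew(z):+.3f}, excess kurtosis={stats.kurtosis(z):+.3f}")

# Pathological regime: denominator has sizeable mass on both signs -> heavy tails, possible bimodality
z = ratio_samples(1.0, 1.0, 0.3, 1.0, 0.0)
z = z[np.abs(z) < 20]                         # restrict to a finite window for the summary
print(f"pathological: skew={stats.skew(z):+.3f}, excess kurtosis={stats.kurtosis(z):+.3f}, "
      f"P(Z<0)={np.mean(z < 0):.2f}")
# In the first case the shape is close to Gaussian and unimodal; in the second a chi-square
# likelihood would not be justified.
```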
Within the unbinned maximum-likelihood approach, the likelihood distribution can be modeled appropriately using Poisson distributions. Summarizing, it is advantageous to use the 'unbinned maximum-likelihood fit' for future analyses in order to extract values for the polarization observables. However, the distribution of the extracted polarization observables not only depends on the shape of the used likelihood function, but also implicitly on the method used to estimate the parameter uncertainties. Again, the distribution of the parameters is rarely explicitly discussed within papers such as the references cited in Table 2. The error analysis of MINUIT uses by default the HESSE approach [53], which assumes an asymptotic approximation to a Gaussian distribution for the parameters under consideration. Thus, it is likely that the parameters were assumed to be Gaussian distributed. Another indication in the same direction is that all data used within the present analysis (cf. Table 2) have symmetric statistical uncertainties [42; 43; 44; 45; 46]. The profile functions \(\hat{\Omega}^{\alpha}\) are calculated as a product of random variables. However, even when these two random variables are independent and Gaussian distributed, the result is in general not Gaussian; it is approximately Gaussian only when one of the two standard deviations is very small, see [18] or Appendix D.2. Fortunately, this is the case for \(\sigma_{0}\), as it is the observable in \(p\eta\)-photoproduction measured with unprecedented accuracy. ## VI The posterior distribution Using the knowledge of Section V, it seems reasonable to assume that the used dimensionless polarization observables, as well as the unpolarized differential cross-section, are Gaussian distributed and independent of each other. Furthermore, it seems reasonable that the profile functions are Gaussian distributed. The profile functions are correlated with the unpolarized differential cross-section, as well as among themselves, see Section IV. This dependence is modeled within the likelihood distribution using a covariance matrix. In favor of a compact representation, the functional dependencies are not shown explicitly in the subsequent equations. ### Likelihood distribution Combining the results of Sections IV and V, the conditional likelihood distribution, for each of the analyzed energies, can be formulated as an \(N\)-dimensional multivariate Gaussian distribution: \[p(\mathbf{y},\mathbf{x}\mid\mathbf{\Theta},\mathbf{\kappa}) =\mathcal{N}(\mathbf{\mu}(\mathbf{\Theta},\mathbf{\kappa},\mathbf{x}),\mathbf{\Lambda})\] \[=\frac{\exp\Bigl{(}-\tfrac{1}{2}(\mathbf{y}-\mathbf{\mu})^{\mathrm{T}}\mathbf{\Lambda}^{-1}(\mathbf{y}-\mathbf{\mu})\Bigr{)}}{\sqrt{(2\pi)^{N}|\mathbf{\Lambda}|}}. \tag{14}\] Hereby, the vectors \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{N}\) contain the entirety of the \(N\in\mathbb{N}\) used polarization observable data points and the corresponding angular values at which they were measured, respectively: \[\mathbf{y} =[\mathbf{y}^{\sigma_{0}},\mathbf{y}^{\bar{G}},\mathbf{y}^{\bar{\Sigma}},\mathbf{y}^{\bar{E}},\mathbf{y}^{\bar{T}},\mathbf{y}^{\bar{F}}], \tag{15}\] \[\mathbf{x} =[\mathbf{x}^{\sigma_{0}},\mathbf{x}^{\bar{G}},\mathbf{x}^{\bar{\Sigma}},\mathbf{x}^{\bar{E}},\mathbf{x}^{\bar{T}},\mathbf{x}^{\bar{F}}]. \tag{16}\] The parameters of the model can be divided into two groups. On the one hand, the real and imaginary parts of the multipoles, i.e. Eq. (4), denoted by \(\mathbf{\Theta}\in\mathbb{R}^{8\ell_{\mathrm{max}}-1}\), are used to model the underlying physical process.
On the other hand, the parameters \(\mathbf{\kappa}\in\mathbb{R}^{5}\) are used to model the systematic uncertainties of the involved data sets: \[\mathbf{\kappa}=[\kappa^{\sigma_{0}},\kappa^{\hat{G}},\kappa^{\hat{E}},\kappa^{\hat{T}},\kappa^{\hat{F}}]. \tag{17}\] The multivariate normal distribution in Eq. (14) is constructed with the model predictions \(\mathbf{\mu}\in\mathbb{R}^{N}\) for the expectations of \(\mathbf{y}\): \[\mathbf{\mu}(\mathbf{\Theta},\mathbf{\kappa},\mathbf{x}) =[\kappa^{\sigma_{0}}\cdot\mathbf{\mu}^{\sigma_{0}},\kappa^{\hat{G}}\cdot\mathbf{\mu}^{\hat{G}},1\cdot\mathbf{\mu}^{\hat{\Sigma}},\] \[\kappa^{\hat{E}}\cdot\mathbf{\mu}^{\hat{E}},\kappa^{\hat{T}}\cdot\mathbf{\mu}^{\hat{T}},\kappa^{\hat{F}}\cdot\mathbf{\mu}^{\hat{F}}]. \tag{18}\] The \(\mathbf{\mu}^{\alpha}(\mathbf{\Theta},\mathbf{x}^{\alpha})\) are the model predictions for the individual profile functions, i.e. Eq. (2). Hence, in order to model the systematic uncertainties, one additional parameter per relevant data set is introduced and multiplied with the corresponding theoretical prediction for the profile function. Thus, the model gets additional degrees of freedom to adjust for possible systematic uncertainties. However, these parameters are restricted to physically meaningful bounds, further discussed in Section VI.2. As explained in Section IV, the systematic uncertainty of the polarization observable \(\Sigma\) cannot be modeled. Finally, there is the covariance matrix \(\mathbf{\Lambda}\in\mathbb{R}^{N\times N}\). Its off-diagonal terms are not identical, and therefore the data-pairs are not exchangeable7. This will become relevant when discussing the predictive performance in Section VII.5. Footnote 7: If the joint probability density function \(p(\mathbf{y},\mathbf{x}|\mathbf{\Theta},\mathbf{\kappa})\) is invariant under permutations of the data-pairs \((y,x)_{i}\), then the data-pairs are said to be exchangeable [28; 54]. ### Prior distribution The priors for the multipole parameters are chosen as uniform priors with bounds corresponding to the physically allowed ranges of the parameters (see [18]). Thus, the priors incorporate physical knowledge while being uninformative compared to the likelihood distribution. In principle, a uniform prior for the systematic parameters would be reasonable as well. However, in this case the hard boundaries in the parameter space lead to numerical issues. Thus, the prior distributions for the scaling parameters \(\mathbf{\kappa}\) are assumed to be normally distributed and centered around the value one. The standard deviation is chosen such that8 99% of the distribution lies within the range \(1\pm\Delta_{\text{sys}}^{\alpha}\), which results in (rounded to five digits): Footnote 8: This can be calculated by solving numerically the following equation for the standard deviation \(\sigma\): \[\int_{-\infty}^{1-\Delta_{\text{sys}}^{\alpha}}\frac{\exp\Bigl{(}-\frac{1}{2}\bigl{(}\frac{x-1}{\sigma}\bigr{)}^{2}\Bigr{)}}{\sigma\sqrt{2\pi}}\,\mathrm{d}x=\frac{1-0.99}{2}.\] \[\kappa^{\sigma_{0}} \sim\mathcal{N}(1,0.01552), \tag{19}\] \[\kappa^{\hat{G}} \sim\mathcal{N}(1,0.04542),\] (20) \[\kappa^{\hat{E}} \sim\mathcal{N}(1,0.03882),\] (21) \[\kappa^{\hat{T}} \sim\mathcal{N}(1,0.03222),\] (22) \[\kappa^{\hat{F}} \sim\mathcal{N}(1,0.04076). \tag{23}\] This choice is in accordance with the conservative combination of the systematic uncertainties as discussed in Section IV. The treatment of systematic errors within this paper is similar to that in Refs. [55; 38; 56].
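A small sketch (not the analysis code) showing how the prior widths follow from the 99% requirement: inverting the condition of footnote 8 gives \(\sigma=\Delta_{\text{sys}}^{\alpha}/\Phi^{-1}(0.995)\), which can be evaluated directly from the combined systematic uncertainties quoted in Section IV.

```python
from scipy.stats import norm

# sigma such that 99% of N(1, sigma^2) lies within 1 +- Delta_sys
# (equivalent to solving the integral condition of footnote 8)
deltas = {"sigma0": 0.040, "G": 0.117, "E": 0.100, "T": 0.083, "F": 0.105}
z_995 = norm.ppf(0.995)                      # ~2.5758
for name, delta in deltas.items():
    print(f"kappa_{name}: sigma = {delta / z_995:.5f}")
# -> 0.01553, 0.04542, 0.03882, 0.03222, 0.04076,
#    agreeing with Eqs. (19)-(23) up to rounding of the inputs.
```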
## VII Analysis steps This section explains in detail the analysis steps in order to determine the complex multipole parameters using Bayesian statistics, from which predictions of polarization observables are then obtained. The posterior, which may in many cases be explicitly multimodal, and the goal to analyze the structure of the mathematical ambiguities, bear a major challenge with respect to the sampling of the posterior distribution. On the one hand, posteriors with multiple modes connected by regions of low posterior density persuade the Markov chains to get stuck within a certain mode, unable to explore multiple ones [28]. This results in drastically9 failing Markov chain Monte Carlo convergence diagnostics, such as the potential-scale-reduction statistic \(\hat{R}\). Footnote 9: This behavior was to be expected since \(\hat{R}\) is a measure whether all chains have converged to the same distribution. On the other hand, the number of possible modes increases exponentially with the truncation order \(\ell_{\text{max}}\). An upper limit can be given by \(2^{4\ell_{\text{max}}}-2\), as this is the maximal possible number of accidental ambiguities of the four group \(\mathcal{S}\) observables (note that the bulk of this number is probably not realized as actual ambiguities, due to the multiplicative constraint Eq. (10)) [18]. Capturing consistently all modes of the marginal posterior distributions via a large number of chains, with randomized starting values is computationally inefficient. Furthermore, randomized starting values will lead to traceplots where one can not distinguish between chains that have not converged yet and chains which have explored more than one mode. An illustrative example is shown in Fig. 3. These difficulties can be overcome by specifying well chosen starting values for the Markov chain Monte Carlo algorithm, explained in more detail in Sections VII.2 and VII.3. On that account, certain parts of the typical Bayesian workflow [57] have to be adapted. ### Monte Carlo maximum a posteriori estimation In order to compare between different solutions, found within the same analysis, it is important to find all modes of the marginal posterior distributions, especially the global maximum. As already mentioned, the number of accidental ambiguities rises exponentially with the truncation order. Thus, the utilization of an optimization routine is substantially more efficient10 than a large number of Markov chain Monte Carlo chains. With this in mind, a Monte Carlo maximum a posteriori estimation of the proposed posterior is employed as a preparatory step for the Bayesian inference procedure. The results of the following approach are cross checked via an implementation in Mathematica [58], using the Levenberg-Marquardt algorithm [59; 60], as well as in Julia [61], using the L-BFGS-algorithm [62; 63; 64; 65; 66] via Optim.jl [67]. Footnote 10: Integration is far more computation-intensive than differentiation. At first, one needs to fix the overall phase of the multipoles, due to the bilinear product in Eq. (3). Indeed, without such a constraint the minimization algorithms would have convergence problems, as the solutions are no longer located at isolated points in the parameter space but on continuous connected regions. Without loss of generality, a valid choice is \(\text{Re}(E_{0+})>0\), \(\text{Im}(E_{0+})=0\)[18]. Second, the minimization algorithm is performed for \(n\) different starting values. 
The starting values are chosen within the physically allowed parameter space, which solely depends on the total cross-section \(\sigma_{\text{tot}}\)[18; 68]. Fortunately, the unpolarized differential cross-section is the most accurately measured observable in \(p\eta\)-photoproduction [1], thus yielding accurate limits. An appropriate amount of \(n\) equidistant points is chosen on each axis of this \(8\ell_{\text{max}}-1\) dimensional hyper-rectangle, such that the volume is sufficiently covered. Each of these parameter configurations is then used as starting values for the minimization algorithm. Finally, the non redundant solutions, of the \(n\) possible mode candidates, can be extracted via a clustering algorithm. Hereby, all values of the multipole parameters are rounded to six digits. Then the unique solution vectors can be filtered out. A rough estimate for the uncertainty of each parameter solution is calculated via the inverse of the Hesse matrix [69], i.e. assuming a Gaussian shape of the parameters. ### Sampling of the posterior Within this work, the well established probabilistic programming software Stan [70] has been used to encode the employed model and to run the posterior sampling with the state-of-the-art Hamiltonian Monte Carlo algorithm [31; 32] in combination with the No-U-Turn sampler [71]. The employed Stan model can be found in the supplementary material Ref. [72]. For each mode of the posterior distribution, determined within Section VII.1, \(N_{\text{c}}\) chains are sampled with starting values for the multipole and systematic parameters equal to the corresponding \((8\ell_{\text{max}}+4)\)-dimensional solution vector. This approach ensures adequate sampling of all marginal posterior modes and enables again a meaningful convergence diagnostics, further discussed in Section VII.3. Hence, this is true as long as the posterior modes are in the vicinity of the 'typical set' 11, which is the case within this paper. Footnote 11: An illustration of the ‘typical set’ can be found in [73]. The following tuning-parameters of the Hamiltonian Monte Carlo algorithm and the No-U-Turn sampler are adapted to the problem at hand. The average Metropolis acceptance probability \(\delta\in[0,1]\) is increased from its default value of \(0.8\) to \(\delta=0.99\). Thus, preferring a more fine-grained sampling, i.e. smaller leapfrog12 steps \(\epsilon\)[71], over the additional computation time. The maximum tree depth, with a default value of \(10\), is increased to \(50\), so that the algorithm can explore even challenging posterior regions without hitting the termination conditions [70]. Figure 3: Illustration of the first \(1000\) sampling points of a chain with initial value at \(3.7\) (the blue vertical line). The first sampling point is drawn in red. The chain converges from its starting point to a more likely solution, i.e. with higher log posterior density value. ### Monitor Markov chain Monte Carlo convergence Naturally one is interested in how well the structure of the posterior was explored by the applied Markov chain Monte Carlo algorithm. The goal is to diagnose whether all Markov chains have explored the same part of the posterior distribution [28], i.e. whether the obtained distribution is reliable or accrued due to a random effect. This can be monitored by convergence diagnostics such as the potential-scale-reduction statistic \(\hat{R}\)[34] and Monte Carlo standard error [33] (which depends on the effective sample size [28]). 
Within this work, the adapted versions of these diagnostics, as proposed by Vehtari et al. [74], are employed. In addition, trace plots [35] can be used to monitor the behavior of chains which explore multiple marginal modes. For each of these diagnostics, it is essential to use multiple chains [74; 35] for a reliable result. However, a multimodal posterior provides some pitfalls. As already mentioned at the beginning of Section VII, the Markov chains can get stuck in certain, isolated modes. Thus not all chains would have seen the same parts of the posterior distribution and the convergence diagnostics would indicate that the chains have not converged. Therefore, in case a multimodal posterior is studied, where all modes are of interest, the usual methods are not applicable. An adaptation has to be made. Under the assumption that all modes of the posterior were found via Monte Carlo maximum a posteriori estimation, see Section VII.1, the following strategy is employed. A schematic representation of the adapted approach can be found in Fig. 4. Instead of applying the convergence diagnostics to all chains at once, the chains are clustered into groups according to their sampled parameter space and the convergence diagnostics are then applied onto each group separately13. Consequently, the convergence for the whole posterior is monitored. Footnote 13: A similar approach was used in Ref. [75]. The chains can be grouped according to their similarity as follows: To avoid problems during the clustering process, coming from high dimensional data [76], a dimensional reduction of the chains is performed. Each chain, consisting of \(S\) sampling points, is characterized via a vector of its quantiles, in this case the \([0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]\) - quantiles. Subsequent, the corresponding distance matrix [77] of the quantile vectors is calculated using the Euclidean metric. The constructed matrix serves as input for the DBSCAN algorithm [78]. The minimal cluster size should be at least two, as this is the minimal amount of chains required to perform the \(\hat{R}\) diagnostic [35]. An appropriate \(\epsilon\) - neighborhood for the DBSCAN algorithm can be graphically determined, for example by visualizing the Euclidean distances of the quantile vectors to each other. Afterwards, the correct clustering of chains can be checked visually. Alternatively, the two-sample Kolmogorov-Smirnov test [79; 80] or the K-Sample Anderson-Darling test [81] could be employed to compare two distributions with each other. The outlined approach still allows to adjust the number of chains \(N_{\mathrm{c}}\) per group and the sampling points \(S\) in order to gain adequate convergence diagnostics and the desired precision for the parameter estimates. Within this paper, one is aiming for \(\hat{R}<1.01\)[74] and a relative Monte Carlo standard error in the region of a few percent. ### Posterior predictive check In general, a posterior predictive check [28] is useful for determining flaws within the Bayesian analysis, such as problematic data points, programming errors, systematic effects or the inadequacy of the employed model. Furthermore, it allows for a more detailed investigation of the ambiguities, as it helps to clarify if a clear distinction between the different solutions is even possible. In conjunction with this, it is possible to check whether additional measured observables could resolve the ambiguity. 
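The chain-grouping step described in Section VII.3 above can be sketched as follows: a toy example with made-up chains (not the chains of this analysis), using the quantile summary, Euclidean distances, DBSCAN with a minimal cluster size of two, and a simple per-group split-\(\hat{R}\); the actual analysis employs the rank-normalized diagnostics of Ref. [74], and the \(\epsilon\) value is an illustrative choice.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def split_rhat(chains):
    # chains: array of shape (n_chains, n_draws); classic split-R-hat
    half = chains.shape[1] // 2
    c = np.concatenate([chains[:, :half], chains[:, half:2 * half]], axis=0)
    m, n = c.shape
    within = c.var(axis=1, ddof=1).mean()
    between = n * c.mean(axis=1).var(ddof=1)
    return np.sqrt(((n - 1) / n * within + between / n) / within)

def group_chains(chains, eps):
    # Summarize every chain by a vector of its deciles and cluster with DBSCAN
    quantiles = np.quantile(chains, np.arange(0.1, 1.0, 0.1), axis=1).T
    return DBSCAN(eps=eps, min_samples=2).fit_predict(quantiles)

# Toy example: 6 chains sampling one marginal mode, 4 chains a second one
rng = np.random.default_rng(3)
chains = np.vstack([rng.normal(0.0, 0.1, size=(6, 5000)),
                    rng.normal(1.5, 0.1, size=(4, 5000))])
labels = group_chains(chains, eps=0.2)       # eps chosen by inspecting the distances
for g in sorted(set(labels) - {-1}):
    grp = chains[labels == g]
    print(f"group {g}: {grp.shape[0]} chains, split-Rhat = {split_rhat(grp):.4f}")
```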
Furthermore, predictions of polarization observables which were not included in the analysis can be calculated.

Figure 4: Adapted workflow to monitor Markov chain Monte Carlo convergence due to a multimodal posterior.

The central component is the probability distribution of reproduced data points \(\mathbf{y}^{\rm rep}\) given the used data \(\mathbf{y}\), which is called the posterior predictive distribution [28; 37]: \[p(\mathbf{y}^{\rm rep}\mid\mathbf{y}) =\mathbb{E}_{\mathbf{\Theta}}[p(\mathbf{y}^{\rm rep}\mid\mathbf{\Theta})],\] \[=\int p(\mathbf{y}^{\rm rep}\mid\mathbf{\Theta})p(\mathbf{\Theta}\mid\mathbf{y})\,\mathrm{d}\mathbf{\Theta}\,, \tag{24}\] where \(\mathbb{E}_{\mathbf{\Theta}}[\ldots]\) denotes the expectation value over the parameter-vector \(\mathbf{\Theta}\). Hence, this allows one to compare each data point \(y_{i}\) directly with its corresponding replicated marginal distribution \(p(y_{i}^{\rm rep}\mid\mathbf{y})\). Both should look similar under a reasonable model [28]. Irregularities, such as outliers or statistically weak data points, can be detected. As in the last section, the posterior predictive check is performed for each group of chains. Hence, one can check whether different marginal modes give the same posterior predictive distribution, which is very helpful for identifying potentially harmful ambiguities. ### Predictive performance In principle, predictive performance could be used to compare different solutions found within the same analysis or even different models. However, as described in Section IV, the statistical analysis takes correlations between the used data points into account. As a consequence, the off-diagonal elements of the calculated covariance matrices, for all six energies, are non-identical. Thus, the data pairs \((y,x)_{i}\) are no longer exchangeable. Unfortunately, methods like cross-validation [28] or information criteria like the widely applicable information criterion [37] are therefore no longer applicable for estimating the predictive performance. This is explained in detail in Appendix E. As an alternative approach to compare different solutions found within one specific analysis, the log posterior density distributions are used. It is assumed that higher log posterior density values correspond to more likely parameter values (within one specific analysis). ### Analysis of generated data It is crucial to prove the correct implementation and validity of the used model. An ideal testing scenario would be the a priori knowledge of the correct outcome of the analysis using the model under consideration. Therefore, the partial-wave analysis solution EtaMAID2018 [82] is employed for the electromagnetic multipoles in Eq. (4) up to the desired truncation order \(\ell_{\rm max}\). By these means, pseudo data for the profile functions \(\hat{\Omega}^{\alpha}(W,\theta)\) can be generated via Eq. (2) for certain energies and angular positions for the observables \(\sigma_{0}\), \(\Sigma\), \(T\), \(E\), \(F\) and \(G\). These data are then used as input for the truncated partial-wave analysis following the steps described in Sections VII.1 to VII.4. The analysis should again yield the EtaMAID2018 multipole solutions. ## VIII Results In a first step, Bayesian inference was applied to the generated data, see Section VII.6, and was successful in extracting the EtaMAID2018 multipole solutions again. In a second step, Bayesian inference, i.e. the approach outlined in Sections VII.1 to VII.5, was applied to the experimental data sets introduced in Section IV.
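As a rough illustration of Eq. (24) above (a minimal sketch, not the implementation used here), replicated data can be generated by pushing posterior draws through the model prediction of Eq. (18) and adding noise drawn from the covariance matrix of Eq. (14); `model_prediction` is a placeholder for the profile-function model, Eq. (2).

```python
import numpy as np

def posterior_predictive(theta_draws, kappa_draws, x, cov, model_prediction, seed=0):
    """Draw y_rep ~ N(mu(Theta, kappa, x), Lambda) for each posterior sample, cf. Eq. (24).

    theta_draws, kappa_draws: posterior samples, one row per draw
    model_prediction:         callable returning mu(Theta, kappa, x), cf. Eq. (18)
    cov:                      covariance matrix Lambda of the likelihood, Eq. (14)
    """
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(cov)               # cov is assumed positive definite
    reps = []
    for theta, kappa in zip(theta_draws, kappa_draws):
        mu = np.asarray(model_prediction(theta, kappa, x))
        reps.append(mu + chol @ rng.standard_normal(mu.size))
    return np.asarray(reps)                      # shape (n_draws, N)
```

Each column of the returned array approximates \(p(y_{i}^{\rm rep}\mid\mathbf{y})\) and can be histogrammed and overlaid with the corresponding measured data point, as done in Figs. 7 and 9.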
The results of these analyses are discussed in the following. The maximum a posteriori approach implemented in Julia and Mathematica gave the same results for the analyzed truncation orders and energy-bins, i.e. with two different minimization algorithms. Predictions for \(H\), \(P\), as well as the polarization observables of group \(\mathcal{BR}\) and \(\mathcal{TR}\) are generated 14. All predicted data distributions are within the physical bounds between -1 and 1 and their overall course over the angular range shows the correct tendency at \(\cos(\theta)=\pm 1\) towards the mathematically expected values [17], see for example Fig. 15 and Ref. [72]. In general, the posterior predictive checks are plotted together with the theory values of EtaMAID2018 [82], BnGa-2019 [46] and JuBo-2022 [56]. Footnote 14: To get from the profile functions to the dimensionless polarization observables, the predicted distribution is divided by a certain \(\sigma_{0}\)-value, corresponding to the \(\cos(\theta)\)-value at which the prediction were calculated. Bayesian inference gives more insight into the relevance of ambiguities, due to the Hamiltonian Monte Carlo algorithm. When multiple chains sample consistently multiple marginal modes together, this is a sign of a problematic ambiguity, as they tend to have comparable log posterior densities. An example is shown in the top left of Fig. 12. The presentation of the multipole parameter results is quite detailed and deserves an explanation. The top part shows the solutions found via Monte Carlo maximum a posteriori estimation and their corresponding \(\chi^{2}/ndf\) values, together with the \(1\sigma\)-uncertainty (see Section VII.1). The middle part shows the marginal-parameter distributions obtained via Bayesian inference, as explained in Sections VI and VII.2. For a better comparison of the two approaches for \(\ell_{\rm max}=1\), the \([0.16,0.5,0.84]\)-quantiles of the distributions, corresponding to the median of the distribution and the \(1\sigma\)-uncertainty boundaries, are drawn as dashed lines through all parts of the figure. Whereas, for \(\ell_{\rm max}=2\) a solid vertical line is drawn for each peak of the multimodal distribution, i.e. the most probable values. The bottom part of the figure is a contour plot of the log posterior density distribution and the corresponding marginal-parameter distribution. The outermost contour line is at 1% of the maximum altitude, each subsequent line represents an \(11\%\) increase. It is assumed that a log posterior distribution centered around a higher log posterior value, corresponds to more likely parameter values, as this solution contributes more probability mass to the posterior. Each solution group is drawn in a different color. These colors are consistent between the shown figures (Monte Carlo convergence-, multipole-, predictive performance plots, etc.) for a certain energy and truncation order. Within the following discussion of the results a representative selection of figures is shown. All parameter figures, for all analyzed energies and truncation orders can be found in the supplementary material Ref. [72]. ### Truncation order \(\ell_{\text{max}}=1\) The number of warmup and post-warmup samples are set to \(2\times 10^{4}\), respectively, for each of the six energy-bins. The number of chains started at each solution, found via the Monte Carlo maximum a posteriori approach, is set to \(N_{\text{c}}=10\). The corresponding convergence diagnostics, which are shown in Appendix F, Fig. 
17, validate this choice, i.e. \(\hat{R}<1.01\) and a relative Monte Carlo standard error in the region of a few percent or less. The following discussion is separated according to two sets of energy-bins, in each of which the analyses exhibit a distinct behavior. #### iv.1.1 \(E_{\gamma}^{\text{lab}}=[750,850,950]\) MeV For the energies in the neighborhood of the \(\eta p\) production threshold, one has two distinct groups of chains. The corresponding log posterior density distributions are clearly distinguishable. A typical example is shown at the top of Fig. 5.

Figure 5: Examples for log posterior density distributions. Top) Two groups of chains with clearly distinct distributions, \(E_{\gamma}^{\rm lab}=750\) MeV and truncation order of \(\ell_{\rm max}=1\). Bottom) Five groups of chains with similar distributions, \(E_{\gamma}^{\rm lab}=1150\) MeV and truncation order of \(\ell_{\rm max}=2\).

The estimated parameter values associated with the curves around \(-50\) are more likely than the ones corresponding to the curves around \(-90\). The interpretation is that all \(N_{\text{c}}\) chains of one starting value configuration explore exclusively the same single mode. Hence, the two modes are disconnected by regions of low probability, and thus the chains are unlikely to explore both modes [32]. In general, the sampled marginal-parameter distributions from Bayesian inference share the median and \(1\sigma\)-uncertainty very well with the solutions of the Monte Carlo maximum a posteriori approach. The multipole parameters \(\text{Re}(E_{0+})\) and \(\text{Im}(M_{1-})\) are shown as examples in Fig. 6 in place of the other multipole parameters, which show a similar behavior (see Ref. [72]). The reproduced data distributions can be seen in Fig. 7 and show an interesting behavior. For the unpolarized differential cross-section \(\sigma_{0}\) and the profile functions \(\hat{G},\hat{\Sigma}\) and \(\hat{E}\), the posterior predictive check looks reasonable, as it resembles the original data and does not show significant evidence of overfitting or systematic effects. However, the different distributions are hardly distinguishable. In contrast, the distributions for the different solutions for \(\check{T}\) and \(\check{F}\) can be clearly distinguished. It seems that certain outliers in the original polarization observable data facilitate the emergence of an ambiguity, because these outliers are not able to exclude either of the two mathematical descriptions. Thus, a remeasurement or a reanalysis of \(\check{T}\) or \(\check{F}\) could be able to resolve the remaining mathematical ambiguity. #### iv.1.2 \(E_{\gamma}^{\text{lab}}=[1050,1150,1250]\) MeV These energies do not show any mathematical ambiguities, i.e. there is exactly one group of chains. The maximum a posteriori approach and Bayesian inference yield again very similar results, see Ref. [72]. The posterior predictive checks for the profile functions look reasonable (no overfitting, no systematic effects visible), with the two exceptions \(\sigma_{0}\) and \(\tilde{\Sigma}\). At each of the three energy bins, the measured data for \(\sigma_{0}\) are systematically higher for \(\cos(\theta)>0\) than the model predictions would suggest, and for \(\tilde{\Sigma}\) it seems they do not resemble the original data points at all, see Fig. 7. It seems that the employed statistical model with truncation order \(\ell_{\rm max}=1\) is not able to reproduce the data points for all observables equally well.
This behavior could be explained by an emerging resonance in the energy range between 950 and 1050 MeV, which couples to a higher orbital angular momentum and contributes predominantly to \(\sigma_{0}\) and \(\tilde{\Sigma}\). The reaction of \(\eta\)-photoproduction acts as an isospin-filter due to the isospin conservation in the strong interaction; therefore, only \(N^{*}\) resonances are relevant for the following discussion. There are two \(N^{*}\) resonances which fulfill the conservation laws, couple to \(\ell_{\rm max}=2\) and lie within the required energy range (taking into account the Breit-Wigner width [2] of the resonances), namely \(N(1675)5/2^{-}\)[2] at \(E_{\gamma}^{\rm lab}\approx 1026\) MeV and \(N(1700)3/2^{-}\)[2] at \(E_{\gamma}^{\rm lab}\approx 1071\) MeV. There is also a resonance which opens up already at \(E_{\gamma}^{\rm lab}\approx 762\) MeV, namely \(N(1520)3/2^{-}\)[2]. However, this resonance has a \(\sim 10\) times smaller branching ratio to \(\eta N\)[2] than the two previously mentioned resonances, and the used data sets do not seem to provide the required sensitivity for it. As the applied theoretical model is not able to reproduce the used data, neither the predicted observable distributions nor the multipole or systematic parameters are shown here, but they can be found in the supplementary material Ref. [72]. ### Truncation order \(\ell_{\rm max}=2\) The numbers of warmup and post-warmup samples are increased to \(5\times 10^{4}\) each, for each of the six energies. The number of chains started at each solution, found via the Monte Carlo maximum a posteriori approach, is set to \(N_{\rm c}=10\). The corresponding convergence diagnostics, which are shown in Appendix F, Fig. 18, validate this choice, i.e. \(\hat{R}<1.01\) and a relative Monte Carlo standard error in the region of a few percent. The diagnostics for 750 MeV are fine, despite their slightly increased values, which are caused by the highly multimodal marginal parameter distribution. In general, the log posterior density distributions are less separated in comparison to \(\ell_{\rm max}=1\); an example is shown at the bottom of Fig. 5. There are several phenomena appearing at this truncation order, which are discussed in the following: #### iv.2.1 The convergence diagnostics, see Fig. 18, for the four energies \(E_{\gamma}^{\rm lab}=[950,1050,1150,1250]\) MeV look suspicious. In each case, one group of chains shows \(\hat{R}\)-values well above 1.01 and relative Monte Carlo standard errors of over 100%. This results from two modes separated in phase space by a small region of low probability, so that the Metropolis acceptance probability [32] for a transition between the two high-probability regions is quite small but nonzero. Hence, just a small number of chains is able to explore both marginal modes at once, which is the reason for the suspicious convergence diagnostics. For the case of 1050 MeV, the blue distribution corresponds to a cluster with just one group member. Hence, it is not possible to calculate an \(\hat{R}\)-value for this cluster. It is important to note that this behavior cannot be prevented, as it is inherently a random effect. As an example of how this phenomenon manifests within a parameter distribution, see the blue distributions of \({\rm Im}(M_{1+})\) and \({\rm Im}(M_{2+})\) in Fig. 8. Despite their convergence diagnostics, these types of distributions are shown within the multipole parameter and posterior predictive plots for illustrative purposes.
Figure 6: Parameter distributions for the multipoles \({\rm Im}(M_{1-})\) at \(E_{\gamma}^{\rm lab}=750\) MeV and \({\rm Re}(E_{0+})\) at \(E_{\gamma}^{\rm lab}=950\) MeV for a truncation order of \(\ell_{\rm max}=1\). The corresponding log posterior density and the solutions found via the Monte Carlo maximum a posteriori approach, as explained in Section VII.1, are shown as well. Further illustrations can be found in the supplementary material Ref. [72].

#### iv.2.2 The truncated partial-wave analysis with \(\ell_{\rm max}=2\) is able to reproduce the data points for all used profile functions at all energies, as can be seen in Fig. 9. Indeed, the used model is now able to describe the data points for \(\sigma_{0}\) and \(\tilde{\Sigma}\) for \(E_{\gamma}^{\rm lab}=[1050,1150,1250]\) MeV, in contrast to \(\ell_{\rm max}=1\). #### iv.2.3 As the two former points validate the Markov chain Monte Carlo sampling and the employed model, the multipole results can be studied. As is to be expected, in comparison to truncation order \(\ell_{\rm max}=1\), more ambiguities emerge. Their corresponding log posterior distributions, as well as their \(\chi^{2}/ndf\) values, have moved closer together in terms of their range of values. Thus, in some cases the most likely solution cannot be determined. Each marginal distribution, for all systematic parameters and all analyzed energy-bins, is exclusively unimodal. Examples are shown in Fig. 10, see also Ref. [72]. The solutions for \(E_{0+}\) and \(M_{2+}\) are shown as representative examples of the multipole parameters in Figs. 11 to 13. The solutions for all multipole parameters can be found in Ref. [72]. Typically, the peaks of the marginal distributions are in agreement with the first few 'best' a posteriori estimates. However, not every a posteriori solution has a corresponding peak within the marginal distributions. The reason could be twofold. On the one hand, the interpretation of a marginal distribution differs from that of a maximum a posteriori estimate. On the other hand, the reason might lie in the Hamiltonian Monte Carlo algorithm [31; 32], i.e. it is observed that some of the starting values are not in the direct vicinity of the 'typical set' [73] but adjust rather quickly. An example is shown in Fig. 3.

Figure 7: Posterior predictive check for the profile functions \(\sigma_{0},\tilde{G},\tilde{\Sigma},\tilde{E},\tilde{T}\) and \(\tilde{F}\), for truncation order \(\ell_{\rm max}=1\) and energy-bins \(E_{\gamma}^{\rm lab}=[750,850,950,1050,1150,1250]\) MeV. The reproduced data distributions for the different solutions are shown together with the original data with statistical uncertainties as black points. In addition, the corresponding values from EtaMAID2018 [82] (dashed line), BnGa-2019 [46] (dotted line) and JuBo-2022 [56] (dash-dotted line) are shown as well.

#### iv.2.4 Within Fig. 14, the solution clusters of the \(8\ell_{\rm max}-1\) multipole parameters are shown as a function of the photon energy \(E_{\gamma}^{\rm lab}\), together with the results of EtaMAID2018 [82], BnGa-2019 [46] and JuBo-2022 [56]. For a detailed comparison between the different solution clusters and their relevance relative to each other, the reader is referred to the tripartite multipole parameter figures in Figs. 11 to 13 and Ref. [72]. In general, the results of this paper are in good agreement with their results. However, for the multipole parameter \({\rm Re}(E_{0+})\) at 750 MeV, the results of EtaMAID2018, BnGa-2019 and JuBo-2022 are at a value of \(\sim 20\) mfm.
The data sets of \(\eta\)-photoproduction (see Section IV) used within this analysis do not emphasize such high values. While the marginal parameter distribution does indeed have a non-vanishing probability at \(\sim 19\) mfm, it favors values of \(\sim 8.5\) mfm. The strength of the multipole \(E_{0+}\) near the \(\eta p\) production threshold, as seen in EtaMAID2018, BnGa-2019 and JuBo-2022, comes from the dominant \(N(1535)1/2^{-}\) resonance, which couples to the \(S\)-wave \(E_{0+}\). The reason for this is probably the conceptual difference between a truncated partial-wave analysis and a full partial-wave analysis. On the one hand, this information could come from the fact that BnGa-2019 and JuBo-2022 are coupled-channel analyses, which include multiple different final states at the same time [83]. On the other hand, EtaMAID2018, BnGa-2019 and JuBo-2022 use the \(\pi N\) partial-wave amplitudes from SAID [83] as input, in which the \(N(1535)1/2^{-}\) resonance is included as well [84]. In contrast, the present analysis is not a coupled-channel analysis and does not use the SAID solutions. In addition, the partial-wave analyses use the full available data sets, which, especially for the differential cross-section, increases the amount of data in the fits by a large factor. However, a truncated partial-wave analysis is completely model-independent, but it is only performed at discrete points in energy and thus can only use data from selected energy bins. Furthermore, while a dominant \(S\)-wave \(E_{0+}\) leads, for example, to a nearly constant maximal allowed value of one for the observable \(E\) at all angles, the reverse conclusion is not always true. Thus, the observable \(E\) cannot distinguish between the \(S\)-wave \(E_{0+}\) and the \(P\)-wave \(M_{1-}\), as both can lead to these maximum values. As can be seen for example in our results for 750 MeV (see Fig. 14), the anticipated strength of \(E_{0+}\) has migrated to other multipoles, for example into \(M_{1-}\). Improved statistics or angular coverage of the involved data sets, or the use of additional observables in a future analysis, may shift the probability mass of the distribution of \({\rm Re}(E_{0+})\) at 750 MeV towards the values of the three above-mentioned partial-wave models. #### iv.2.5 In relation to truncation order \(\ell_{\rm max}=1\): if different groups of chains are present for a given energy, these have nearly identical reproduced data distributions, see Fig. 9. Hence, one has to look for additional observables which could resolve the ambiguities. The prediction of observables which were not utilized in this analysis can be used for this purpose. Specifically, one is looking for observables for which the different groups of chains give distinct predictions, i.e. unique functional behaviors over the \(\cos(\theta)\)-range. The utilization of such an observable could resolve the remaining ambiguities. Promising candidates for future measurements of polarization observables are listed in Table 3 and shown in Fig. 15. In particular, the polarization observable \(C_{z^{\prime}}\) seems suitable to reduce the ambiguities at all six energy-bins.

Figure 8: Examples for a group of chains with inadequate Markov chain Monte Carlo convergence diagnostics, i.e. the orange distribution (left) and the cyan distribution (right). Compare this with the corresponding diagnostics in Fig. 18. The different parts of the tripartite plots are explained at the beginning of Section VIII.
Further illustrations can be found in the supplementary material Ref. [72]. ### Truncation order \(\ell_{\rm max}>2\) In general, the truncation order should be as high as possible, as lower partial waves interfere with higher ones and can create non-negligible contributions. That said, as the truncation order increases, so does the number of accidental ambiguities, e.g. 43 posterior modes were found for \(\ell_{\rm max}=3\) and 1250 MeV. This leads to a numerically demanding situation for reaching the targeted Markov chain Monte Carlo convergence diagnostics. Due to the required, huge number of chains, the visual check of the clustering becomes challenging as well. Furthermore, the theoretical model with truncation order \(\ell_{\rm max}=2\) is able to describe the data very well, see the reproduced data distributions in Fig. 9. The statistical quality of the used data does not allow one to see any \(F\)-wave contributions, for example from the \(N(1680)5/2^{+}\)[2] resonance at \(E_{\gamma}^{\rm lab}\approx 1035\) MeV. Due to the mentioned points, the truncation orders \(\ell_{\rm max}>2\) are part of further research and are not shown within this paper.

\begin{table} \begin{tabular}{c l} \(E_{\gamma}^{\rm lab}\) / MeV & Observables \\ 750 & \(C_{x^{\prime}},C_{z^{\prime}},L_{x^{\prime}},L_{z^{\prime}}\) \\ 850 & \(C_{x^{\prime}},C_{z^{\prime}},L_{x^{\prime}},L_{x^{\prime}},T_{x^{\prime}},T_{x^{\prime}}\) \\ 950 & \(C_{x^{\prime}},C_{z^{\prime}},L_{x^{\prime}},L_{x^{\prime}},T_{x^{\prime}}\) \\ 1050 & \(C_{x^{\prime}},C_{z^{\prime}},L_{x^{\prime}},O_{x^{\prime}},T_{z^{\prime}}\) \\ 1150 & \(C_{z^{\prime}},O_{x^{\prime}},T_{x^{\prime}},T_{z^{\prime}}\) \\ 1250 & \(C_{z^{\prime}}\) \\ \end{tabular} \end{table} Table 3: Promising polarization observable candidates to resolve the ambiguities for truncation order \(\ell_{\rm max}=2\). The corresponding predicted data distributions are shown in Fig. 15.

Figure 9: Posterior predictive check for the profile functions \(\sigma_{0},\tilde{G},\tilde{\Sigma},\tilde{E},\tilde{T}\) and \(\tilde{F}\), for truncation order \(\ell_{\rm max}=2\) and energy-bins \(E_{\gamma}^{\rm lab}=[750,850,950,1050,1150,1250]\) MeV. The reproduced data distributions for the different solutions are shown together with the original data with statistical uncertainties as black points. In addition, the corresponding values from EtaMAID2018 [82] (dashed line), BnGa-2019 [46] (dotted line) and JuBo-2022 [56] (dash-dotted line) are shown as well.

## IX Summary and outlook For the first time, Bayesian statistics has been applied to truncated partial-wave analysis. The analysis was performed with the six polarization observables \(\sigma_{0},\Sigma,T,E,F\) and \(G\) of \(\eta\)-photoproduction for the energy bins \(E_{\gamma}^{\rm lab}=[750,850,950,1050,1150,1250]\) MeV and different truncation orders.

Figure 10: Examples for solutions of systematic parameters for a truncation order of \(\ell_{\rm max}=2\), for the energy-bin \(E_{\gamma}^{\rm lab}=950\) MeV.

Hereby, highly multimodal posterior distributions were encountered, which enforced adaptations for monitoring the Markov chain Monte Carlo convergence diagnostics. The analysis itself consisted of a nonlinear model which considered correlations between data points as well as systematic uncertainties. The used data show clear \(D\)-wave contributions above \(E_{\gamma}^{\rm lab}=950\) MeV, but are not sensitive to \(F\)-wave or higher partial-wave contributions.
The results were marginal distributions as well as a posteriori estimates of the electromagnetic multipole parameters.

Figure 12: Solutions of the multipole parameter \(\mathrm{Re}(M_{2+})\) for a truncation order of \(\ell_{\rm max}=2\), for the energy-bins \(E_{\gamma}^{\rm lab}=[750,850,950,1050,1150,1250]\) MeV. The different parts of the tripartite plots are explained at the beginning of Section VIII.

Figure 13: Solutions of the multipole parameter \(\mathrm{Im}(M_{2+})\) for a truncation order of \(\ell_{\rm max}=2\), for the energy-bins \(E_{\gamma}^{\rm lab}=[750,850,950,1050,1150,1250]\) MeV. The different parts of the tripartite plots are explained at the beginning of Section VIII.

Figure 14: Marginal multipole solutions for the truncation order \(\ell_{\rm max}=2\) for the energy-bins \(E_{\gamma}^{\rm lab}=[750,850,950,1050,1150,1250]\) MeV. In addition, the multipole parameter predictions from EtaMAID2018 [82] (dashed line), BnGa-2019 [46] (dotted line) and JüBo-2022 [56] (dash-dotted line) are shown as well. The relevance of a solution is represented by a transition from sienna (less relevant) to blue (more relevant) hues. However, for a detailed comparison between the solutions and their relevance to each other, the reader is advised to the tripartite multipole parameter figures in Figs. 11 to 13 and Ref. [72].

Figure 15: Predicted data distributions for the polarization observables \(C_{x^{\prime}},H,T_{x^{\prime}},O_{x^{\prime}},L_{x^{\prime}},P,T_{x^{\prime}},O_{x^{\prime}},L_{x^{\prime}}\) and \(C_{x^{\prime}}\) for the energy-bins \(E_{\gamma}^{\rm lab}=[750,850,950,1050,1150,1250]\) MeV, using a truncation order of \(\ell_{\rm max}=2\). In addition, the corresponding values from EtaMAID2018 [82] (dashed line), BnGa-2019 [46] (dotted line) and JuBo-2022 [56] (dash-dotted line) are shown as well.

Despite the fact that truncated partial-wave analysis is a simpler, yet model-independent, approach and used significantly less data than a full partial-wave analysis, the results of this paper are in good agreement with the results of EtaMAID2018, BnGa-2019 and JuBo-2022. In addition, predictions for the polarization observables \(H\), \(P\) as well as of the groups \(\mathcal{BR}\) and \(\mathcal{TR}\) were given, which include previously unmeasured polarization observables. Based on this, promising future measurements could be identified in order to minimize the remaining ambiguities. In a future study, the role of the prior distribution with regard to resolving the mathematical ambiguities could be investigated. For even more challenging phase spaces, where the Markov chain Monte Carlo starting values from the maximum a posteriori approach are not in the vicinity of the 'typical set', one might use the expectation-maximization algorithm to determine the marginal posterior modes and use these as starting values. ###### Acknowledgements. The authors would like to thank Prof. Dr. Sebastian Neubert, Prof. Dr. Carsten Urbach, and Dr. Jan Hartmann for several fruitful discussions. Furthermore, special thanks go to Prof. Dr. Reinhard Beck for his support. ## Appendix A Discrete ambiguities of the analyzed set of six polarization observables Within this appendix, the discrete partial-wave ambiguities of the six observables \(\left\{\sigma_{0},\dot{\Sigma},\dot{T},\dot{F},\dot{G},\dot{E}\right\}\) analyzed within this work (cf. Section IV and Table 2) are discussed. It is argued that this specific set is mathematically complete in a truncated partial-wave analysis.
As has been demonstrated already in other works (e.g. Ref. [18]), such mathematical considerations can still serve as a useful precursor to analyses of real data. The following discussion is based on the 'Omelaenko formalism' [26]. The basic definitions of the sixteen observables in pseudoscalar meson photoproduction, expressed in the transversity basis, are used. The expressions are collected in Table 4. ### Discrete ambiguities of the group \(\mathcal{S}\) observables in truncated partial-wave analysis As is well-known from Omelaenko's work, in the case of a truncated partial wave analysis with maximum angular momentum \(\ell_{\max}\), the four transversity amplitudes can be expressed in terms of linear factorizations: \[b_{1}\left(\theta\right) =-\mathcal{C}\,a_{2L}\,\frac{\exp\left(-i\frac{\theta}{2}\right) }{\left(1+t^{2}\right)^{L}}\,\prod_{k=1}^{2L}\left(t+\beta_{k}\right), \tag{101}\] \[b_{2}\left(\theta\right) =-\mathcal{C}\,a_{2L}\,\frac{\exp\left(i\frac{\theta}{2}\right) }{\left(1+t^{2}\right)^{L}}\,\prod_{k=1}^{2L}\left(t-\beta_{k}\right),\] (102) \[b_{3}\left(\theta\right) =\mathcal{C}\,a_{2L}\,\frac{\exp\left(-i\frac{\theta}{2}\right) }{\left(1+t^{2}\right)^{L}}\,\prod_{k=1}^{2L}\left(t+\alpha_{k}\right),\] (103) \[b_{4}\left(\theta\right) =\mathcal{C}\,a_{2L}\,\frac{\exp\left(i\frac{\theta}{2}\right) }{\left(1+t^{2}\right)^{L}}\,\prod_{k=1}^{2L}\left(t-\alpha_{k}\right), \tag{104}\] where \(t=\tan\frac{\theta}{2}\) (with the center-of-mass scattering angle \(\theta\)) and \(\{\alpha_{k},\beta_{k}\}\) are the Gersten/Omelaenko-roots which are, in essence, equivalent to multipoles. Furthermore, all permissible solutions have to satisfy Omelaenko's constraint, i.e. Eq. (10). The solution theory for the case where all four group \(\mathcal{S}\) observables have been selected, and thus only ambiguities of the four moduli \(\left|b_{1}\right|,\left|b_{2}\right|,\left|b_{3}\right|,\left|b_{4}\right|\) have to be considered, has been worked out at length in Ref. [18]. This solution theory leads to the known complete sets of five (e.g.: \(\left\{\sigma_{0},\dot{\Sigma},\dot{T},\dot{P},\dot{F}\right\}\)). In the following subsection, the special case where less than four diagonal observables are selected is considered. 
\begin{table} \begin{tabular}{l c c} Observable & Transversity-representation / \(\rho\) & Type \\ \hline \(\dot{\Omega}^{1}=\sigma_{0}\) & \(\frac{1}{2}\left(\left|b_{1}\right|^{2}+\left|b_{2}\right|^{2}+\left|b_{3} \right|^{2}+\left|b_{4}\right|^{2}\right)\) & \\ \(\dot{\Omega}^{4}=-\dot{\Sigma}\) & \(\frac{1}{2}\left(\left|b_{1}\right|^{2}+\left|b_{2}\right|^{2}-\left|b_{3} \right|^{2}-\left|b_{4}\right|^{2}\right)\) & \(\mathcal{S}\) \\ \(\dot{\Omega}^{10}=-\dot{T}\) & \(\frac{1}{2}\left(-\left|b_{1}\right|^{2}+\left|b_{2}\right|^{2}+\left|b_{3} \right|^{2}-\left|b_{4}\right|^{2}\right)\) & \\ \(\dot{\Omega}^{12}=\dot{P}\) & \(\frac{1}{2}\left(-\left|b_{1}\right|^{2}+\left|b_{2}\right|^{2}-\left|b_{3} \right|^{2}+\left|b_{4}\right|^{2}\right)\) & \\ \(\dot{\Omega}^{3}=\dot{G}\) & \(\mathrm{Im}\left[-b_{1}b_{3}^{*}-b_{2}b_{4}^{*}\right]\) & \(\mathcal{BT}\) \\ \(\dot{\Omega}^{5}=\ddot{H}\) & \(\mathrm{Re}\left[b_{1}b_{3}^{*}-b_{2}b_{4}^{*}\right]\) & \(\mathcal{BT}\) \\ \(\dot{\Omega}^{9}=-\dot{E}\) & \(\mathrm{Re}\left[b_{1}b_{3}^{*}+b_{2}b_{4}^{*}\right]\) & \\ \(\dot{\Omega}^{11}=\dot{F}\) & \(\mathrm{Im}\left[b_{1}b_{3}^{*}-b_{2}b_{4}^{*}\right]\) & \\ \(\dot{\Omega}^{14}=\dot{O}_{x^{\prime}}\) & \(\mathrm{Re}\left[-b_{1}b_{4}^{*}+b_{2}b_{3}^{*}\right]\) & \\ \(\dot{\Omega}^{7}=-\dot{O}_{x^{\prime}}\) & \(\mathrm{Im}\left[-b_{1}b_{4}^{*}-b_{2}b_{3}^{*}\right]\) & \(\mathcal{BR}\) \\ \(\dot{\Omega}^{16}=-\dot{C}_{x^{\prime}}\) & \(\mathrm{Im}\left[b_{1}b_{4}^{*}-b_{2}b_{3}^{*}\right]\) & \\ \(\dot{\Omega}^{2}=-\dot{C}_{x^{\prime}}\) & \(\mathrm{Re}\left[b_{1}b_{4}^{*}+b_{2}b_{3}^{*}\right]\) & \\ \(\dot{\Omega}^{6}=-\dot{T}_{x^{\prime}}\) & \(\mathrm{Re}\left[-b_{1}b_{2}^{*}+b_{3}b_{4}^{*}\right]\) & \\ \(\dot{\Omega}^{13}=-\dot{T}_{x^{\prime}}\) & \(\mathrm{Im}\left[b_{1}b_{2}^{*}-b_{3}b_{4}^{*}\right]\) & \(\mathcal{TR}\) \\ \(\dot{\Omega}^{8}=\dot{L}_{x^{\prime}}\) & \(\mathrm{Im}\left[-b_{1}b_{2}^{*}-b_{3}b_{4}^{*}\right]\) & \\ \(\dot{\Omega}^{15}=\dot{L}_{x^{\prime}}\) & \(\mathrm{Re}\left[-b_{1}b_{2}^{*}-b_{3}b_{4}^{*}\right]\) & \\ \end{tabular} \end{table} Table 4: The definition of the sixteen polarization observables in terms of transversity amplitudes \(b_{i}\) are displayed. The table is adapted from [21]. The definition of the observables in terms of the required polarization configurations can be found in Table 1. ### Discrete ambiguities of the three group \(\mathcal{S}\) observables \(\{\sigma_{0},\Sigma,T\}\) The set of observables, used within this work, contains only three simultaneously diagonalized observables \((\sigma_{0},\check{\Sigma},\check{T}\), see Table 4). Therefore, one has to investigate which kinds of discrete ambiguities are allowed by this set of three observables, using the root-formalism described in Appendix A.1. For this purpose, one can look at the'minimal' linear combinations of squared moduli: \[\sigma_{0}-\check{\Sigma}=2\left(\left|b_{1}\right|^{2}+\left|b_{ 2}\right|^{2}\right), \tag{100}\] \[\sigma_{0}+\check{\Sigma}=2\left(\left|b_{3}\right|^{2}+\left|b_ {4}\right|^{2}\right),\] (101) \[\sigma_{0}+\check{T}=2\left(\left|b_{1}\right|^{2}+\left|b_{4} \right|^{2}\right),\] (102) \[\sigma_{0}-\check{T}=2\left(\left|b_{2}\right|^{2}+\left|b_{3} \right|^{2}\right),\] (103) \[-\check{\Sigma}+\check{T}=2\left(\left|b_{1}\right|^{2}-\left|b_ {3}\right|^{2}\right),\] (104) \[-\check{\Sigma}-\check{T}=2\left(\left|b_{2}\right|^{2}-\left|b_ {4}\right|^{2}\right). 
\tag{105}\] Upon reducing the problem to the non-redundant amplitudes \(b_{2}\) and \(b_{4}\) in the truncated partial-wave analysis (by using \(b_{4}(W,\theta)=b_{3}(W,-\theta)\) and \(b_{2}(W,\theta)=b_{1}(W,-\theta)\), cf. Eqs. (6) to (9)), one obtains: \[\sigma_{0}-\check{\Sigma}\propto\prod_{k=1}^{2\ell_{\text{max}}} (t+\alpha_{k}^{*})(t+\alpha_{k})+\prod_{k=1}^{2\ell_{\text{max}}}(t-\alpha_{k }^{*})(t-\alpha_{k}), \tag{106}\] \[\sigma_{0}+\check{\Sigma}\propto\prod_{k=1}^{2\ell_{\text{max}} }(t+\beta_{k}^{*})(t+\beta_{k})+\prod_{k=1}^{2\ell_{\text{max}}}(t-\beta_{k}^ {*})(t-\beta_{k}),\] (107) \[\sigma_{0}+\check{T}\propto\prod_{k=1}^{2\ell_{\text{max}}}(t+ \alpha_{k}^{*})(t+\alpha_{k})+\prod_{k=1}^{2\ell_{\text{max}}}(t-\beta_{k}^{*} )(t-\beta_{k}),\] (108) \[\sigma_{0}-\check{T}\propto\prod_{k=1}^{2\ell_{\text{max}}}(t- \alpha_{k}^{*})(t-\alpha_{k})+\prod_{k=1}^{2\ell_{\text{max}}}(t+\beta_{k}^{*} )(t+\beta_{k}),\] (109) \[-\check{\Sigma}+\check{T}\propto\prod_{k=1}^{2\ell_{\text{max}}} (t+\alpha_{k}^{*})(t+\alpha_{k})-\prod_{k=1}^{2\ell_{\text{max}}}(t+\beta_{k}^ {*})(t+\beta_{k}),\] (110) \[-\check{\Sigma}-\check{T}\propto\prod_{k=1}^{2\ell_{\text{max}} }(t-\alpha_{k}^{*})(t-\alpha_{k})-\prod_{k=1}^{2\ell_{\text{max}}}(t-\beta_{k} ^{*})(t-\beta_{k}). \tag{111}\] The problem is now to find out which kinds of discrete ambiguity-transformations, when applied to the roots \(\{\alpha_{k},\beta_{k}\}\), leave the full set of quantities Eqs. (106) to (109) invariant, while also satisfying the multiplicative constraint Eq. (10). The first set of transformations which comes to mind is given by the well-known double ambiguity: \[\alpha_{k}\to\alpha_{k}^{*}\,\text{ and }\,\beta_{k}\to\beta_{k}^{*}\,\,\, \forall k=1,\ldots,2\ell_{max}. \tag{112}\] But other transformations may also be possible in addition, since the observable \(\check{P}\) is missing from the full diagonalizable set \(\left\{\sigma_{0},\check{\Sigma},\check{T},\check{P}\right\}\). Ideas that one would have to test are for instance exchange symmetries \[\alpha_{k}\to\beta_{k}\,\text{ and }\,\beta_{k}\to\alpha_{k}\,\,\,\forall k=1, \ldots,2\ell_{max}, \tag{113}\] sign-changes \[\alpha_{k}\to-\alpha_{k}\,\text{ and }\,\beta_{k}\to-\beta_{k}\,\,\,\forall k=1, \ldots,2\ell_{max}, \tag{114}\] or combinations of both \[\alpha_{k}\to-\alpha_{k}^{*}\,\text{ and }\,\beta_{k}\to-\beta_{k}^{*}\,\,\, \forall k=1,\ldots,2\ell_{max}. \tag{115}\] All of these ideas indeed do not violate the constraint Eq. (10). In case any such additional symmetry of the quantities Eqs. (106) to (111) were found, the next step would be to test which of the remaining three observables \(\{F,G,E\}\) resolves the symmetry. Neither of the proposed symmetries Eqs. (113) to (115) leaves all the six quantities Eqs. (106) to (111) invariant. It remains to be asked whether such additional symmetries actually exist. In case they do not exist, the discussion would be simplified significantly (since \(\check{F}\) and \(\check{G}\) in this case already resolve the double ambiguity Eq. (112)). Due to information-theoretical reasons, it only seems permissible to simultaneously use three of the quantities from Eqs. (106) to (111), i.e. to use three new quantities obtained via invertible and linear transformations from the three diagonal, initial observables \(\left\{\sigma_{0},\check{\Sigma},\check{T}\right\}\). As an example, one can select the three quantities given by Eqs. (106) to (108). 
The full set of discrete ambiguity-transformations, which, when applied to the roots \(\{\alpha_{k},\beta_{k}\}\), leaves Eqs. (106) and (107) invariant while maintaining the constraint in Eq. (10), is given by the two transformations in Eqs. (112) and (114). Under the exchange symmetry Eq. (113), Eqs. (106) and (107) are transformed into each other and thus are not invariant. Now considering additionally the quantity in Eq. (108), one can see that while the transformation Eq. (112) leaves this quantity invariant, transformation Eq. (114) does not. This only leaves one possible conclusion, namely that also for the case of only three diagonal observables \(\left\{\sigma_{0},\check{\Sigma},\check{T}\right\}\), or equivalently the three new quantities in Eqs. (106) to (108), the double ambiguity is the only relevant discrete ambiguity of the problem15. The argument given above can be repeated for any other case where a combination of three quantities from the six definitions Eqs. (106) to (111) is taken as a starting point. Going through the other starting combinations is not necessary for the proof, since this would only give redundant derivations with the same outcome.

### Completeness of the set \(\{\sigma_{0},\check{\Sigma},\check{T},\check{F},\check{G},\check{E}\}\)

It has already been shown in Refs. [17; 18] that the observables \(\check{F}\) and \(\check{G}\) change sign under the double-ambiguity transformation. All the arguments made up to this point prove that the set \(\left\{\sigma_{0},\check{\Sigma},\check{T},\check{F},\check{G},\check{E}\right\}\) is free of discrete ambiguities in the truncated partial-wave analysis. Assuming furthermore that this set of six observables has no continuous ambiguities, the set is complete.

## Appendix B Covered phase space of the used data

The phase-space coverages of the used polarization observable data are illustrated in Fig. 16. For a detailed description of the data see Section IV and Table 2. The vertical orange lines correspond to the energy bins of the statistically weakest polarization observable \(G\) and indicate by which amount the data set of another observable has to be shifted to match these energies.

## Appendix C On the correlation of profile functions

The correlation of two random variables \(X\) and \(Y\) can be calculated using the Pearson correlation coefficient defined as [48]: \[\mathrm{Corr}(X,Y)=\frac{\mathrm{Cov}(X,Y)}{\sqrt{\mathrm{Var}[X]\mathrm{Var}[Y]}}, \tag{18}\] with their respective variances \(\mathrm{Var}\) and the covariance \(\mathrm{Cov}\) between the two random variables. Under the assumption that the dimensionless observables do not have any correlation with each other, the covariance of the unpolarized differential cross-section \(\sigma_{0}(W,\theta)\) (denoted as \(X\)) and a profile function (denoted as \(Y^{\prime}=XY\), as \(\sigma_{0}(W,\theta)\) was used to calculate the profile function) is: \[\mathrm{Cov}(X,Y^{\prime}) =\mathrm{E}[XXY]-\mathrm{E}[X]\cdot\mathrm{E}[XY],\] \[=\left(\mathrm{E}[X^{2}]-\mathrm{E}[X]^{2}\right)\cdot\mathrm{E}[Y],\] \[=\mathrm{Var}[X]\cdot\mathrm{E}[Y]. \tag{19}\] And similarly for the covariance of one profile function (denoted as \(Y^{\prime}=XY\)) with another (denoted as \(Z^{\prime}=XZ\)): \[\mathrm{Cov}(Y^{\prime},Z^{\prime}) =\mathrm{E}[XYXZ]-\mathrm{E}[XY]\cdot\mathrm{E}[XZ],\] \[=\left(\mathrm{E}[X^{2}]-\mathrm{E}[X]^{2}\right)\cdot\mathrm{E}[Y]\cdot\mathrm{E}[Z],\] \[=\mathrm{Var}[X]\cdot\mathrm{E}[Y]\cdot\mathrm{E}[Z]. \tag{20}\] Substituting Eq. (19) and Eq.
(20) respectively, into Eq. (18) the correlation for both cases is: \[\mathrm{Corr}(X,Y^{\prime}) =\sqrt{\frac{\mathrm{Var}[X]}{\mathrm{Var}[Y^{\prime}]}}\cdot \mathrm{E}[Y], \tag{21}\] \[\mathrm{Corr}(Y^{\prime},Z^{\prime}) =\frac{\mathrm{Var}[X]}{\sqrt{\mathrm{Var}[Y^{\prime}]\cdot \mathrm{Var}[Z^{\prime}]}}\cdot\mathrm{E}[Y]\cdot\mathrm{E}[Z]. \tag{22}\] ## Appendix D Probability distributions for the quotient and product of two Gaussian random-variables Assuming the original observables to follow a Gaussian probability distribution up to a very good approximation, the result of forming the quotient and/or product is generally non-Gaussian. This appendix collects some basic facts about the quotient- and the product-distribution and considers some limiting cases. ### The quotient distribution: \(Z:=X/Y\) Given are two independent, uncorrelated, Gaussian distributed random variables \(X\) and \(Y\): \[X\sim\mathcal{N}\left(\mu_{X},\sigma_{X}\right),\ Y\sim\mathcal{N}\left(\mu_{Y },\sigma_{Y}\right), \tag{23}\] Figure 16: Energy and angular coverage of the six observables \(\sigma_{0},\Sigma,G,E,T\) and \(F\)[42; 43; 44; 45; 46] which were used for the analysis. The used energies \(E_{\gamma}^{\mathrm{lab}}=[750,850,950,1050,1150,1250]\) MeV are determined by the observable \(G\). together with the integral defining the probability distribution function of the quotient variable \(Z:=X/Y\)[85]: \[\mathrm{P}_{X/Y}(u) =\int_{-\infty}^{\infty}\mathrm{d}x\int_{-\infty}^{\infty}\mathrm{ d}y\,\delta\left(\frac{x}{y}-u\right)\] \[\times\frac{\exp\left[-\frac{1}{2}\Big{(}\frac{(x-\mu_{X})^{2}}{ \sigma_{X}^{2}}+\frac{(y-\mu_{Y})^{2}}{\sigma_{Y}^{2}}\Big{)}\right]}{2\pi \sigma_{X}\sigma_{Y}}\] \[=\int_{-\infty}^{\infty}\mathrm{d}y\,|y|\,\frac{\exp\left[-\frac{ 1}{2}\Big{(}\frac{(uy-\mu_{X})^{2}}{\sigma_{X}^{2}}+\frac{(y-\mu_{Y})^{2}}{ \sigma_{Y}^{2}}\Big{)}\right]}{2\pi\sigma_{X}\sigma_{Y}}. \tag{106}\] Mathematica yields the following result (for positive values of \(\sigma_{X}\) and \(\sigma_{Y}\)): \[\mathrm{P}_{X/Y}(u) =f_{1}(u)\cdot\Big{[}\sqrt{2}\;f_{2}(u)\;c\] \[+\sqrt{\pi}\;f_{3}(u)\;\mathrm{erf}(f_{4}(u))\exp\left(f_{4}(u)^ {2}\right)\big{]}\,, \tag{107}\] with the declarations: \[f_{1}(u) :=\frac{\exp\Bigl{(}-\frac{1}{2}\left(\frac{\mu_{X}^{2}}{\sigma_{ X}^{2}}+\frac{\mu_{Y}^{2}}{\sigma_{Y}^{2}}\right)\Bigr{)}}{\sqrt{2}\pi f_{2}(u)^{3}}, \tag{108}\] \[f_{2}(u) :=\sqrt{\sigma_{X}^{2}+\sigma_{Y}^{2}u^{2}},\] (109) \[f_{3}(u) :=\mu_{Y}\sigma_{X}^{2}+\mu_{X}\sigma_{Y}^{2}u,\] (110) \[f_{4}(u) :=\frac{f_{3}(u)}{\sqrt{2}\;f_{2}(u)\;c},\] (111) \[c :=\sigma_{X}\sigma_{Y} \tag{112}\] and the error function 'erf' [86]. In the following, two limiting cases for Eq. (107) are analyzed: first, the vanishing of the expectation values (i.e. \(\mu_{X}=\mu_{Y}=0\)): \[\mathrm{P}_{X/Y}(u)=\frac{\sigma_{X}\sigma_{Y}}{\pi(\sigma_{X}^{2}+\sigma_{Y}^ {2}u^{2})}. \tag{113}\] This is a result which is known from earlier publications on the quotient distribution, for instance [87]. Second, considering also unit standard-deviations (i.e. \(\sigma_{X}=\sigma_{Y}=1\)) the result Eq. (113) further simplifies to: \[\mathrm{P}_{X/Y}(u)=\frac{1}{\pi\left(1+u^{2}\right)}. \tag{114}\] This is the well-known Cauchy distribution. ### The product distribution: \(Z:=Xy\) Similar to Eq. 
(106) the probability-distribution function for the product of two independent, uncorrelated Gaussian distributed random variables can be written [88]: \[\mathrm{P}_{XY}(u) =\int_{-\infty}^{\infty}\mathrm{d}x\int_{-\infty}^{\infty} \mathrm{d}y\,\delta(xy-u)\] \[\times\frac{\exp\left[-\frac{1}{2}\Big{(}\frac{(x-\mu_{X})^{2}}{ \sigma_{X}^{2}}+\frac{(y-\mu_{Y})^{2}}{\sigma_{Y}^{2}}\Big{)}\right]}{2\pi \sigma_{X}\sigma_{Y}}. \tag{115}\] By introducing an integral-representation for the \(\delta\)-function \[\delta(xy-u)=\int_{-\infty}^{+\infty}\frac{\mathrm{d}k}{2\pi}e^{ik(xy-u)}= \int_{-\infty}^{+\infty}\frac{\mathrm{d}k}{2\pi}e^{ikxy}e^{-iku}, \tag{116}\] one can bring Eq. (115) into the following form: \[\mathrm{P}_{XY}(u)=\int_{-\infty}^{+\infty}\frac{\mathrm{d}k}{2\pi}e^{-iku}F_ {k}[\mu_{X},\sigma_{X};\mu_{Y},\sigma_{Y}], \tag{117}\] where \[F_{k}[\mu_{X},\sigma_{X};\mu_{Y},\sigma_{Y}]=\int_{-\infty}^{ \infty}\mathrm{d}x\int_{-\infty}^{\infty}\mathrm{d}y\,e^{ikxy}\] \[\times\frac{\exp\left[-\frac{1}{2}\Big{(}\frac{(x-\mu_{X})^{2}}{ \sigma_{X}^{2}}+\frac{(y-\mu_{Y})^{2}}{\sigma_{Y}^{2}}\Big{)}\right]}{2\pi \sigma_{X}\sigma_{Y}}. \tag{118}\] This characteristic function can be solved analytically: \[F_{k}[\mu_{X},\sigma_{X};\mu_{Y},\sigma_{Y}]\] \[=\int_{-\infty}^{\infty}\mathrm{d}y\exp\left[-\frac{1}{2}ky \big{(}k\sigma_{X}^{2}y-2i\mu_{X}\big{)}\right]\frac{\exp\left[-\frac{(y-\mu_{Y} )^{2}}{2\sigma_{Y}^{2}}\right]}{\sqrt{2\pi}\sigma_{Y}}\] \[=\frac{\exp\left[-\frac{k\left(k\mu_{Y}^{2}\sigma_{X}^{2}+k\mu_{X }^{2}\sigma_{Y}^{2}-2i\mu_{X}\mu_{Y}\right)}{2+2k^{2}\sigma_{X}^{2}\sigma_{Y}^ {2}}\right]}{\sqrt{1+k^{2}\sigma_{X}^{2}\sigma_{Y}^{2}}}. \tag{119}\] The final result has the shape of a Fourier integral: \[\mathrm{P}_{XY}(u) =\int_{-\infty}^{+\infty}\frac{dk}{2\pi}\exp\left[-iku\right]\] \[\times\frac{\exp\left[-\frac{k\left(k\mu_{Y}^{2}\sigma_{X}^{2}+k \mu_{X}^{2}\sigma_{Y}^{2}-2i\mu_{X}\mu_{Y}\right)}{2+2k^{2}\sigma_{X}^{2}\sigma_{ Y}^{2}}\right]}{\sqrt{1+k^{2}\sigma_{X}^{2}\sigma_{Y}^{2}}}. \tag{120}\] In analogy to the quotient distribution, the limiting case \(\mu_{X}=\mu_{Y}=0\) shall be analyzed. The Fourier coefficients become: \[F_{k}\left[0,\sigma_{X};0,\sigma_{Y}\right]=\frac{1}{\sqrt{1+k^{2}\sigma_{X}^{2} \sigma_{Y}^{2}}}. \tag{121}\] The result for the product distribution can in this case be written with a modified Bessel function of the second kind \(K_{n}(z)\): \[\mathrm{P}_{XY}(u)=\frac{K_{0}\left(\frac{|u|}{\sigma_{X}\sigma_{Y}}\right)}{ \pi\sigma_{X}\sigma_{Y}}. \tag{122}\] This is the analogue of Eq. (113) from the case of the quotient distribution. For unit standard deviations, Eq. (122) becomes simply \(K_{0}(|u|)/\pi\), which is the analogue of Eq. (114). For the product distribution, especially one limiting case is of interest for this paper, namely where the standard deviation of one random variable almost vanishes (i.e. \(\sigma_{Y}\to 0\)). The characteristic function becomes: \[\lim_{\sigma_{Y}\to 0}F_{k}=\exp{\left(-\frac{k\big{(}k\mu_{Y}^{2}\sigma_{X}^{2}-2i \mu_{X}\mu_{Y}\big{)}}{2}\right)}. \tag{119}\] Substituting Eq. (119) into Eq. (116) and solving the integral gives the result: \[\mathrm{P}_{XY}(u)=\frac{\exp{\left(-\frac{(u-\mu_{XY})^{2}}{2\mu_{Y}^{2} \sigma_{X}^{2}}\right)}}{\sqrt{2\pi}|\mu_{Y}||\sigma_{X}|}, \tag{120}\] which is indeed a Gaussian probability distribution function. This result is used in Section V. 
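The two limiting cases derived above lend themselves to a quick numerical cross-check. The sketch below (Python with NumPy/SciPy; not part of the original analysis, with an arbitrary sample size and grid) draws standard-normal samples and compares the empirical densities of \(X/Y\) and \(XY\) with the Cauchy density of Eq. (114) and the \(K_{0}(|u|)/\pi\) limit of Eq. (122).

```python
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(1)
x, y = rng.standard_normal((2, 1_000_000))

edges = np.linspace(-4.0, 4.0, 161)
centers = 0.5 * (edges[:-1] + edges[1:])
width = np.diff(edges)

# Empirical densities, normalized by the *total* number of samples so that the
# heavy Cauchy tails outside the plotting window do not distort the comparison.
quot_hist = np.histogram(x / y, bins=edges)[0] / (x.size * width)
prod_hist = np.histogram(x * y, bins=edges)[0] / (x.size * width)

cauchy = 1.0 / (np.pi * (1.0 + centers**2))   # Eq. (114)
bessel = k0(np.abs(centers)) / np.pi          # unit-sigma limit of Eq. (122)

print("quotient vs Cauchy:", np.max(np.abs(quot_hist - cauchy)))
mask = np.abs(centers) > 0.5                  # K_0 has an integrable log-singularity at u = 0
print("product vs K0/pi  :", np.max(np.abs(prod_hist[mask] - bessel[mask])))
```

Both printed deviations are at the level of the Monte Carlo noise, confirming the analytic limiting forms away from the logarithmic singularity of the product distribution at \(u=0\).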
## Appendix E Predictive performance To allow for the comparison of different models (only possible when each model uses the same data points), a certain measure of model performance is needed. A valid, yet intuitive, choice is its predictive accuracy [36] of a future data point \(\tilde{y}_{i}\). This is also known under the name of 'expected out-of-sample log predictive density' [28] or 'Bayes generalization loss' [37] and it is defined as [28; 36; 37]: \[\mathcal{BGL}:= -\mathbb{E}_{\tilde{y}}[\log\mathbb{E}_{\mathbf{\Theta}}[p(\tilde{y} \mid\mathbf{\Theta})]],\] \[= -\int p_{t}(\tilde{y})\log p(\tilde{y}|\mathbf{y})\,\mathrm{d}\tilde {y}\,, \tag{121}\] with the true probability density function \(p_{t}(\tilde{y})\) of the future data [36]. The quantity, defined in Eq. (121) uses a local, proper utility function, as recommended by [89]. However, in a regression problem the quantity \(p_{t}(\tilde{y})\) is not known a priori, indeed the overall goal of such an analysis is to gain a deeper understanding of the data generating process. An approximation to Eq. (121) is required, given for example by the Akaike information criterion, deviance information criterion, leave-one-out cross-validation [28] or the widely applicable information criterion [37]. These criteria require the conditional independence of the data points. According to De Finetti's representation theorem16[90; 91], a sequence of random variables is conditional independent on the empirical distribution function, i.e. the sampling distribution of \((y,x)_{i}\), if they are exchangeable [54]. However, as this paper incorporates correlations between the used data points and the off-diagonal elements of the resulting covariance matrix are non-identical, see Section IV, the data pairs \((y,x)_{i}\) are no longer exchangeable. Thus, none of the criteria can be applied to the problem at hand. Footnote 16: The theorem was extended to finite sequences of random variables by Diaconis and Freedman [90]. Nevertheless, for future analysis that have exchangeable data pairs and involving mathematical ambiguities, it shall be discussed why one should use the widely applicable information criterion. The widely applicable information criterion was chosen for two reasons. On the one hand, it "_is fully Bayesian in that it uses the entire posterior distribution_" [36; p. 2] and is an improvement over the Akaike information criterion and deviance information criterion [28; 36]. On the other hand, methods such as K-fold cross validation [36], which require the data to be split into holdout and training sets, should not be used within this analysis. Running the Markov chain Monte Carlo sampling with holdout data points could emphasize one or even produce more mathematical ambiguities, making the estimate of Eq. (121) unreliable. In other words: such a method "_has a different problem in that it relies on inference from a smaller subset of the data being close to inference from the full data set, an assumption that is typically but not always true_". [36; p. 16]. The widely applicable information criterion can be calculated by [37]: \[\mathcal{WALC}:=\mathcal{BTL}+\mathcal{FV}, \tag{122}\] with the Bayes training loss [37]: \[\mathcal{BTL}:=-\frac{1}{N}\sum_{i=1}^{N}\log\mathbb{E}_{\mathbf{\Theta}}[p(y_{i} \mid\mathbf{\Theta})], \tag{123}\] and the functional variance [37]: \[\mathcal{FV}:=\frac{1}{N}\sum_{i=1}^{N}\mathbb{V}_{\mathbf{\Theta}}[\log p(y_{i} \mid\mathbf{\Theta})], \tag{124}\] with the sample variance \(\mathbb{V}_{\mathbf{\Theta}}\). 
Hence, Eq. (124) serves as a bias term for models comprising more parameters [28]. The quantities in Eqs. (123) and (124) can be calculated from sampled parameter distributions \(\mathbf{\Theta}^{s}\)[36]: \[\widehat{\mathcal{BTL}}=-\frac{1}{N}\sum_{i=1}^{N}\log{\left(\frac {1}{S}\sum_{s=1}^{S}p(y_{i}\mid\mathbf{\Theta}^{s})\right)}, \tag{125}\] \[\widehat{\mathcal{FV}}=\frac{1}{N}\sum_{i=1}^{N}\mathbb{V}_{\mathbf{ \Theta}^{s}}[\log p(y_{i}\mid\mathbf{\Theta}^{s})]. \tag{126}\] The asymptotic equivalence of Bayes generalization loss and \(\mathcal{WALC}\) can be shown to be [37]: \[\mathbb{E}[\mathcal{BGL}]=\mathbb{E}[\mathcal{WALC}]+\mathcal{O}(\frac{1}{N}), \tag{127}\] where \(N\) data points were used. ## Appendix F Convergence diagnostics Markov chain Monte Carlo convergence diagnostics for the truncation orders \(\ell_{\mathrm{max}}=1\) and \(\ell_{\mathrm{max}}=2\) for all analyzed energies are shown in Figs. 17 and 18. Figure 17: Markov chain Monte Carlo convergence diagnostics for the truncation order \(\ell_{\rm max}=1\). Shown are the potential-scale-reduction statistic \(\hat{R}\) (the grey, dashed line indicates the value of 1.01) and the Monte Carlo standard error (MCSE) for the median divided by the median in percent (the grey, dashed line indicates the value of 1%). Figure 18: Markov chain Monte Carlo convergence diagnostics for the truncation order \(\ell_{\rm max}=2\). Shown are the potential-scale-reduction statistic \(\hat{R}\) (the grey, dashed line indicates the value of 1.01) and the Monte Carlo standard error (MCSE) for the median divided by the median in percent (the grey, dashed line indicates the value of 1%).
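As a supplementary illustration of the quantities defined in Appendix E, the following minimal sketch (Python with NumPy/SciPy; the toy data and posterior draws are purely illustrative and do not correspond to the analysis of this work) evaluates the sample-based estimators of Eqs. (125) and (126) from a matrix of pointwise log-likelihoods and combines them into the criterion of Eq. (122).

```python
import numpy as np
from scipy.special import logsumexp

def waic(log_lik):
    """WAIC estimate from a matrix of pointwise log-likelihoods.

    log_lik has shape (S, N): S posterior draws Theta^s and N data points y_i,
    with log_lik[s, i] = log p(y_i | Theta^s).  Implements Eqs. (125)/(126).
    """
    S, N = log_lik.shape
    # Bayes training loss, Eq. (125): -(1/N) sum_i log( (1/S) sum_s p(y_i | Theta^s) )
    btl = -np.mean(logsumexp(log_lik, axis=0) - np.log(S))
    # Functional variance, Eq. (126): (1/N) sum_i Var_s[ log p(y_i | Theta^s) ]
    fv = np.mean(np.var(log_lik, axis=0, ddof=1))
    return btl + fv                     # Eq. (122)

# Toy check: data y_i ~ N(0, 1), posterior draws of the mean mu ~ N(0, 0.1^2)
rng = np.random.default_rng(0)
y = rng.standard_normal(50)
mu = 0.1 * rng.standard_normal(2000)
log_lik = -0.5 * (y[None, :] - mu[:, None]) ** 2 - 0.5 * np.log(2 * np.pi)
print(waic(log_lik))
```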
2301.12087
Optimization and scheduling for a large scale urban transportation system in a fast-changing world
This paper proposes a set of technological solutions to transform existing transport systems into more intelligent, interactive systems by utilizing optimization and control methods that can be implemented in the near future. This will result in improved public services and quality of life for residents. Three application scenes that are closely related to people's daily life are discussed. We first propose a traffic light scheduling strategy using a model predictive control (MPC) method, with the aim of fairly minimizing delays for both pedestrians and vehicles. Then, a combined dispatching-operation system is proposed to increase control flexibility, with a corresponding implementation solution for boarding control. Finally, a possible scheme to combine both public transport and autonomous vehicle systems is proposed to improve existing public transport systems.
Yi Zhang
2023-01-28T04:35:13Z
http://arxiv.org/abs/2301.12087v1
# Shape the Future of ITS

###### Abstract
The rapid growth of the population facilitates urban sprawl and automobile production, and leads to heavy traffic congestion, pollution, noise and traffic fatalities. Therefore, it is indispensable to develop an eco-friendly, sustainable transport system that makes high-performance use of the existing traffic network. Although many transformational products, such as autonomous vehicles (AVs) and drones, have been developed and used on the road, and their quality is continually upgraded, a 100% replacement of traditional cars with AVs requires not only safety and security evaluation but also the corresponding physical infrastructure, e.g., non-signalized intersections and charging stations. Also, we still have a long way to go before AVs with level 5 automation are fully commercialized. Thus, it is predictable that traffic signals will still play an important role for a long time to come in this hybrid transportation world, which involves both human-driven cars and AVs at different automation levels. During this period of transition, a feasible and implementable approach is more helpful and practical for reshaping the transportation system in the near future. The concept of pedestrian/transit-oriented transport, which is designed for the human body instead of the car body, has been proposed by researchers [2]: "cities with very high walking, cycling and transit mode share (i.e., 75% or more) typically have high density, mixed use urban centres at or above 100-200 people per hectare and are supported by a transportation strategy that prioritizes pedestrians first, then cyclists followed by transit users." However, vehicle traffic still receives excessively high attention in many cities' guidelines, and walkability is seldom taken into account. Also, walking is the access mode to public transport (PT), so PT services could accordingly be promoted if pedestrian walking safety and pleasantness are ensured. The PT system has been studied for several decades due to its large ridership and its sustainability in terms of economic efficiency, environmental protection and social equity. Proper bus dispatching and operation can attract more passengers, which encourages commuters to change their travel mode from private automobiles to public buses and further alleviates traffic congestion and air pollution. However, the PT system, a typical ride-sharing transport serving a variety of access needs and providing an equal social value, now faces challenges from the mobility-on-demand (MOD) system, which is famed for its easy transactions and convenient access via mobile phones. Therefore, we need to explore and create a win-win cooperation model between the PT and AV systems in the current transitional period. The flexible, demand-driven pattern of AVs could compensate for the limitations caused by fixed routes in the PT system. On the other hand, the PT system still copes with high-volume transfer tasks, which may not be feasible for AVs due to safety concerns. The objective of this essay is to propose a set of coordinated technological solutions to transform the existing transport system into a more intelligent, interactive system by adopting optimization and control methods implementable in the near future, thereby improving public services and quality of life for residents. In this essay, three different application scenes that are closely related to people's daily life are discussed.
We first propose a traffic light scheduling strategy via a model predictive control (MPC) method, with the aim of fairly minimizing both pedestrians' and vehicles' delays. After that, a combined dispatching-operation system is proposed to further increase the control flexibility, and a corresponding implementation solution for boarding control is also illustrated. Finally, a possible scheme to combine both the PT and AV systems is proposed to improve the existing PT system.

## II Working Packages and Tasks

### _Traffic light scheduling for vehicle-pedestrian mixed-flow networks_

Most traffic signal controllers differentiate between vehicles and pedestrians and focus on vehicle flows because of their great contribution to traffic; this is reasonable when pedestrian volume is low. However, in downtown areas where large numbers of pedestrians interfere with vehicular traffic, optimizing traffic signals only for vehicles may create more conflicts between both groups of traffic participants and potentially reduce economic benefits, since pedestrians in CBD areas are usually potential customers of nearby shopping malls. According to SGS Economics and Planning [1], optimized pedestrian flow can generate an additional $1.3 billion a year for the Melbourne CBD area. On the other hand, rich data can be obtained with the help of advanced sensing equipment, such as V2X, 5G communication and Lidar. Powerful machine learning algorithms can be effectively utilized to help predict traffic flows based on the large amount of collected data, which better serves the optimized signal controller. In view of this, we propose a signal controller with the aim of fairly minimizing both vehicles' and pedestrians' delays [4][7]. Fig. 1 illustrates the framework of the proposed real-time traffic light scheduling strategy implemented in the simulation software VISSIM. A macroscopic flow model for both pedestrians and vehicles is developed, and the impact of the signal on pedestrian crossing capacity is captured in the model. Benefiting from the advanced sensors, a signal priority level could also be incorporated into the model to enable higher priority for public buses. The mixed-flow model is then solved by adopting a commercial optimization solver or evolutionary algorithms. After that, the optimized signal phases and durations are sent to the simulator VISSIM, which mimics the real urban traffic environment. Meanwhile, traffic information, such as traffic volumes and turning ratios, is stored in a back-end database, which is used to re-train the AI models with machine learning algorithms to predict the required information. The predicted traffic parameters and the current real-time information are all sent to the controller side, which is solved in a rolling-horizon manner, and the whole process continues as the traffic system evolves, forming a closed-loop control strategy.

### _A combined dispatching-operation strategy for public bus management incorporated with boarding control_

The bus operation systems in most of today's studies are designed based on known bus dispatching times or scheduled headways/frequencies. On the other hand, bus dispatching systems seldom consider operation control for on-road buses. It is understandable that dealing with these two problems separately could reduce the computational complexity significantly compared with a complicated combined model. However, with the increasing application and maturation of telematics (e.g., Automatic Vehicle Location (AVL), Automatic Passenger Counting (APC), etc.)
in the bus management system, the collection of real-time information becomes possible. Also, constantly upgraded computers with ever more computing power keep breaking performance records, so it is predictable that solving a combined optimization problem is implementable [5]. Decision variables, such as the bus dispatching time, the bus speed between any two adjacent stops, the bus dwell time at each stop and the OD-based boarding volume, could all be described in a holistic model to enable this combined dispatching-operation bus management system [6]. Also, boarding control has been studied in some of the literature and has proved highly efficient in improving bus service quality. However, it still has not been widely implemented in the real world. Fig. 2 gives a graphical illustration of a future bus stop, which makes boarding control applicable. Imagine that, when a bus is approaching the stop, a message could be sent to passengers via mobile apps or displayed on a screen board located at the bus stop, and only passengers at the front of the queue would be selected to enter the designated boarding area at the bus stop. The identification of the front-queue passengers can be realized by a camera or Lidar installed at the bus stop. When the bus reaches the bus stop, only passengers who are waiting at the designated area are allowed to board the bus. This requires infrastructural enhancement at the bus stop, not only a re-design of the area but also advanced sensors.

### _Autonomous bus fleet management for a mobility-on-demand system_

To increase vehicle utilization and reduce carbon dioxide emissions, online carsharing services have been rolled out in many cities in the past few years. Also, the adoption of electric vehicles could potentially reduce emissions and promote a sustainable environment, while the forthcoming commercialization of AVs has constantly been reshaping the transportation system. All of the above emerging concepts and technologies have combined and led to the autonomous mobility-on-demand (AMOD) system. The AMOD system is not a new concept and has been proposed since 2014 [3]; however, the implementation of and research studies about the AMOD system mainly focus on private cars and taxi services, and this may bring challenges to traditional public transport, which has been regarded as a symbol of equity and accessibility. Therefore, it is strongly necessary to identify and explore the synergistic possibilities between AMOD and public transport, especially during this transitional period. Fig. 3 illustrates one possible case capturing how AVs could support existing public bus lines. The left subfigure describes two traditional bus lines served by corresponding buses dispatched from the terminal. The orange bar represents the demand size at each bus stop. All control variables mentioned in Section II-B could be adopted and tuned to facilitate the operation of the traditional bus service. Meanwhile, we could also dispatch AVs to serve bus stops on multiple bus lines that have high demands, thanks to their flexible on-demand characteristics. AVs are normally commercialized as electric cars with limited battery capacity; thus, it is necessary to develop a smart AV routing strategy catering for the demands of the users as well as the battery capacity level, as illustrated in Fig. 4.
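As a toy illustration of the dispatching idea sketched in Fig. 3 (and not of the actual strategy to be developed in this work), the following snippet greedily sends idle AVs to the reachable bus stop with the highest remaining demand, subject to a simple battery-range budget and a per-vehicle boarding cap; all stop names, positions, demands and capacities are made-up example values.

```python
# Hypothetical sketch: greedy dispatch of AVs to high-demand stops under a range budget.
from dataclasses import dataclass

@dataclass
class AV:
    av_id: str
    position: float      # position along the corridor (km)
    range_left: float    # remaining battery range (km)
    capacity: int = 10   # seats per AV

stops = {"S1": 4, "S2": 18, "S3": 9, "S4": 25}          # stop -> waiting passengers
stop_pos = {"S1": 1.0, "S2": 3.5, "S3": 6.0, "S4": 8.0}  # stop -> location (km)
fleet = [AV("AV1", 0.0, 12.0), AV("AV2", 5.0, 4.0)]

def dispatch(fleet, stops, stop_pos):
    plan = {}
    for av in fleet:
        reachable = [s for s in stops
                     if abs(stop_pos[s] - av.position) <= av.range_left and stops[s] > 0]
        if not reachable:
            continue
        target = max(reachable, key=lambda s: stops[s])   # serve the highest demand first
        board = min(av.capacity, stops[target])           # boarding control: cap the volume
        stops[target] -= board
        av.range_left -= abs(stop_pos[target] - av.position)
        av.position = stop_pos[target]
        plan[av.av_id] = (target, board)
    return plan

print(dispatch(fleet, stops, stop_pos))   # e.g. {'AV1': ('S4', 10), 'AV2': ('S2', 10)}
```

A full dispatching strategy would of course replace this one-step greedy rule with an optimization over routes, boarding volumes and charging decisions, as discussed above.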
As AVs do not follow a fixed bus route and on-board passengers need to be sent to their destined stops, an on-demand autonomous bus fleet dispatching strategy inevitably requires boarding control at each bus stop. This enables the system to consider not only the routing decision but also the volume dynamics, which is different from the current one-passenger pick-up and drop-off problem. The optimization involving routing, subsequent bus stop selection and the corresponding boarding volume makes the problem more challenging but also interesting.

## III Conclusion

In this essay, technological solutions from a pedestrian/transit-oriented perspective have been provided. Firstly, an adaptive traffic signal control framework is presented, built around a macroscopic mixed-flow optimization model, the VISSIM simulation platform, a historical database and a machine learning-enabled AI prediction model. The experimental results in our paper [4] have demonstrated that the proposed strategy in a Manhattan-shaped network can strike a good balance between pedestrians' and vehicle drivers' needs. Therefore, we are optimistic about implementing this strategy in the real world to enhance the existing traffic signals in the near future. Next, the bus dispatching and its on-road operation, especially the boarding control, are combined to minimize the passenger delay time and the operating bus vacancy, which makes our strategy more flexible and adaptive in meeting passenger demand. In our previous study [6], a multi-bus dispatching and boarding control strategy can reduce roughly 50% of the remaining passenger volumes when compared with the timetable-based fixed schedule, which is quite promising and exciting. Moreover, to achieve a synergistic objective between the PT system and the AV system, a scheme is proposed where AVs are dispatched to pick up passengers at high-demand stops to support the PT lines. This scheme shall be further studied in my future work to incorporate multiple optimized variables, including AV routing, volume dynamics and boarding limits. Overall, the essay presents three application scenes in the intelligent transportation field. The suggested strategies and guidelines can assist in developing an intelligent future urban transport system in order to provide citizens with a smarter, safer and more interactive transportation experience.
2306.02233
Bulk and film synthesis pathways to ternary magnesium tungsten nitrides
Bulk solid state synthesis of nitride materials usually leads to thermodynamically stable, cation-ordered crystal structures, whereas thin film synthesis tends to favor disordered, metastable phases. This dichotomy is inconvenient both for basic materials discovery, where non-equilibrium thin film synthesis methods can be useful to overcome reaction kinetic barriers, and for practical technology applications where stable ground state structures are sometimes required. Here, we explore the uncharted Mg-W-N chemical phase space, using rapid thermal annealing to reconcile the differences between thin film and bulk powder syntheses. Combinatorial co-sputtering synthesis from Mg and W targets in a N$_2$ environment yielded cation-disordered Mg-W-N phases in the rocksalt (0.1< Mg/(Mg+W) <0.9), and hexagonal boron nitride (0.7< Mg/(Mg+W) <0.9) structure types. In contrast, bulk synthesis produced a cation-ordered polymorph of MgWN$_2$ that consists of alternating layers of rocksalt-like [MgN$_6$] octahedra and nickeline-like [WN$_6$] trigonal prisms (denoted "rocksaline"). Thermodynamic calculations corroborate these observations, showing rocksaline MgWN$_2$ is stable while other polymorphs are metastable. We also show that rapid thermal annealing can convert disordered rocksalt films to this cation-ordered polymorph near the MgWN$_2$ stoichiometry. Electronic structure calculations suggest that this rocksalt-to-rocksaline structural transformation should also drive a metallic-to-semiconductor transformation. In addition to revealing three new phases (rocksalt MgWN$_2$ and Mg$_3$WN$_4$, hexagonal boron nitride Mg$_3$WN$_4$, and rocksaline MgWN$_2$), these findings highlight how rapid thermal annealing can control polymorphic transformations, adding a new strategy for exploration of thermodynamic stability in uncharted phase spaces.
Christopher L. Rom, Rebecca W. Smaha, Callan A. Knebel, Karen N. Heinselman, James R. Neilson, Sage R. Bauers, Andriy Zakutayev
2023-06-04T02:05:51Z
http://arxiv.org/abs/2306.02233v1
# Bulk and film synthesis pathways to ternary magnesium tungsten nitrides ###### Abstract Bulk solid state synthesis of nitride materials usually leads to thermodynamically stable, cation-ordered crystal structures, whereas thin film synthesis tends to favor disordered, metastable phases. This dichotomy is inconvenient both for basic materials discovery, where non-equilibrium thin film synthesis methods can be useful to overcome reaction kinetic barriers, and for practical technology applications where stable ground state structures are sometimes required. Here, we explore the uncharted Mg-W-N chemical phase space, using rapid thermal annealing to reconcile the differences between thin film (on ambient or heated substrates) and bulk powder syntheses. Combinatorial co-sputtering synthesis from Mg and W targets in a N\({}_{2}\) environment yielded cation-disordered Mg-W-N phases in the rocksalt (0.1 \(<\) Mg/(Mg+W) \(<\) 0.9), and hexagonal boron nitride (0.7 \(<\) Mg/(Mg+W) \(<\) 0.9) structure types. In contrast, bulk synthesis produced a cation-ordered polymorph of MgWN\({}_{2}\) that consists of alternating layers of rocksalt-like [MgN\({}_{6}\)] octahedra and nickeline-like [WN\({}_{6}\)] trigonal prisms (denoted "rocksaline"). Thermodynamic calculations corroborate these observations, showing rocksaline MgWN2 is stable while other polymorphs are metastable. We also show that rapid thermal annealing can convert disordered rocksalt films to this cation-ordered polymorph, near the MgWN2 stoichiometry. Electronic structure calculations suggest that this rocksalt-to-rocksaline structural transformation should also drive a metallic-to-semiconductor transformation, but our resistivity measurements were only able to confirm the semiconducting behavior of rocksaline MgWN2 and rocksalt Mg3WN4. In addition to revealing three new phases (rocksalt MgWN2 and Mg3WN4, hexagonal boron nitride Mg3WN4, and rocksaline MgWN2), these findings highlight how rapid thermal annealing can control polymorphic transformations, adding a new strategy for exploration of thermodynamic stability in uncharted phase spaces. American Chemical Society, Department of Chemistry, University of California, Berkeley, CA 94720, USA ## 1 Introduction Ternary nitrides are an emerging class of ceramic materials with applications in solid-state lighting, electrochemical energy storage, optoelectronics, piezoelectrics, ferroelectrics, and wide-bandgap semiconductor devices [1]. However, nitrides are underexplored, lagging behind oxides with an order of magnitude fewer scientific publications and known structures [2, 3, 1]. Therefore, exploring chemical phase space to find new ternary nitrides will open avenues to identifying novel materials with intriguing functional properties that may underlie future technologies. 
Recent breakthroughs in high-throughput computational techniques successfully predicted many new ternary nitrides [3], and combinatorial co-sputtering has proven to be a powerful tool for experimentally realizing these predicted materials [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. Depending on the cation chemistry, such ternary nitride systems tend to produce the respective wurtzite-derived or rocksalt-derived structures.
For example, exploration of the Zn-Mo-N phase space by combinatorial sputtering revealed a wurtzite-like structure across a range of compositions, from metallic ZnMoN\({}_{2}\) to semiconducting wurtzite-like Zn\({}_{3}\)MoN\({}_{4}\) (with a bandgap of 2.4 eV) [17]. Similarly, the Mg-W-N phase space is a promising area of exploration because W has multiple possible oxidation states (between 0 and 6+, inclusive), potentially leading to varied structures and properties. Combinatorial co-sputtering is a good choice to rapidly survey this potentially complex phase space. However, materials discovered by combinatorial sputtering often deviate from those predicted by computational methods or synthesized in bulk on a key detail: cation (dis)order [4, 18]. This discrepancy can potentially be beneficial, such as when cation-disorder lowers the bandgap into the visible range [8, 18]. In other cases, cation disorder negatively impacts optoelectronic properties by localizing charge carriers [18, 19] or even leading to polymorphism [10]. How to control this structural polymorphism and cation disorder is still an open question. For example, annealing conditions are known to affect the degree of cation (dis)order [20, 21], but that control is often material-specific and difficult to explore in a high-throughput manner [18]. Therefore, understanding metastable phase formation and cation (dis)order in ternary nitrides remains a pressing challenge for the field to fully realize the tunable properties of this promising class of materials. In this report, we describe the discovery of several new Mg-W-N compounds in this previously-unexplored ternary phase space. We show that thin film combinatorial co-sputtering methods yielded cation-disordered rocksalt (RS, space group \(Fm\bar{3}m\), \(0.1<\) Mg/(Mg+W)\(<0.9\)) and hexagonal boron nitride structures (h-BN, space group \(P6_{3}/mmc\), \(0.7<\) Mg/(Mg+W)\(<0.9\)) covering the MgWN\({}_{2}\) and Mg\({}_{3}\)WN\({}_{4}\) stoichiometries. In contrast, our bulk ceramic methods yielded cation-ordered MgWN\({}_{2}\) with space group \(P6_{3}/mmc\). We call this cation-ordered structure "rocksaline" (RL) as a portmanteau of the rocksalt-like Mg and nickeline-like W layers. Thermodynamic calculations confirm that the RL polymorph is the ground state of the MgWN\({}_{2}\) composition. Rapid thermal annealing (RTA) of thin films converted the disordered RS structure to this ordered RL structure, in a narrow composition window near the MgWN\({}_{2}\) stoichiometry, resolving this differ ence between the thin film and the bulk synthesis results. The Mg3WN4 was only produced in thin films, and formed in either the RS or h-BN structure. Thermodynamic calculations reveal these polymorphs to be close in energy to one another and slightly metastable. Electronic structure calculations suggest that RL MgWN2 should be a semiconductor, while RS MgWN2 should be metallic. Resistivity measurements of the synthesized films as a function of composition and temperature show both RL MgWN2 and RS Mg3WN4 are semiconducting, but were unable to verify the charge transport behavior of RS MgWN2. These findings show how RTA treatment of disordered films can build upon existing combinatorial co-sputtering techniques to rapidly assess the thermodynamic synthesizability of a predicted cation-ordered phase. 
## 2 Methods ### Bulk structural measurements and analysis Powder X-ray diffraction (PXRD) measurements were performed using a Bruker DaVinci diffractometer with Cu K\(\alpha\) X-ray radiation. All samples were prepared for PXRD from within the glove-box by placing powder on off-axis cut silicon single crystal wafers to reduce the background, and then covered with polyimide tape to slow exposure to the atmosphere. However, as PXRD showed that the product (MgWN2) is air stable, a PXRD pattern was collected without tape to minimize the large scattering background (Figure 1). Full-pattern fitting of thin film XRD, GIWAXS, and PXRD data was performed using TOPAS v6 [22]. For thin film samples, 2D diffraction images showed texturing (i.e., preferred orientation), meaning that integrated peak intensities may not directly correspond to electron density. Therefore, we performed LeBail fits using the appropriate space group and refined lattice parameters and crystallite size broadening. For the MgWN2 phase in the RL structure, a model was created by substituting W for Mo in the previously reported MgMoN2 structure in space group \(P6_{3}/mmc\)[23]. Rietveld analysis was then performed to refine the lattice parameters, crystallite size broadening, and site occupancy. In all cases, 10-term polynomial functions were refined to fit the background. Structural visualizations were performed with VESTA [24]. ### Thin film synthesis and annealing experiments Combinatorial co-sputtering of Mg-W-N film libraries were conducted in two custom vacuum chambers, both with base pressures of \(<10^{-7}\) Torr. Mg and W targets (2 inch diameter, Kurt J. Lesker, 99.95% purity) were angled towards a stationary substrate and sputtered using radiofrequency (RF) excited plasma of the Ar/N2 gas mixture in the chamber. Sputter powers ranged from 30 W to 90 W for each target, to shift the Mg/(Mg+W) ratio across the whole composition window. Gases were introduced at 50 sccm Ar and 50 sccm N2, with a 10 Torr process pressure during deposition. The N plasma intensity was enhanced by RF plasma source at 350 W. Most samples were deposited on 2 inch by 2 inch (001)-oriented Si substrates. Select samples were deposited on insulating substrates (e.g., 100 nm SiO2 on Si or 100 nm SiN\({}_{x}\) on Si) for electronic property measurements, as indicated in the text. Select samples were coated with a 15 nm TiN capping layer, sputtered from a 2 inch diameter Ti target, to protect against atmospheric exposure. During these capping depositions, the substrate was rotated to ensure a homogeneous capping layer. A diagram for this experimental setup is shown in Figure S1A. Rapid thermal annealing (RTA) experiments were conducted on individual compositionally-graded library rows in flowing N2 atmosphere at ambient pressure. Heating profiles started with a +100 \({}^{\circ}\)C/min ramp to 100 \({}^{\circ}\)C and a 3 min dwell to drive off adsorbed water, followed by a +100 \({}^{\circ}\)C/min ramp to a \(T_{\text{anneal}}\) set-point in the 600-1200 \({}^{\circ}\)C range for a 3 min dwell. Samples were cooled by turning off the heating source. A diagram for this experimental setup is shown in Figure S1B. ### Thin film composition and structure Combinatorial libraries were measured using the standard 4\(\times\)11 grid employed at NREL, with data analysis conducted using the COMBIgor software package [25]. Each library was mapped with X-ray diffraction (XRD) using a Bruker D8 Discover with Cu K\(\alpha\) radiation and an area detector. 
Select samples were measured by high-resolution synchrotron grazing incidence wide angle X-ray scattering (GIWAXS) at the Stanford Synchrotron Radiation Lightsource (SSRL) at a wavelength of 0.9744 A with a Rayonix MX225 CCD area detector, a 3\({}^{\circ}\) incident angle, and a 50 \(\mu\)m \(\times\) 150 \(\mu\)m spot size. GIWAXS detector images were integrated with GSAS-II [26]. Compositional analysis was performed with X-ray fluorescence (XRF) and Rutherford Back-Scattering (RBS). Metal ratios were mapped using a Bruker M4 Tornado XRF with a Rh source operating at 50 kV and 200 \(\mu\)A. The spot size was 25 \(\mu\)m in diameter. The measurements were performed under vacuum (\(<\)20 mbar) with an exposure time of 200 s for each measurement. Nitrogen and oxygen ratios for select samples were quantified with RBS. RBS was run in a 168\({}^{\circ}\) backscattering configuration using a model 3S-MR10 RBS system from National Electrostatics Corporation with a 2 MeV He\({}^{+}\) beam energy. Samples were measured for a total integrated charge of 160 \(\mu\)C. RBS spectra were modeled with the RUMP software package [27]. ### Thin film property measurements Room temperature resistivity was measured on thin films using a custom-built collinear four-point probe instrument by sweeping current between the outer two pins while measuring voltage between the inner pins (1 mm between each pin). Conventional geometric corrections were applied to convert the measured resistance into sheet resistance and then resistivity [28]. The measured films were deposited on insulating substrates (either 100 nm thick SiO\({}_{2}\) on Si or 100 nm thick SiN\({}_{x}\) on Si) to avoid contribution from the substrates. Temperature-dependent electrical resistivity was measured on thin films using a Lake Shore Cryotronics Model 8425. Small squares (5 mm \(\times\) 5 mm) were cleaved out of libraries deposited on insulating substrates. For compositions near MgWN\({}_{2}\), indium contacts were pressed into the film near the corners of the squares. Indium contacts were non-ohmic on Mg\({}_{3}\)WN\({}_{4}\) films, so Ti/Au contacts were deposited by evaporation. Temperature-dependent sheet resistance was measured from 104 K to 298 K for most samples, with RL MgWN\({}_{2}\) measured from 36 K to 298 K. Resistivity was calculated using XRF-measured film thickness. ### Bulk synthesis Powders of Mg\({}_{3}\)N\({}_{2}\) (Alfa Aesar, \(>\) 99.6%, 325 mesh) and W (Sigma-Aldrich, 99%, 42 \(\mu\)m) were used as received. As these reagents are air sensitive, they were prepared and stored in an argon-filled glovebox (O\({}_{2}\)\(<\) 0.1 ppm, H\({}_{2}\)O \(<\) 0.5 ppm). Bulk reactions were prepared by grinding together the reagent powders with an agate mortar and pestle, pelletizing the mixture by cold-pressing in a 0.25 in die at 300 MPa (approximately 100-200 mg per pellet), loading the pellet into a cylindrical alumina crucible held horizontally in an alumina boat, and loading the boat into a mullite or quartz process tube. A Zr foil cap was fit into the mouth of the alumina crucible to decrease Mg\({}_{3}\)N\({}_{2}\) loss by volatization and to sacrificially react with any trace O\({}_{2}\). Without air exposure, the samples were reacted in a tube furnace under flowing purified N\({}_{2}\) (ca. 20 mL/min flow rate). A diagram for this system is shown in Figure S1C. 
Reactions were conducted by heating the sample at +10 \({}^{\circ}\)C/min to the dwell temperature, dwelling for approximately 5-20 h at various temperatures up to 1100 \({}^{\circ}\)C, and then cooling by switching off the furnace. Samples were recovered into the Ar glovebox. This procedure was adapted from the strategy used by Verrelli, et al., to synthesize MgMoN\({}_{2}\)[23]. ### Computational methods Formation energies were calculated using density functional theory (DFT) using the corrected generalized gradient approximation (GGA+U) implemented in the Vienna Ab initio Structural Package (VASP). These calculated values were sourced from the Materials Project when available (v2021.11.10) [29, 30]. Calculations for additional structures that were not already in the Materials Project database (i.e., all MgWN\({}_{2}\) polymorphs, RS and h-BN Mg\({}_{3}\)WN\({}_{4}\)) were conducted using Atomate (v1.0.3) [31] and Fireworks (v2.0.2) [32] to execute the structure optimization workflow with compatibility with Materials Project entries. Calculations were carried out on cation-ordered versions of the experimentally observed cation-disordered structures. Pymatgen (v2022.4.19) was used to construct the ternary phase diagram shown in Figure 4[33]. ## Results and Discussion ### Bulk synthesis of cation-ordered MgWN2 Bulk syntheses yielded MgWN2 in a cation-ordered layered hexagonal crystal structure (Figure 1) previously reported for MgMoN2[23, 34]. We call this structure "rocksaline" (RL) for short, a portmanteau of rocksalt and nickeline, because this structure has interleaved layers of octahedrally-coordinated Mg2+ (rocksalt-like) and W4+ in a trigonal-prismatic site (nickeline-like). The RL MgWN2 phase formed as a black powder from a reaction between Mg3N2 and W powders in a 2:3 ratio heated at 1080 \({}^{\circ}\)C for 10 h. As the balanced reaction is Mg3N2+3W+2N2 - 3MgWN2, this synthesis requires a full excess equivalent of Mg3N2 to proceed to completion. Still, W often persisted as an impurity owing to the volatility of Mg at elevated temperatures and the refractory nature of W. Syntheses conducted at lower temperatures did not induce reaction, suggesting a significant kinetic barrier to reactivity between Mg3N2 and W. Figure 1: A) Rietveld refinement (orange trace) of a PXRD pattern (black dots) of MgWN2 produced by heating 2Mg3N2+3W (100% excess Mg3N2) under flowing N2 for 10 h at 1080 \({}^{\circ}\)C. The difference trace is shown in blue. The simulated pattern of RL MgWN2 is shown for reference in the box above. W (*), MgO (\(\ddagger\)), and an unidentified phase (\(\dagger\)) are trace impurities in the pattern. B) Ball-and-stick models of RL MgWN2 from different perspectives. Crystallographic analysis via refining the degree of site inversion (\(x\)) for (Mg\({}_{1\text{-}x}\)W\({}_{x}\))(W\({}_{1\text{-}x}\)Mg\({}_{x}\))N\({}_{2}\) using the Rietveld method leads to \(x=0.115(10)\), suggesting some cation disorder (Table 1). For comparison, \(x=0.5\) would indicate complete cation disorder, and \(x=0\) would indicate a fully ordered phase. However, site occupancy is modeled by fitting relative peak intensities, and peak intensities also vary with preferred orientation which may be present in these data but which were not included in the model [35]. Cation ordering is most clearly defined by a (002) reflection at \(2\theta=17^{\circ}\) (Figure S5), and the strong reflection observed in Figure 1 suggests a substantial degree of cation ordering. 
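To make the role of the (002) reflection explicit, the following toy calculation (our illustration, not part of the reported Rietveld analysis) evaluates a metal-sublattice-only structure factor for the site-inversion model \((\text{Mg}_{1\text{-}x}\text{W}_{x})(\text{W}_{1\text{-}x}\text{Mg}_{x})\text{N}_{2}\), assuming idealized fractional coordinates \(z(\text{Mg})=0,1/2\) and \(z(\text{W})=1/4,3/4\) and neglecting the nitrogen contribution; under these assumptions \(|F(002)|^{2}\propto(1-2x)^{2}\), so the superstructure reflection vanishes for full cation disorder.

```python
import numpy as np

# Crude scattering-power stand-ins (~ atomic numbers); real form factors depend on Q.
f_Mg, f_W = 12.0, 74.0
z_mg_sites = [0.0, 0.5]    # idealized Mg layer positions along c
z_w_sites = [0.25, 0.75]   # idealized W layer positions along c

def F_00l(l, x):
    """Metal-sublattice structure factor of (00l) for site inversion x."""
    f_site_mg = (1 - x) * f_Mg + x * f_W    # average scattering on the "Mg" site
    f_site_w = (1 - x) * f_W + x * f_Mg     # average scattering on the "W" site
    return (sum(f_site_mg * np.exp(2j * np.pi * l * z) for z in z_mg_sites)
            + sum(f_site_w * np.exp(2j * np.pi * l * z) for z in z_w_sites))

for x in (0.0, 0.115, 0.5):
    print(f"x = {x:5.3f}   |F(002)|^2 = {abs(F_00l(2, x))**2:8.1f}")
# |F(002)|^2 scales as 4 (1 - 2x)^2 (f_W - f_Mg)^2: it is largest for the fully ordered
# rocksaline structure, drops to ~59% of that value for the refined x = 0.115, and
# vanishes for complete cation disorder (x = 0.5).
```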
The isostructural MgMoN\({}_{2}\) synthesized by the same method was modeled to be fully ordered by combined analysis of synchrotron PXRD and neutron powder diffraction data [23]. The formation of RL MgWN\({}_{2}\) by high-temperature ceramic synthesis indicates that the RL polymorph defines the thermodynamic ground state. Excess Mg\({}_{3}\)N\({}_{2}\) used in bulk syntheses did not lead to any signs of a more Mg-rich phase (i.e., Mg\({}_{3}\)WN\({}_{4}\)), so we hypothesize any ordered configurations of those materials (e.g., an ordered wurtzite structure, \(Pmn2_{1}\)) may be destabilized at the elevated temperatures (and thus lower nitrogen chemical potential, \(\mu_{\text{N}}\)) required for ceramic synthesis. The bulk synthesis results differed from the the thin-film work presented next, showing the contrast between different precursor options: diffusion-limited bulk-powders compared to atomically-dispersed films. ### Synthesis of Mg-W-N thin films by combinatorial co-sputtering Combinatorial co-sputtering from Mg and W targets in a N\({}_{2}\)/Ar environment resulted in cation-disordered phases with either the RS or the h-BN structure, as determined by laboratory XRD (Figure 2). The RS structure shows the greatest degree of stability, crystallizing across a wide range of compositions (0.1 \(<\) Mg/(Mg+W)\(<\) 0.9) and substrate temperatures (up to 600 \({}^{\circ}\)C). At elevated substrate temperatures (ca. 700 \({}^{\circ}\)C), Mg volatilizes, leaving behind metallic W. At Mg/(Mg+W) ratios near 0.75 (i.e., Mg\({}_{3}\)WN\({}_{4}\)), a h-BN structure is observed in some libraries; it was characterized in greater detail by GIWAXS (Figure 2B). This h-BN structure only appeared in depositions using one of the custom vacuum chambers, but not the other. This suggests a subtle (and yet Figure 2: A) Phase diagram of thin film Mg-W-N extracted from combinatorial growths at various temperatures (\(T_{\rm dep}\)). B) GIWAXS patterns from a library deposited at ambient conditions, showing the transition between the rocksalt (RS) and h-BN structures. The wurtzite (WZ) and anti-bixbyite (BX) structures are not observed. Ball-and-stick models of C) h-BN Mg\({}_{3}\)WN\({}_{4}\). D) RS MgWN\({}_{2}\). undetermined) process parameter, such as nitrogen-plasma density or oxygen content, may play a role. Even within the one chamber that yielded h-BN Mg\({}_{3}\)WN\({}_{4}\), some Mg-rich samples still show the RS structure, suggesting these two polymorphs may be close in energy. Other Mg-rich points did not exhibit any crystalline phases and are marked as amorphous in Figure 2A. The coexistence of h-BN and RS polymorphs near the Mg\({}_{3}\)WN\({}_{4}\) stoichiometry suggests the phases may be energetically similar for this Mg/(Mg+W) ratio. Indeed, they are structurally related, with the h-BN structure being an intermediate in a displacive transformation between the RS and WZ structures [36]. This h-BN structure is uncommon among ternary nitrides. The only prior report we can identify in literature is that of Zn-rich compositions for ZnZrN\({}_{2}\)[10]. However, the five-fold coordination environment of the h-BN is analogous to the transition state experienced by WZ-type ferroelectric materials (e.g., Al\({}_{1-x}\)Sc\({}_{x}\)N) as they undergo switching [37]. 
As another example of a similar motif, Mg\({}_{3}\)Al\({}_{3}\)N\({}_{5}\) has an Al\({}^{3+}\) ion split across two face-sharing tetrahedral sites [38], which is structurally similar to the WZ \(\rightarrow\) h-BN \(\rightarrow\) WZ displacement of ferroelectrics. Lastly, a prior study predicted the ground state for Mg\({}_{2}\)NbN\({}_{3}\) and Mg\({}_{2}\)TaN\({}_{3}\) to be this h-BN structure type [4], although sputtering experiments subsequently showed that Mg\({}_{2}\)NbN\({}_{3}\) crystallizes as a cation-disordered rocksalt [9, 39]. The infrequent occurrence of this polymorph suggests decreased stability relative to other high-symmetry phases like the RS polymorph, a hypothesis supported by our RTA experiments (Figure S3) and inability to produce it in bulk. ### Rapid thermal annealing of combinatorial libraries RTA experiments of combinatorial film libraries show that annealing can induce cation ordering near the MgWN\({}_{2}\) stoichiometry (Figure 3). The samples near the stoichiometric MgWN\({}_{2}\) composition retained the RS structure at \(T_{\text{anneal}}\) = 600 \({}^{\circ}\)C, but a clear structure transition to the RL polymorph occurred by \(T_{\text{anneal}}\) = 900 \({}^{\circ}\)C (Figure 3A). This indicates that the as-deposited RS structure is kinetically-stable up to moderately high temperatures (ca. 600 \({}^{\circ}\)C). High temperatures (ca. 900 \({}^{\circ}\)C) are needed to allow local diffusion of the randomly-dispersed metals in octahedral environments (the RS structure) to their energetically-preferred coordination environments (octahedral Figure 3: A) Synchrotron GIWAXS patterns of MgWN\({}_{2}\) annealed at 600 \({}^{\circ}\)C and 900 \({}^{\circ}\)C. B, C) Laboratory XRD heatmaps as a function of Mg/(Mg+W) for library rows annealed at 600 \({}^{\circ}\)C and 900 \({}^{\circ}\)C. Labels for the (100) and (102) RL MgWN\({}_{2}\) reflections are omitted for clarity. D) Phase diagram of Mg-W-N depositions as a function of annealing temperature. Samples which were manually identified are indicated by colored markers. Empty markers were not manually labeled but were measured by XRF and XRD, and phases can be inferred from neighboring points. Mg\({}^{2+}\) and trigonal-prismatic W\({}^{4+}\) in the RL structure. For Mg-poor compositions (Mg/[Mg+W]\(<0.4\)), annealing produces a slightly different structure than the RS observed in depositions at elevated temperatures, a structure we call WN\({}_{x}\). XRD patterns show two reflections that are similar to the RS (111) and (200) reflections, but which are spaced by slightly too large a gap in 2\(\theta\) to be consistent with the \(Fm\bar{3}m\) structure (Figure S2). However, we are not able to precisely identify the space group of this phase. Only two reflections were detected, and diffraction images show substantial texturing, which suggests that additional reflections may exist outside the measured \(\chi\) range. Furthermore, the W-N binary system is complex, with 13 unique structures reported in the Inorganic Crystal Structure Database (ICSD) ranging in composition from W\({}_{2}\)N to WN\({}_{2}\)[40, 41, 42, 43, 44]. Given this complexity and ambiguity, we simply refer to these Mg-poor phases as WN\({}_{x}\). This difference may stem from the elevated nitrogen chemical potential present in combinatorial depositions but absent during annealing, which may affect how much nitrogen is present in the film [45, 14]. 
However, annealed samples labeled RS in Figure 3D (i.e., those with Mg/[Mg+W] \(\geq\) 0.5) are well fit with the \(Fm\bar{3}m\) space group. The RS to RL transformation only occurs in a narrow composition window near Mg/(Mg+W) = 0.5 (i.e., MgWN\({}_{2}\), Figure 3D). For Mg-poor compositions with Mg/(Mg+W)\(<\) 0.42 and Mg-rich compositions with Mg/(Mg+W)\(>\) 0.62, the WN\({}_{x}\) and RS structures persisted at \(T_{\rm anneal}\) = 900 \({}^{\circ}\)C. This shows that the ordered RL structure has a narrow compositional tolerance, while the WN\({}_{x}\) and RS structures can accommodate a large degree of off-stoichiometry. These results, along with the thermodynamic calculations presented next (Figure 4) confirm that the RL phase is the thermodynamic ground state up to approximately 1000 \({}^{\circ}\)C, as initially shown by bulk syntheses. ### Thermodynamic analysis Calculated formation energies relative to the binaries show that RL MgWN\({}_{2}\) is the only thermodynamically stable ternary in the Mg-W-N system, according to DFT calculations of the cation-ordered structures (Figure 4). The striking favorability of the RL polymorph of MgWN\({}_{2}\) is driven by the electronic preference of \(d^{2}\) metals (like W\({}^{4+}\)) for trigonal-prismatic coordination environ Figure 4: A) Ternary phase diagram for the Mg-W-N system calculated using pymatgen [33]. B) Pseudobinary isopleth calculated with a 1:1::(Mg+W):N ratio (corresponding to the black dotted trace from A). The vertical axis shows the relative formation energy (\(\Delta H\)) at \(T=0\) K compared to the most stable point in the binary hulls at this cation:anion ratio (W\({}_{2}\)N\({}_{3}+\) W and Mg\({}_{3}\)N\({}_{2}+\frac{1}{2}\) N\({}_{2}\)). Several highly metastable ternary phases in the NREL matDB and Materials Project databases are omitted for clarity [30, 46]. ments [47]. The next lowest energy polymorph for MgWN\({}_{2}\) is RS, followed by h-BN, then WZ. In the case of the Mg\({}_{3}\)WN\({}_{4}\) stoichiometry, all three polymorphs (RS, h-BN, and WZ) are much closer to the hull than the metastable MgWN\({}_{2}\) polymorphs. A RS Mg\({}_{3}\)WN\({}_{4}\) structure (space group \(I4/mmm\)) is closest to the hull (+0.031 eV/atom above the hull), but the h-BN structure is only slightly higher in energy (+0.034 eV/atom above the hull). The WZ-derived phase of Mg\({}_{3}\)WN\({}_{4}\), with a desirable predicted bandgap of ca. 5 eV [1], is only slightly higher (+0.063 eV/atom above the hull). The DFT calculations shown in Figure 4 agree with our synthetic results. RL MgWN\({}_{2}\) was the only ternary phase formed by bulk synthesis, where high temperatures are sufficient to overcome kinetic barriers to produce thermodynamic ground-state phases. The formation of RS MgWN\({}_{2}\) by combinatorial sputtering is also consistent with the trend from calculations and with prior literature [9, 10]. In the case of physical vapor deposition methods (like sputtering), atoms arrive at the film surface in a disordered configuration (i.e., high effective temperature). Under these conditions, configurational entropy favors structures with a single type of cation site (like RS, h-BN, and WZ) and enthalpy penalizes structures with two or more distinct cation sites (like RL), as demonstrated for the Zn-Zr-N system [10]. In other words, RS is a disorder-tolerant structure that becomes energetically favorable under sputtering synthesis conditions. 
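The energy-above-hull values quoted above come from the phase-diagram construction referenced in the computational methods. A minimal sketch of that workflow with pymatgen is shown below; the formation energies are placeholder numbers chosen only to make the example run, not the GGA+U values behind Figure 4, and the set of entries is illustrative.

```python
from pymatgen.core import Composition
from pymatgen.analysis.phase_diagram import PhaseDiagram, PDEntry

# Placeholder energies (eV per formula unit), NOT the values used for Figure 4.
entries = [
    PDEntry(Composition("Mg"), 0.0),       # elemental reference states
    PDEntry(Composition("W"), 0.0),
    PDEntry(Composition("N2"), 0.0),
    PDEntry(Composition("Mg3N2"), -4.0),
    PDEntry(Composition("W2N3"), -2.0),
    PDEntry(Composition("MgWN2"), -3.5),   # e.g. the RL polymorph
    PDEntry(Composition("Mg3WN4"), -6.0),  # e.g. the RS polymorph
]

phase_diagram = PhaseDiagram(entries)
for entry in entries:
    e_hull = phase_diagram.get_e_above_hull(entry)
    print(f"{entry.composition.reduced_formula:>7s}: {e_hull:.3f} eV/atom above hull")
```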
While we do not consider disorder in the calculations shown in Figure 4B, the ordered RS phase is lower in energy than the ordered WZ or h-BN phases, suggesting Mg\({}^{2+}\) and W\({}^{4+}\) prefer octahedral coordination environments over tetrahedral (WZ) and trigonal bipyramidal (h-BN) environments. Lastly, oxygen substitution on nitrogen sites is common in nitrides [1, 18], and these materials are no exception. RBS measurements detect O/(N+O) = 15% for Mg\({}_{3}\)WN\({}_{4}\) with a h-BN structure (Figure S6). Auger electron spectroscopy measurements on Mg\({}_{3}\)WN\({}_{4}\) with a RS structure detect lower levels of oxygen (O/(N+O)\(<2\)%, Figure S8). These measurements suggest that oxygen incorporation may stabilize the h-BN structure over the RS structure for Mg\({}_{3}\)WN\({}_{4}\). Oxygen impurities affect the energy landscape but are not accounted for in these calculations. ## Electronic properties The polymorphic differences for MgWN2 should lead to different properties. To assess this possibility, we conducted electronic structure calculations on the cation-ordered RL polymorph and a cation-ordered model of RS MgWN2. As these electronic structure calculations cannot be conducted on disordered models, we created a cation-ordered RS MgWN2 phase based on the \(\gamma-\)LiFeO2 structure type (space group \(I4_{1}/amd\)). Calculated density of states (DoS) diagrams show that RS MgWN2 has states at the Fermi level and should exhibit metallic behavior, while RL MgWN2 is calculated to be a semiconductor with a 1.18 eV bandgap (Figure 5). This latter finding is consistent with the 0.7 eV bandgap calculated for RL MgMoN2 (albeit that phase was calculated without the use of hybrid functionals) [23], and with the band structure of MoS2, where Mo4+ takes a trigonal-prismatic coordination environment [48, 49]. Band structure diagrams are shown in Figures S9 and S10. This difference can be rationalized via a simple ligand field splitting model. The RL polymorph has the \(5d^{2}\) valence electrons fully occupying a \(d_{z^{2}}\) orbital (Figure 5B). The lowest unoccupied orbitals are degenerate \(d_{x^{2}-y^{2}}\) and \(d_{xy}\), suggesting a bandgap defined by \(d-d\) transitions. In contrast, the W4+ in the RS polymorph undergoes octahedral ligand field splitting. That leads to metallic conductivity via three degenerate orbitals (\(d_{xy}\), \(d_{xz}\), and \(d_{yz}\)) for the \(5d^{2}\) valence electrons (Figure 5D). Figure 5: Calculated density of states (DoS) for the A) RL MgWN2 and C) RS MgWN2 (calculated using the ordered structure with space group \(I4_{1}/amd\)). Ligand field splitting diagrams for W4+ in B) trigonal prismatic and D) octahedral environments. Such splitting is consistent with the calculated DoS, where W states make up a large fraction of the valence and conduction bands for RL MgWN2 and states near the Fermi level for RS MgWN2. Temperature-dependent resistivity measurements of thin films indicate semiconducting behavior for RL MgWN2 and RS Mg3WN4 (Figure 6A). Resistivity decreases with increasing temperature for both samples, although the trend for MgWN2 is significantly weaker than for Mg3WN4 (Figure S11). This trend suggests thermally activated charge transport. The semiconductivity of RS Mg3WN4 is consistent with the 6+ oxidation state for W in that phase (5\(d^{0}\) electron configuration). The change in slope near 230 K is an artefact of the instrument [50]. The resistivity of RL MgWN2 is low (ca. 0.001 \(\Omega\)-cm), suggesting a high level of doping and/or a small bandgap. The resistivity of RS Mg3WN4 is substantially larger, indicating a lower dopant content and/or a large bandgap. Figure 6: A) Temperature-dependent resistivity measurements of select samples of RS Mg3WN4 and RL MgWN2 (\(T_{\rm anneal}=900\)\({}^{\circ}\)C). B) Collinear four-point probe measurements of Mg-W-N library rows annealed at 600 \({}^{\circ}\)C and 900 \({}^{\circ}\)C. We were not able to reliably measure temperature-dependent resistivity of RS MgWN2, possibly owing to compositional gradients within the film or sample degradation from air exposure over time. Similar trends in conductivity were observed in the Zn-Mo-N system, where films of a wurtzite structure spanned low-resistivity ZnMoN2 to insulating Zn3MoN4 [17]. Electronic properties of these films are affected by film quality and composition. Room temperature resistivity measurements show that annealing at 900 \({}^{\circ}\)C decreases resistivity slightly across the whole composition range (compared to samples annealed at 600 \({}^{\circ}\)C), consistent with decreased grain-boundary resistance (Figure 6B). Additionally, oxygen is present in these films (Figures S6 and S8), which decreases resistivity by introducing charge carriers or increases resistivity by producing interfacial oxide layers (i.e., MgO). Figure 6 also shows that resistivity can change dramatically with composition. Resistivity (\(\rho\)) increases as a function of Mg content, with Mg-poor samples exhibiting \(\rho<0.01\)\(\Omega\)-cm and Mg-rich samples exhibiting \(\rho>100\)\(\Omega\)-cm. In sum, these trends show that the Mg-W-N system holds potential for tunable electronic properties, although future work should focus on higher quality films to bring that promise to fruition. ## 4 Conclusions We synthesized three new polymorphs of magnesium tungsten nitrides by bulk and film synthesis methods in a previously empty ternary phase space, and demonstrated how rapid thermal annealing can be a powerful tool to reconcile thermodynamic and non-equilibrium synthesis pathways. Combinatorial co-sputtering yielded cation-disordered rocksalt structures across a wide composition range including MgWN2, while samples near the Mg3WN4 stoichiometry crystallized in either a cation-disordered rocksalt or a cation-disordered hexagonal boron nitride structure. Rapid thermal annealing treatments of these combinatorial libraries show that rocksalt MgWN2 converts to a cation-ordered rocksaline structure at \(T_{\text{anneal}}\) = 900 \({}^{\circ}\)C, in a narrow composition window around the nominal stoichiometry. This cation-ordered MgWN2 phase also appeared in bulk ceramic syntheses and was predicted as the ground state structure by theoretical calculations, indicating that annealing of thin film libraries can potentially access the thermodynamically stable ternary nitrides. Density of states calculations suggest cation-disordered rocksalt MgWN2 should exhibit metallic properties while cation-ordered rocksaline MgWN2 should exhibit semiconducting behavior. Resistivity measurements show that rocksaline MgWN2 and rocksalt Mg3WN4 are semiconductors, but we were unable to experimentally confirm the metallic behavior of rocksalt MgWN2. Resistivity varies by six orders of magnitude as a function of Mg content. In sum, these findings expand the toolkit through which combinatorial co-sputtering experiments can explore the thermodynamic landscape in search of new nitride compounds.
## Author Contributions C.L.R., J.R.N., and A.Z. conceptualized the project. R.W.S. conducted GIWAXS measurements. C.A.K. conducted bulk syntheses and analysis with support from C.L.R. and J.R.N. K.H. conducted RBS measurements. C.L.R. and A.Z. conducted thin film co-sputtering experiments. A.Z. conducted annealing experiments. A.Z. and C.L.R. conducted electronic property measurements. J.R.N. conducted DFT calculations. C.L.R. wrote the manuscript with guidance from R.W.S., J.R.N., S.R.B., and A.Z., as well as with feedback from all other co-authors. ## Acknowledgements This work was performed in part at the National Renewable Energy Laboratory (NREL), operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of Energy (DOE), under Contract No. DE-AC36-08GO28308. Funding was provided by the Office of Science (SC), Office of Basic Energy Sciences (BES), Materials Chemistry program, as a part of the Early Career Award "Kinetic Synthesis of Metastable Nitrides" (thin film studies, work conducted at NREL). Bulk syntheses were supported by the National Science Foundation (DMR-1653863, work conducted at Colorado State University). C.L.R. acknowledges support from the DOE Science Graduate Research Program (SCGSR). R.W.S. acknowledges support from the Director's Fellowship within NREL's Laboratory Directed Research and Development program. Use of the Stanford Synchrotron Radiation Lightsource, SLAC National Accelerator Laboratory, is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Contract No. DE-AC02-76SF00515. Thanks to Nicholas Strange for on-site support with GIWAXS measurements and to Laura Schelhas for support analyzing the data. We thank the Analytical Resources Core at Colorado State University for instrument access and training (RRID: SCR_021758). The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government.
2304.14228
Optical variability in Quasars: Scalings with black hole mass and Eddington ratio depend on the observed timescales
Quasar emission is highly variable, and this variability gives us clues to understand the accretion process onto supermassive black holes. We can expect variability properties to correlate with the main physical properties of the accreting black hole, i.e., its mass and accretion rate. It has been established that the relative amplitude of variability anti-correlates with the accretion rate. The dependence of the variance on black hole mass has remained elusive, and contradictory results, including positive, negative, or no correlation, have been reported. In this work, we show that the key to these contradictions lies in the timescales of variability studied (e.g., the length of the light curves available). By isolating the variance on different timescales as well as mass and accretion rate bins, we show that there is indeed a negative correlation between black hole mass and variance and that this anti-correlation is stronger for shorter timescale fluctuations. The behavior can be explained in terms of a universal variability power spectrum for all quasars, resembling a broken power law where the variance is constant at low temporal frequencies and then drops continuously for frequencies higher than a characteristic frequency $f_b$, where $f_b$ correlates with the black hole mass. Furthermore, to explain all the variance results presented here, not only the normalization of this power spectrum must anti-correlate with the accretion rate, but also the shape of the power spectra at short timescales must depend on this parameter as well.
Patricia Arévalo, Paulina Lira, Paula Sánchez-Sáez, Priyanjali Patel, Elena López-Navas, Eugene Churazov, Lorena Hernández-García
2023-04-27T14:38:14Z
http://arxiv.org/abs/2304.14228v2
Optical variability in Quasars: Scalings with black hole mass and Eddington ratio depend on the observed timescales ###### Abstract Quasars emission is highly variable, and this variability gives us clues to understand the accretion process onto supermassive black holes. We can expect variability properties to correlate with the main physical properties of the accreting black hole, i.e., its mass and accretion rate. It has been established that the relative amplitude of variability anti-correlates with the accretion rate.The dependence of the variance on black hole mass has remained elusive, and contradicting results, including positive, negative, or no correlation, have been reported. In this work, we show that the key to these contradictions lies in the timescales of variability studied (e.g., the length of the light curves available). By isolating the variance on different timescales as well as mass and accretion rate bins we show that there is indeed a _negative_ correlation between black hole mass and variance and that this anti-correlation is stronger for shorter timescale fluctuations. The behavior can be explained in terms of a universal variability power spectrum for all quasars, resembling a broken power law where the variance is constant at low temporal frequencies and then drops continuously for frequencies higher than a characteristic frequency \(f_{b}\), where \(f_{b}\) correlates with the black hole mass. Furthermore, to explain all the variance results presented here, not only the normalization of this power spectrum must anti-correlate with the accretion rate, but also the _shape_ of the power spectra at short timescales must depend on this parameter as well. keywords: keyword1 - keyword2 - keyword3 ## 1 Introduction Establishing a statistically significant correlation between the observed brightness variability and the physical properties of quasars, such as the mass of the supermassive black hole (M) and accretion rate normalized to the Eddington limit (\(\rm R_{Edd}\)), is the goal of many current studies. With the arrival of new and revolutionary large-scale surveys in time-domain astronomy, these potential correlations could allow the characterization of millions of supermassive black holes, which has been unfeasible until now. Early works found a clear anti-correlation between luminosity and variance in the brightness fluctuations of quasars (Angione & Smith, 1972; Hook et al., 1994; Cristiani et al., 1997; Vanden Berk et al., 2004). However, as luminosity is the product of the black hole mass and normalized accretion rate (\(\rm R_{Edd}\)), the dependence of variability on these more intrinsic properties remained hidden. More recently, a statistically significant anti-correlation between variance and accretion rate has emerged (Kelly et al., 2009; MacLeod et al., 2010; Zuo et al., 2012; Kelly et al., 2013; Simm et al., 2016; Rakshit & Stalin, 2017; Sanchez-Saez et al., 2018; Lu et al., 2019), but whether there is a correlation with black hole mass has remained a matter of debate. Positive correlations were found in Wold et al. (2007); Wilhite et al. (2008); MacLeod et al. (2010); Lu et al. (2019), while negative correlations were found by Kelly et al. (2009, 2013) and no or unclear correlations were reported by Zuo et al. (2012); Simm et al. (2016); Rakshit & Stalin (2017) and Li et al. (2018). These conflicting results can be reconciled when a well-defined sample of quasars is analyzed considering the different timescales of variation. 
Our first step in this study was to constrain our sample to a narrow redshift range so that all analyses are performed at the same rest-frame wavelength, and the same intrinsic emission of the quasar is captured. Secondly, all quasars have homogeneous estimations of their physical properties, mass (M), and accretion rate normalized to the Eddington limit (\(\rm R_{Edd}\)). Third, the variance analysis was done by isolating the variations on different timescales, ranging from 30 to 300 days in the quasar rest-frame, and the variability analysis was conducted separately for each of them. This method is similar to measuring the variance with light curves of different lengths, which can only capture variations on timescales shorter than the length of the light curve, and with different binnings, which can only capture variations on timescales longer than the width of the time bins. Fourth, we sampled carefully defined bins in M _and_ R\({}_{\rm Edd}\) so that both properties are disentangled, and their correlation with the variance determined at different timescales could be assessed separately. ## 2 Sample selection We selected quasars with optical spectral classification from the catalog of Rakshit et al. (2020). These authors performed a homogeneous analysis of all quasar spectra observed by the SDSS and reported, among other many quantities, black hole masses and Eddington ratios for the majority of their half-million sources. The Eddington ratio R\({}_{\rm Edd}\) has been estimated by taking the ratio of \(L_{\rm bol}\) to Eddington luminosity \(L_{\rm Edd}=1.3\times 10^{38}(M_{\rm BH}/M_{\odot})\) erg s\({}^{-1}\) to measure the accretion rate. The amplitude of variability depends on the rest-frame wavelength of the emission that is captured by the light curves used (e.g. Sanchez-Saez et al., 2018). If a sample contains quasars at different redshifts, this dependence needs to be accounted for before searching for the dependence of the variance on other parameters. In order to minimize the effects of different rest-frame wavelengths, we selected only quasars for a narrow redshift bin \(z=0.6-0.7\). The median redshift is high enough to include a large number of sources and low enough to allow the H\(\beta\) line to fall well within the SDSS spectral range. With this requirement, all selected sources have H\(\beta\)-derived masses, which is the best-calibrated single-epoch mass estimator. Further requirements include (i) a reported (statistical) error on the black hole mass less than 0.2 dex, which allows us to measure variability amplitudes as a function of mass and accretion rate for fine mass bins, and (ii) a g-band magnitude \(g<20.5\) in the SDSS data release 12 photometric catalog. This selection returned 5881 objects. Other spectral considerations regarding the effect of emission lines in the observed range are discussed in Appendix A. Optical light curves were extracted for these 5881 selected quasars from the Zwicky Transient Facility (ZTF Masci et al., 2019) Data Release 14, obtaining data for 5651 objects. These light curves cover the period March 2018 to September 2022 and have approximately 4-day cadence, with yearly gaps, although some regions of the sky have been observed much more frequently. In order to homogenize the light curves of different objects, we require observations to be taken at least 1 day apart, retaining only the first observation in a given night and discarding the rest. 
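A compact sketch of these selection steps is given below. The catalog column names (and the file name) are hypothetical stand-ins for the Rakshit et al. (2020) quantities; only the thresholds follow the text, with \(L_{\rm Edd}=1.3\times10^{38}\,(M_{\rm BH}/M_{\odot})\) erg s\(^{-1}\) as defined above, and the last helper restates the per-night de-duplication of the ZTF light curves.

```python
import pandas as pd

# Hypothetical column names standing in for the Rakshit et al. (2020) catalog.
cat = pd.read_csv("sdss_quasar_properties.csv")

# Eddington ratio from the bolometric luminosity and the H-beta black hole mass,
# with L_Edd = 1.3e38 (M_BH / M_sun) erg/s.
cat["r_edd"] = 10 ** cat["log_lbol"] / (1.3e38 * 10 ** cat["log_mbh"])

selected = cat[
    cat["z"].between(0.6, 0.7)            # narrow redshift bin
    & (cat["log_mbh_err"] < 0.2)          # statistical mass error < 0.2 dex
    & (cat["g_mag"] < 20.5)               # SDSS DR12 photometric cut
]                                          # 5881 objects in the text

def one_per_night(lc: pd.DataFrame) -> pd.DataFrame:
    """Keep only the first observation of each night of one ZTF light curve."""
    lc = lc.sort_values("mjd").copy()
    lc["night"] = lc["mjd"].astype(int)    # integer MJD labels the night
    return lc.drop_duplicates(subset="night", keep="first")
```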
This procedure produces a slightly more homogeneous sample of light curves for different objects (in terms of cadence), as most objects only have one observation in a given night, while a few have many observations in a few nights. We chose light curves in the \(g\) band because the variations are stronger here than in the other available band (\(r\)), as expected from the anti-correlation between variance and restframe emitted wavelength seen in quasars (e.g. Sanchez-Saez et al., 2018). The \(g\) band is also less contaminated by the star light of the host galaxies. The photometric quality of the individual observations was controlled using the _limitmag_ values present in the DR14 light curves. We retained only observations with _limitmag_\(>20\), which ensures that objects with \(g\leq 20\) could be detected at least at the \(5-\sigma\) level in all epochs. We note that _limitmag_ is a property of the observation, not of the object, so this process only removes observing nights with bad conditions, regardless of the brightness of each object. This filtering removed 5-10% of the epochs in each light curve. We also selected only observations with processing quality flag _catflags_\(=0\). The ZTF light curves are composed of observations performed on different CCDs of the detector, which might have cross calibration offsets and in most light curves the different CCDs are approximately alternated. We circumvented the cross calibration uncertainties by constructing light curves for individual CCDs. This procedure results in multiple light curves for some objects. We calculated the average flux of the filtered ZTF light curves and removed objects whose corresponding \(g\)-band magnitude was greater than 20. We further restricted the sample to include only light curves at least 900 days long in the observer frame and with at least 90 data points in the single-CCD, quality-filtered data sets. This sampling allows us to measure fluctuations on timescales of several tens to hundreds of days, which are similar to previous quasar variability studies (see references in the Introduction). The final sample contains 4770 individual objects; for 616 objects there were two acceptable light curves; for 21 objects there were three or more light curves, resulting in 5433 valid light curves. Multiple lightcurves for single sources were kept for the analysis considering that they represent a different realization of the same variability process, with a different noise pattern. Therefore including both in the calculation of median values results in better estimates of the variance. The average flux of each light curve was subtracted and the resulting zero-mean light curves were further normalized by their respective (pre-subtraction) mean so that the amplitude of variations and the variance can be directly compared between objects of different flux levels. The valid light curves have a mean (median) length in the observer frame of 1544 (1547) days with a standard deviation of 60 days and contain a mean (median) of 245 (233) good data points with a standard deviation of 90 data points. ## 3 Estimation of the variance We isolated variations on different timescales using the Mexican Hat filter (Arevalo et al., 2012), which is ideally suited to deal with uneven sampling light curves with gaps. 
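Before describing the filter itself, the remaining light-curve preparation steps of the previous section can be summarized in a short sketch. Column names ('mjd', 'flux', 'limitmag', 'catflags', 'ccdid') are illustrative rather than the exact ZTF DR14 schema, the nightly de-duplication was already shown above, and the mean-magnitude cut (\(g\leq 20\)) is omitted for brevity.

```python
import pandas as pd

def clean_and_split(lc: pd.DataFrame) -> dict:
    """Quality-filter a nightly-deduplicated light curve and split it by CCD."""
    # keep epochs with good observing conditions and clean processing flags
    lc = lc[(lc["limitmag"] > 20) & (lc["catflags"] == 0)]

    curves = {}
    for ccd, grp in lc.groupby("ccdid"):                       # one curve per CCD
        long_enough = grp["mjd"].max() - grp["mjd"].min() >= 900   # days
        well_sampled = len(grp) >= 90                              # epochs
        if long_enough and well_sampled:
            mean = grp["flux"].mean()
            # zero-mean, mean-normalized flux so variances are comparable
            curves[ccd] = grp.assign(norm_flux=(grp["flux"] - mean) / mean)
    return curves
```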
In short, the Mexican Hat filter convolves each light curve with two Gaussian kernels of comparable widths, takes the difference of the convolved light curves, corrects for the effects of the sampling pattern, and calculates the variance of the filtered light curve. For a given value of Gaussian width \(\sigma\), the filter applied on the power spectrum peaks at \(k_{p}=0.225/\sigma\) and has a width of \(1.16k_{p}\). This relatively broad filter in frequency space has the net effect of averaging together the power of a few independent frequencies, reducing the scatter in power from independent frequency bins, expected from red noise variability processes. This reduction comes at the cost of limiting the spectral resolution. The filtered powers are estimates of the normalized power density and have units of days. We estimated the observational noise contribution to the filtered power from the reported errors on the flux, as described below, and subtracted this value from the measured power before plotting and fitting. Finally, the normalized power estimates were converted into dimensionless variance estimates by multiplying each one by the peak frequency \(k_{p}\) of each frequency filter. We stress that the variability timescales studied here do not necessarily correspond to characteristic timescales of the quasars; they are simply a few selected timescales at which we can reliably estimate a band-limited variance. The filtering process described above is akin to cutting the light curves into segments slightly longer than the timescale studied, therefore removing fluctuations on longer timescales, and binning and averaging the data points on bins slightly shorter than the selected timescale, thereby removing faster variations. The published correlations between variance and mass cited above use either the total variance of the light curves, which is normally dominated by the variations on the longest timescales available (i.e. defined by the length of the light curves), or an estimate of the variance at a given timescale, for example by evaluating the structure function (e.g. de Vries et al., 2005) at a single value of the time delay \(\tau\). The present analysis studies four separate timescales, which can be interpreted as studying the correlations we would find if the objects had been observed by four monitoring campaigns of different lengths and samplings, or if the structure function were evaluated at four different values of \(\tau\). The contribution of observational noise to the filtered variance was estimated through simulated light curves with additional noise as described in Appendix B. ## 4 Results The timescales selected for the detailed study satisfy two criteria: they are covered at least twice in the length of the light curve and are sufficiently separated in frequency space to produce independent variance estimates, considering the width of the Mexican Hat filter \(\delta k\sim k\). For \(z=0.65\), this results in a longest timescale of about \(T=1500/2/(1+z)=454\) days in the rest-frame of the quasar. To be conservative, we limited the longest studied timescale to 300 days in the quasar rest-frame and chose a separation by a factor \(\sim 2\) for the other timescales: 150, 75, and 30 days. The variance as a function of black hole mass is plotted in Fig. 1 for two of the four different timescales probed-- left: 300 days, right: 30 days. Variations on timescales of 150 and 75 days show an intermediate behavior.
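A schematic, self-contained version of the band-variance estimator described in the previous section is given below. It follows the verbal description (difference of two Gaussian-smoothed versions of the unevenly sampled light curve, with the smoothing weights playing the role of the convolved sampling window), but the kernel-width ratio and the power normalization of the published Mexican Hat filter are not reproduced, and the observational-noise subtraction is omitted.

```python
import numpy as np

def mexican_hat_variance(t, f, timescale, width_ratio=1.05):
    """Band-limited variance around a given timescale (days), schematic only.

    t, f : times (days) and mean-normalized fluxes of one light curve.
    sigma is chosen so that the filter peak k_p = 0.225/sigma equals 1/timescale.
    """
    sigma1 = 0.225 * timescale
    sigma2 = sigma1 * width_ratio        # second kernel of "comparable width"

    def gaussian_smooth(sigma):
        # Gaussian-weighted mean at each epoch; dividing by the summed weights
        # corrects for the uneven sampling pattern (gaps included).
        w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)
        return (w * f[None, :]).sum(axis=1) / w.sum(axis=1)

    filtered = gaussian_smooth(sigma1) - gaussian_smooth(sigma2)
    # proportional to the band power at k_p; the published method rescales this
    # into a normalized power density and multiplies by k_p for a dimensionless
    # variance, a normalization not reproduced here.
    return np.var(filtered)

# Toy usage on an irregularly sampled, purely random light curve
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1500, 240))
f = rng.normal(0.0, 0.1, t.size)
print(mexican_hat_variance(t, f, timescale=30.0))
```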
All timescales refer to the quasar rest-frame, i.e., \(\rm T_{\rm rest}\)=\(\rm T_{\rm obs}/(1+z)\). To explore the dependence of variability on quasar parameters, we split the sample according to their mass and accretion rate. We grouped the quasars in bins of width 0.33 dex in both M and \(\rm R_{\rm Edd}\), starting from \(\rm log(R_{\rm Edd})\) = -2 and from \(\rm log(M/M\odot)\) = 7.5. For each M-\(\rm R_{\rm Edd}\) bin, we calculated the median variance, median M, and median \(\rm R_{\rm Edd}\). The standard errors on these medians were estimated using the bootstrapping method, calculating the standard deviation of the medians of 1000 random re-samples for each M-\(\rm R_{\rm Edd}\) bin. The variance median was calculated as the median value of the median variances of these bootstrapping samples. We discarded bins with less than 10 quasars for all the analyses described below. These median variances are plotted as a function of black hole mass in Fig. 2 color-coded by the median \(\rm R_{\rm Edd}\) of each bin, and as a function of \(\rm R_{\rm Edd}\) in Fig.3, color-coded by the median M of each bin. The different panels correspond to the four different timescales of variability studied. ### Correlations of variance with M, \(\rm R_{\rm Edd}\) and Bolometric Luminosity To evaluate the significance of the correlations between _unbinned_ variance and black hole mass M, we calculated the Spearman rank coefficients (\(\rho\)), which vary between -1 and 1 with 0 implying no correlation, and their \(p\)-values, which roughly indicate the probability of an uncorrelated system producing data sets that have the same Spearman correlation coefficient. A Spearman correlation coefficient \(\rho\) = -1 implies an exact monotonic negative relationship. This analysis was repeated for sub-samples constrained to narrower ranges in \(\rm R_{\rm Edd}\) in order to separate the dependence of variance on mass from the dependence of variance on accretion rate. The ranges in \(\rm R_{\rm Edd}\) with sufficient quasars and the resulting values of \(\rho\) and \(p\) are summarized in Table 1. The remaining quasars are in ranges of \(\rm R_{\rm Edd}\) with too few points to make a significant correlation analysis. To estimate the relation between variance and mass independently of the \(\rm R_{\rm Edd}\), we fitted the logarithm of the median variance of each M-\(\rm R_{\rm Edd}\) bin described above, assuming a linear form \(\rm log(var)=a\times\rm log(M/M_{8.5})+b\) using Orthogonal Distance Regression (ODR algorithm implemented in SciPy) and the errors on both axes, separately for each range in \(\rm R_{\rm Edd}\). The best fitting values of the parameters \(a\) and \(b\) and their 1\(-\sigma\) errors are tabulated in Table 2. The relation between variance and \(\rm R_{\rm Edd}\) was similarly modeled separately for each range in M as \(\rm log(var)=a\times\rm log(R_{\rm Edd}/0.1)+b\). The results of these fits are tabulated in Table 3. These best-fitting linear relations to the binned data are over-plotted to the data points in Fig. 3. We also computed the median variance in logarithmic bins of the bolometric luminosity, \(\rm L_{\rm bol}\), for \(\rm L_{\rm bol}\) between \(10^{45}\) and \(10^{46.8}\) erg s\({}^{-1}\), for the four timescales of variability studied. These median variances, together with the best-fitting linear models, \(\rm log(variance)\)= \(a\times\rm(log(L_{\rm Bol})-45.8)+b\) are plotted in Fig. 4. 
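Two of the ingredients just described, the bootstrap errors on the bin medians and the orthogonal-distance-regression fits, can be sketched as follows. The bin construction is omitted for brevity, the starting values of the fit are arbitrary, and the arrays at the bottom are fabricated numbers used only to show the calling convention.

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(0)

def median_and_error(values, n_boot=1000):
    """Median of one M-R_Edd bin and its bootstrap standard error."""
    meds = np.array([np.median(rng.choice(values, size=len(values), replace=True))
                     for _ in range(n_boot)])
    return np.median(meds), meds.std()

def fit_variance_vs_mass(log_m, log_var, log_m_err, log_var_err, pivot=8.5):
    """ODR fit of log(variance) = a * (log M - pivot) + b, as in Table 2."""
    model = odr.Model(lambda beta, x: beta[0] * (x - pivot) + beta[1])
    data = odr.RealData(log_m, log_var, sx=log_m_err, sy=log_var_err)
    output = odr.ODR(data, model, beta0=[-0.5, -3.0]).run()
    return output.beta, output.sd_beta          # (a, b) and their 1-sigma errors

# Illustrative usage on fabricated bin medians
log_m = np.array([7.9, 8.2, 8.5, 8.8, 9.1])
log_var = -3.2 - 0.6 * (log_m - 8.5) + rng.normal(0, 0.03, log_m.size)
(a, b), (a_err, b_err) = fit_variance_vs_mass(log_m, log_var,
                                              0.05 * np.ones(5), 0.05 * np.ones(5))
print(f"a = {a:.2f} +/- {a_err:.2f}, b = {b:.2f} +/- {b_err:.2f}")
```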
The best-fitting values of the linear parameters \(a\) and \(b\) are summarized in Table 4. ## 5 Discussion The first striking fact to notice when looking at the variance of fluctuations on different timescales as a function of black hole mass M (see Fig. 1), is that the direction of the correlation changes depending on which timescale of variability is considered. As seen in Fig. 1 for variations on 300 days there is only a weak, positive correlation between M and variance. When only shorter timescale variations are considered, however, the relation becomes negative. The Spearman rank coefficients and related p-values for the full sample (summarized in Table 1) corroborate this observation, returning a weak but significant positive correlation for timescales of 300 days, and very weak correlations at 150 days and 75 days, becoming negative and more significant for 30 days fluctuations. It is important to note that, since both high mass and high accretion rate quasars are rare when compared to lower masses and lower accretion rates (e.g. Aird et al., 2017; Kelly & Merloni, 2011), flux-limited samples of quasars such as this one, have a built-in anti-correlation between both parameters. This happens because the numerous quasars with both low mass and low accretion rates are too dim to be detected, allowing only the high accretion rate low-mass objects to be included. On the other hand, if the mass is high then both high and low accretion rate objects can be detected. However, objects with both high mass and high accretion rate are rare, so the high mass quasars in the sample have on average lower accretion rates than the low mass objects. This fact is a problem when studying correlations of variance with mass because the variance is known to depend on the accretion rate. To circumvent this problem, it is useful to split the sample into a grid based on both parameters (e.g. Zuo et al., 2012). The correlations between variance and mass can then be studied, for example, on sub-samples with a narrow range in R\({}_{\rm Edd}\) and a broad range in M. By selecting only quasars within the narrow ranges in R\({}_{\rm Edd}\) noted in Table 1 the picture changes: when the dependence on R\({}_{\rm Edd}\) is controlled, _the relation between the variance and M is always negative, becoming stronger and more significant for fluctuations on shorter timescales_ e.g., as measured with shorter light curves. For all timescales, it can be seen that the negative correlation is stronger for each sub-sample than for the full sample. This is due to the fact that for the full sample the anti-correlations between R\({}_{\rm Edd}\) and both M and variance tend to flip the anti-correlation between variance and M. This effect is more easily seen in Fig. 1 where the markers represent individual variance measurements color-coded by R\({}_{\rm Edd}\) ranges. 
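The "narrow R\({}_{\rm Edd}\) range" approach described above amounts to computing the rank correlation within accretion-rate slices rather than for the full sample. A self-contained sketch is shown below; the arrays are toy data with built-in anti-correlations (so the printed numbers are not those of Table 1), and the bin edges only roughly follow the ranges used in the text.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Toy stand-ins for the per-light-curve catalog values (NOT the real sample).
n = 5000
log_m = rng.uniform(7.5, 9.5, n)
log_redd = rng.uniform(-2.0, 0.0, n)
variance = 10 ** (-3.0 - 0.5 * (log_m - 8.5) - 0.6 * (log_redd + 1.0)
                  + 0.3 * rng.normal(size=n))

# Full-sample correlation, then the same correlation in narrow R_Edd slices.
print("full sample:", spearmanr(log_m, variance))
edges = [-2.0, -1.7, -1.3, -1.0, -0.7, -0.3, 0.0]
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (log_redd >= lo) & (log_redd < hi)
    if sel.sum() < 10:                         # minimum bin population
        continue
    rho, p = spearmanr(log_m[sel], variance[sel])
    print(f"{lo:+.1f} < log(R_Edd) < {hi:+.1f}: rho = {rho:+.2f}, p = {p:.1e}, N = {sel.sum()}")
```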
There is a noticeable anti-correlation between \begin{table} \begin{tabular}{c c c c c c} \hline & \multicolumn{3}{c}{Spearman rank coefficient (p-value) between variance and M} \\ \hline & 300 days & 150 days & 75 days & 30 days & N \\ \hline \hline Full sample & 0.15 (1.4e-29) & 0.04(1e-3) & -0.07(5e-7) & -0.14(2e-26) & 5433 \\ \hline -2.0\(<\) log(R\({}_{\rm Edd}\))\(<\) -1.7 & -0.19(0.007) & -0.20(4e-3) & -0.35(2e-7) & -0.24 (6e-4) & 210 \\ -1.7\(<\) log(R\({}_{\rm Edd}\))\(<\) -1.3 & -0.09(0.006) & -0.18(9e-9) & -0.35(1e-30) & -0.33 (4e-27) & 1018 \\ -1.3\(<\) log(R\({}_{\rm Edd}\))\(<\) -1.0 & -0.10(1e-4) & -0.23(1e-18) & -0.41(7e-62) & -0.36(1e-45) & 1462 \\ -1.0\(<\) log(R\({}_{\rm Edd}\))\(<\) -0.7 & -0.08(1e-3) & -0.23(1e-18) & -0.41(6e-61) & -0.36(8e-45) & 1462 \\ -0.7\(<\) log(R\({}_{\rm Edd}\))\(<\) -0.3 & -0.04(0.17) & -0.24(1e-15) & -0.42(6e-47) & -0.38(1e-38) & 1062 \\ -0.3\(<\) log(R\({}_{\rm Edd}\))\(<\) 0.0 & -0.04(0.59) & -0.20(1e-4) & -0.55(8e-17) & -0.50(2e-12) & 174 \\ \hline \end{tabular} \end{table} Table 1: Spearman rank coefficient and associated p-value between the variance and M, for variances measured at different variability timescales. In the full sample, the Spearman coefficients are close to zero (i.e., almost no correlation) but due to the large number of points, the largest of these correlations are significant. Importantly, the correlation coefficients are positive for long timescales and negative for short timescales. Separating the data according to their log(R\({}_{\rm Edd}\)) results in negative correlation coefficients for all timescales, with stronger and more significant correlations at shorter timescales. The last column shows the number of light curves included in each range in R\({}_{\rm Edd}\). Figure 1: Variance as a function of black hole mass (M): the panels show this relation for the same quasars but measuring the variance at two different timescales. Markers show individual variance measurements, with colors assigned by ranges of R\({}_{\rm Edd}\) as noted in the legend. At the longest timescales, the correlation between the variance and mass is very shallow so the sample selection effects (anti-correlation between M and R\({}_{\rm Edd}\)) and the known anti-correlation between R\({}_{\rm Edd}\) and variance lead to the spurious, weak positive correlation between variance and M. At shorter timescales the anticorrelation between variance and M becomes evident and is offset for different values of R\({}_{\rm Edd}\). variance and M, but this is only apparent once R\({}_{\rm Edd}\) is considered, for example, by focusing on a fixed R\({}_{\rm Edd}\) (one color in this plot). For shorter variability timescales --i.e. right plot in Fig. 1-- the anti-correlation between variance and M is stronger. The steepening of the anti-correlation between variance and M can also be seen in the binned variances by comparing the panels in Figs. 2, which plot these relations for the four timescales of variability studied. When looking at variations on shorter timescales, for example, by using shorter light curves, the slope of the relation between variance and both M and R\({}_{\rm Edd}\) becomes steeper. These changes in slope are significant, as can be seen in the linear fits to these relations with parameters and errors given in Tables 2 and 3. 
### Implications for the power spectrum Light curves can be characterized through the power spectrum, which quantifies the amount of variance found on different timescales \(t\) or, equivalently, on different frequencies \(f=1/t\). In the past, attempts have been made to fit quasars optical power spectra with a damped random walk model (DRW), which produces a flat, \(P(f)=A\) power spectrum at low temporal frequencies \(f\), steepening to \(P(f)=A(f/f_{b})^{-2}\) above a characteristic frequency \(f_{b}\). Even if the shape is not exactly correct, this picture is useful to interpret the variance results: first, if all quasars have essentially the same power spectrum, differing only on the break timescales, the plots in Fig. 3 can be qualitatively explained if the break frequency decreases with increasing mass so that, at a given variability timescale, the power spectra of larger black hole masses are probed further above the break, where the power is increasingly lower than \(A\). In this sense, a positive relation between M and the characteristic timescale of DRW models fit to individual quasars was recently reported (Burke et al., 2021). This scenario is schematically presented in the left panel in Fig. 5. In the present case, the break timescale (\(T_{b}=1/f_{b}\)) would be about 300 days or shorter for all the masses in the sample since the mass dependence starts to appear more strongly on shorter timescales. An exception can be perhaps the highest mass bin, where even 300 days is not long enough to reach the Figure 2: Variance as a function of black hole mass, for sub-samples restricted to narrow ranges in accretion rate, as labeled in the plots, for variations on four different timescales, top left to bottom right: 300, 150, 75, and 30 days. The dependence of variance on M is very weak for the long term fluctuations but is clear for variations on timescales of 150 days and becomes stronger for shorter timescale variations. For all timescales, the variance is larger for lower R\({}_{\rm Edd}\), which produces the large spread in variance at a given mass for the sample as a whole. All the error bars on the median variance of the binned data (markers) were calculated as the root-mean-squared scatter of the medians obtained by bootstrapping using 1000 re-samples per M-R\({}_{\rm Edd}\) bin. flat part of the power spectrum and the measured variance is still below A. In other words, for fluctuations on timescales of 300 days there is almost no mass dependence of the variance because, for most of the masses, this timescale is longer than the characteristic (break) timescale. As a result, the variance on this timescale is probing the flat part of the power spectrum, which would be the same for all masses in a given R\({}_{\rm Edd}\) bin. For the range of masses and accretion rates used here, the variance anti-correlates with mass in all accretion rate bins. The relation between the logarithms of both variance and mass appears linear in the parameter range studied and can be modeled as a linear relation as log(variance)= \(a\times\log\rm{M/Ms}+b\), where \(\rm{Ms}=10^{8}\rm{M_{\odot}}\). The slope of these relations is small or consistent with 0 for the longest timescale fluctuations probed (300 days) and gets consistently steeper for shorter timescale variations, as can be seen comparing the panels in Fig. 2, and the fitted values of \(a\) in Table 2. 
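The qualitative argument above can be checked numerically with a toy bending power spectrum. The sketch below uses a smooth DRW-like form \(P(f)=A/[1+(f/f_{b})^{2}]\), a break timescale assumed to scale linearly with mass (with arbitrary normalization), and a top-hat stand-in for the broad variance filter; none of these choices are fits to the data, they only illustrate why the mass dependence of the band variance grows towards shorter timescales.

```python
import numpy as np

def band_variance(amp, f_break, timescale, rel_width=1.16):
    """Variance collected by a broad filter centred on k_p = 1/timescale for
    P(f) = amp / (1 + (f/f_break)**2). The filter is approximated by a top-hat
    of fractional width rel_width (cf. the ~1.16 k_p filter width in the text)."""
    k_p = 1.0 / timescale
    f = np.linspace(k_p * (1 - rel_width / 2), k_p * (1 + rel_width / 2), 1000)
    power = amp / (1.0 + (f / f_break) ** 2)
    return power.mean() * (f[-1] - f[0])        # simple rectangle-rule integral

# Assumed (illustrative) scaling: break timescale proportional to black hole mass.
for log_m in (8.0, 8.5, 9.0):
    t_break = 100.0 * 10 ** (log_m - 8.5)       # days, arbitrary normalization
    v_long = band_variance(1.0, 1.0 / t_break, timescale=300.0)
    v_short = band_variance(1.0, 1.0 / t_break, timescale=30.0)
    print(f"log M = {log_m}: var(300 d) = {v_long:.2e}, var(30 d) = {v_short:.2e}")
```

Running this toy model shows the band variance at 300 days changing only mildly across the mass range while the 30-day band variance drops by more than an order of magnitude, mirroring the behavior of Fig. 2.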
This behavior reconciles two apparently conflicting results: the independence of variance with mass at about one year timescales, even when controlling for different accretion rates (Wilhite et al., 2008; Sanchez-Saez et al., 2018), with the mass dependence of a characteristic timescale in the variability (Burke et al., 2021). The dependence of variance on mass only appears for variations on short timescales (i.e., shorter than the characteristic timescale of the power spectrum) and is small or negligible for longer timescales. The power spectrum must depend on R\({}_{\rm Edd}\) as well as the M, or all the different color lines in the plots in Fig. 2 would fall on top of each other. The simplest interpretation would be that the normalization of the power spectrum \(A\) increases with decreasing R\({}_{\rm Edd}\). This conclusion can be reached if only the total variance of the light curves or the limiting value of the long timescale power spectrum is calculated, as opposed to this timescale-dependent approach. An anti-correlation between power spectral normalization \(A\) and R\({}_{\rm Edd}\) is, however, not sufficient to explain why the dependence of variance on R\({}_{\rm Edd}\) differs for different timescales, as evident in the different panels in Fig. 3 If that were the case then the power spectra of objects of the same mass and different accretion rates would simply be shifted vertically with respect to each other. By construction, the dependence Figure 3: Variance as a function of R\({}_{\rm Edd}\), for sub-samples restricted to narrow ranges in mass, as labeled in the plots, for variations on four different timescales, top left to bottom right: 300, 150, 75, and 30 days. The dependence of variance on R\({}_{\rm Edd}\) is significant on all variability timescales. The separation between mass bins increases towards shorter timescales, as expected from Fig.2. Interestingly, the slope of the relationship between variance and R\({}_{\rm Edd}\) changes with timescale, becoming steeper for shorter timescales. of variance on \(\rm R_{Edd}\) (for a fixed M) would be independent of the timescale, i.e. it would have the same slope, fixed by the overall amplitude as a function of \(\rm R_{Edd}\)\(A(\rm R_{Edd})\), in all the panels in Fig. 3. A linear fit to the relations between variance and accretion rate, as log(variance)\(=a\times\rm log(\rm R_{Edd}/0.1)\)\(+\)\(b\) results in the best-fitting \(a\) and \(b\) parameters listed in Table 3. For each bin in mass, the slope of the relation between variance and \(\rm R_{Edd}\) becomes steeper for shorter variability timescales. For the two most populated ranges in M (\(8.2<\)log(M)\(<8.5\) and \(8.5<\)log(M)\(<8.8\), the slopes change from \(a=-0.53\pm 0.04\) and \(a=-0.61\pm 0.06\) at timescales of 300 days to \(a=-0.95\pm 0.14\) and \(a=-0.96\pm 0.08\) for timescales of 30 days (see Table 3). This small but significant steepening proves that the power spectral _shape_ must depend on \(\rm R_{Edd}\) as well as M. This conclusion is independent of the power-spectral model considered. 
Returning to the bending power law model for the power spectrum, the dependence of variance on accretion rate can be achieved if either the high-frequency slope is steeper for higher accretion rates or if the break timescale \(T_{b}\) scales not \begin{table} \begin{tabular}{c c c c} timescale[d] & \(a\) & \(b\) \\ \hline \hline 300 & -0.33 & \(\pm\)0.03 & -3.01 & \(\pm\)0.01 \\ 150 & -0.53 & \(\pm\)0.02 & -3.25 & \(\pm\)0.01 \\ 75 & -0.76 & \(\pm\)0.01 & -3.48 & \(\pm\)0.01 \\ 30 & -1.01 & \(\pm\)0.02 & -3.95 & \(\pm\)0.01 \\ \end{tabular} \end{table} Table 4: Results of a linear fit to the relation between log(variance) and log(\(\rm L_{Bol}\)), for the four timescales shown in Fig. 4. The parameter \(a\) represents the slope of the relation and parameter \(b\) the log(variance) at log(\(\rm L_{Bol}\)) \(=45.8\), i.e., log(variance)\(=a\times(\rm log(\rm L_{Bol})-45.8)+\)\(b\). Figure 4: Variance on four different variability timescales as a function of bolometric luminosity (Lbol). The markers show the median variance for different bins in Lbol for the whole range covered by our sample. The solid lines represent best-fitting linear relations to log(var) vs log(Lbol). All the error bars on the median variance of the binned data (crosses) were calculated as the root-mean-squared scatter of the medians obtained by bootstrapping using 1000 re-samples per bin in Lbol. \begin{table} \begin{tabular}{c c c c c c} log(\(\rm R_{Edd}\)) & timescale[d] & \(a\) & \(b\) \\ \hline \hline -1.7 – -1.3 & 300 & -0.19 & \(\pm\)0.03 & -2.68 & \(\pm\)0.01 \\ & 150 & -0.30 & \(\pm\)0.08 & -2.86 & \(\pm\)0.07 \\ & 75 & -0.53 & \(\pm\)0.09 & -3.00 & \(\pm\)0.03 \\ & 30 & -0.93 & \(\pm\)0.05 & -3.34 & \(\pm\)0.02 \\ \hline -1.3 – -1 & 300 & -0.13 & \(\pm\)0.05 & -2.85 & \(\pm\)0.01 \\ & 150 & -0.35 & \(\pm\)0.05 & -3.06 & \(\pm\)0.01 \\ & 75 & -0.66 & \(\pm\)0.02 & -3.24 & \(\pm\)0.01 \\ & 30 & -1.03 & \(\pm\)0.04 & -3.66 & \(\pm\)0.02 \\ \hline -1 – -0.7 & 300 & -0.20 & \(\pm\)0.03 & -3.04 & \(\pm\)0.01 \\ & 150 & -0.38 & \(\pm\)0.04 & -3.24 & \(\pm\)0.01 \\ & 75 & -0.65 & \(\pm\)0.01 & -3.46 & \(\pm\)0.01 \\ & 30 & -1.01 & \(\pm\)0.09 & -3.96 & \(\pm\)0.03 \\ \hline -0.7 – -0.3 & 300 & -0.11 & \(\pm\)0.05 & -3.14 & \(\pm\)0.02 \\ & 150 & -0.35 & \(\pm\)0.02 & -3.42 & \(\pm\)0.01 \\ & 75 & -0.65 & \(\pm\)0.04 & -3.46 & \(\pm\)0.01 \\ & 30 & -0.94 & \(\pm\)0.06 & -4.24 & \(\pm\)0.03 \\ \hline -0.3 – 0 & 300 & 0.00 & \(\pm\)0.17 & -3.30 & \(\pm\)0.11 \\ & 150 & 0.20 & \(\pm\)0.11 & -3.60 & \(\pm\)0.06 \\ & 75 & -0.61 & \(\pm\)0.05 & -3.71 & \(\pm\)0.01 \\ & 30 & -1.04 & \(\pm\)0.17 & -4.53 & \(\pm\)0.08 \\ \end{tabular} \end{table} Table 2: Results of a linear fit to the relation between log(variance) and log(M), for the four timescales shown in in Fig. 2 and the five ranges in \(\rm R_{Edd}\) with four or more bins in black hole mass. The parameter \(a\) represents the slope of the relation and parameter \(b\) the log(variance) at log(\(\rm M/M_{\odot}\)) \(=8.5\), i.e., log(variance)\(=a\times\rm log(\rm M/M_{8.5})+b\). For each bin in \(\rm R_{Edd}\), the slope \(a\) becomes more negative when the variance is measured for shorter timescale fluctuations. 
\begin{table} \begin{tabular}{c c c c c} log(\(\rm M\)) & timescale[d] & \(a\) & \(b\) \\ \hline \hline 7.8 – 8.2 & 300 & -0.47 & \(\pm\)0.04 & -2.87 & \(\pm\)0.02 \\ & 150 & -0.60 & \(\pm\)0.08 & -2.97 & \(\pm\)0.03 \\ & 75 & -0.88 & \(\pm\)0.08 & -3.02 & \(\pm\)0.03 \\ & 30 & -0.97 & \(\pm\)0.13 & -3.34 & \(\pm\)0.05 \\ \hline 8.2 – 8.5 & 300 & -0.53 & \(\pm\)0.04 & -2.89 & \(\pm\)0.01 \\ & 150 & -0.62 & \(\pm\)0.03 & -3.08 & \(\pm\)0.01 \\ & 75 & -0.74 & \(\pm\)0.06 & -3.25 & \(\pm\)0.02 \\ & 30 & -0.95 & \(\pm\)0.14 & -3.64 & \(\pm\)0.04 \\ \hline 8.5 – 8.8 & 300 & -0.61 & \(\pm\)0.06 & -2.98 & \(\pm\)0.02 \\ & 150 & -0.58 & \(\pm\)0.03 & -3.21 & \(\pm\)0.01 \\ & 75 & -0.82 & \(\pm\)0.05 & -3.46 & \(\pm\)0.01 \\ & 30 & -0.96 & \(\pm\)0.08 & -3.97 & \(\pm\)0.02 \\ \hline 8.8 – 9.2 & 300 & -0.55 & \(\pm\)0.08 & -3.05 & \(\pm\)0.03 \\ & 150 & -0.68 & \(\pm\)0.03 & -3.35 & \(\pm\)0.01 \\ & 75 & -0.79 & \(\pm\)0.04 & -3.67 & \(\pm\)0.02 \\ & 30 & -0.95 & \(\pm\)0.04 & -4.25 & \(\pm\)0.01 \\ \hline \end{tabular} \end{table} Table 3: Results of a linear fit to the relation between log(variance) and log(\(\rm R_{Edd}\)), for the four timescales shown in Fig. 3 and the four ranges in M with four or more bins in \(\rm R_{Edd}\). The parameter \(a\) represents the slope of the relation and parameter \(b\) the log(variance) at log(\(\rm R_{Edd}\))\(=-1\), i.e., log(variance)\(=a\times\rm log(\rm R_{Edd}/0.1)+\)\(b\). For each bin in M, the slope \(a\) becomes more negative when the variance is measured for shorter timescale fluctuations. just with mass but with accretion rate as well. This possibility is exemplified in the right panel in Fig. 5. As seen above, the optical variance in quasars with BH masses in the range \(7.5<\log(\mathrm{M})<9.5\) and accretion rates in the range \(-2<\log(\mathrm{R_{Edd}})<0\) anti-correlates with both mass and accretion rate, at least for variability timescales of 150 days or shorter. On longer timescales of 300 days, the dependence on mass becomes almost negligible, and the dependence on accretion rate is reduced but still significant. Since the quasars' luminosity is the product of M and \(\mathrm{R_{Edd}}\), the anti-correlation of the variance with both factors results in a strong anti-correlation of variance with luminosity. The anti-correlation is especially strong for short timescales of 150 days and below where the variance has a strong anti-correlation with both mass and accretion rate. Fig. 4 shows the filtered power on the four timescales studied, as a function of the bolometric luminosity, \(L_{\mathrm{bol}}\), reported by Rakshit et al. (2020). This well known anti-correlation can have different slopes depending, for example, on the length of the light curves used, which affects the variability timescales probed when calculating the total variance. In our case, for long term fluctuations of 300 days, the relation has a best-fitting exponential slope of \(-0.33\pm 0.03\), steepening for shorter timescale fluctuations down to \(-0.53\pm 0.02\) for variations on 150 days timescales, \(-0.76\pm 0.01\) for variations on 75 days timescales, and \(-1.01\pm 0.02\) for variations on 30 days timescales. ## 6 Conclusion We have shown that at a given timescale, the amplitude of optical variability in quasars decreases with the black hole mass. For the first time, we show that this trend is stronger on shorter timescales (30 - 150 days in the quasar rest frame) and weaker on scales \(\sim\)300 days. 
These results reconcile previously reported weak or null dependence of variance on the black hole mass detected at long timescales with the detection of a mass-dependent break timescale in the optical light curves (Burke et al., 2021). Namely, at timescales longer than the break, the correlation is weak or absent, while at shorter timescales there is a negative correlation. Such behavior is expected in this model (see Fig. 5, left) and is indeed found in our work (Fig. 2). For the whole sample, the correlation between the variance and mass is weak, with a shallow trend and large scatter (Fig. 1 and Table 1). As we show, this scatter is largely due to the additional dependence of variance on the accretion rate: for sub-samples covering a narrower range of accretion rates, the negative correlation between variance and mass is stronger and the relation is tighter. The positive correlation between variance and mass that was seen on long timescales in a flux-limited sample of quasars is spurious. It is caused by the anti-correlation of the accretion rate and mass in the sample and by the well-known anti-correlation between the accretion rate and variance. Our analysis shows that to properly determine a link between black hole physical properties and variability, it is necessary to account not only for such properties in a well-controlled fashion but also to quantify the variability taking into account the shape of the power spectrum and _its_ dependence on properties such as M and \(\mathrm{R_{Edd}}\). We have shown that this approach can explain many previous conflicting results, and yields important clues about the variability origin, as shown by the discovered dependence of the power spectrum slope at high frequencies on \(\mathrm{R_{Edd}}\). It is noteworthy that the variability of quasars in the optical band does not follow the pattern observed in the X-ray domain, where the break timescale shows a linear relation with \(\mathrm{M}\) and an inverse relation with \(\mathrm{R_{Edd}}\)(McHardy et al., 2006; Gonzalez-Martin and Vaughan, 2012). In the optical bands, we see a positive correlation between the break timescale and Figure 5: A possible model for the dependence of the power spectrum on mass and accretion rate. Mass dependence (Left): Schematic representation of broken power-law model power spectra where the break frequency scales inversely with mass. The three shaded regions mark three timescales where the filtered variance is estimated, producing the variance levels marked by the stars. This setup produces no mass dependence of the variance at low frequencies, a strong dependence on mass at high frequencies and a weaker dependence for intermediate frequencies. Accretion rate dependence (Right): Power spectral models schematically representing different accretion rate levels for a fixed black hole mass, where the variance anti-correlates with the accretion rate. If only the normalization of the power spectra changed, the vertical distance between the low and high accretion rate power spectra would be the same in all the shaded regions (solid symbols). Instead, we observe a _larger_ dependence of variance on accretion rate at higher frequencies, which requires an accretion-rate dependent break timescale or high-frequency slope (symbols on the semi-transparent spectrum). \(\rm R_{Edd}\), which points to a different mechanism as the driver behind these variations.
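To make the schematic in Fig. 5 (left) concrete, the toy calculation below integrates a bending power-law power spectrum over two frequency bands to obtain band-limited variances and shows how moving the break (as a stand-in for changing the black hole mass) affects the short-timescale band more strongly than the long-timescale one. This is only an illustrative sketch: the slopes, normalization, break timescales, and band edges are placeholders, and it does not reproduce the Mexican-hat filtering used in the actual analysis.

```python
import numpy as np

def bending_power_law_psd(f, f_break, amp=1.0, slope_lo=1.0, slope_hi=2.0):
    # Toy PSD: ~ f^-slope_lo well below the break, bending to f^-slope_hi above it.
    return amp * f ** (-slope_lo) / (1.0 + (f / f_break) ** (slope_hi - slope_lo))

def band_variance(t_short, t_long, f_break, n=2000):
    # Variance contributed by fluctuations on timescales between t_short and t_long (days).
    f = np.logspace(np.log10(1.0 / t_long), np.log10(1.0 / t_short), n)
    return np.trapz(bending_power_law_psd(f, f_break), f)

# Placeholder break timescales standing in for a lower- and a higher-mass quasar.
for label, t_break in [("shorter break (lower mass)", 50.0), ("longer break (higher mass)", 500.0)]:
    v300 = band_variance(200.0, 400.0, 1.0 / t_break)  # ~300-day fluctuations
    v30 = band_variance(20.0, 40.0, 1.0 / t_break)     # ~30-day fluctuations
    print(f"{label}: var(~300 d) = {v300:.3f}, var(~30 d) = {v30:.3f}")
```

Shifting the break to longer timescales suppresses the ~30-day band variance more strongly than the ~300-day band variance, which is the qualitative behavior sketched in the left panel of Fig. 5.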
Results of continuum reverberation mapping also show that, besides the intrinsic variability, the optical band includes reprocessing of high-energy, highly variable emission, which introduces a variable signature at short timescales. Finally, as suggested by theoretical modeling (e.g. Kubota and Done, 2018), \(\rm R_{Edd}\) might control the level of X-ray reprocessing into the optical band and could explain our observed dependence of power spectrum shape on the accretion rate. **Acknowledgements:** The authors acknowledge support from the National Agency for Research and Development (ANID) grants: Millennium Science Initiative Program ICN12_12009 (PSS,LHG), and NCN19_058 (PA, PL); FONDECYT Regular 1201748 (PL); FONDECYT Postdoctorado 3200250 (PSS); Programa de Becas/Doctorado Nacional 21200718 (EL) and xxxx(PP), and from the Max-Planck Society through a Partner Group grant (PA). **Data Availability Statement:** All data can be downloaded from the Zwicky Transient Facility (ZTF) Data Release 14 (DR14)(Masci et al., 2019). **Code Availability Statement:** The Mexican Hat filter code (Arevalo et al., 2012) can be requested from the corresponding author.
2305.14101
Impacts of symmetry energy slope on the oscillation frequencies of neutron stars with short-range correlation and admixed dark matter
Oscillation modes of compact stars, in general, can serve as a fingerprint in determining the equation of state (EOS) of dense matter. In this study, we examine the impact of symmetry energy slope ($L$) on the oscillation frequencies of neutron stars (NSs) with nucleon-nucleon short range correlation (SRC) and admixed dark matter (DM) for the first time within the relativistic mean-field theory. By adjusting the $L$, we revise the EOS and coupling parameters in light of the SRC and DM effects, and construct the new sets. The results reveal that NSs containing SRC and DM inside are more likely to satisfy the observational constraints, and we find that smaller $L$ exhibits larger fundamental non-radial and radial frequencies, and that the effect on Large Separation (LG) is also mainly concentrated in the low-mass region. Moreover, we update the linear relationship between the non-radial frequency and mean density, and we further give empirical relations between non-radial and radial frequencies and tidal deformability at different $L$ for 1.4$M_{\odot}$ and 2$M_{\odot}$. These findings will enable us to more effectively confine the NS EOSs, in turn, also provide a strategy to place constraints on the $L$.
Bin Hong, ZhongZhou Ren, Chen Wu, XueLing Mu
2023-05-23T14:24:09Z
http://arxiv.org/abs/2305.14101v1
Impacts of symmetry energy slope on the oscillation frequencies of neutron stars with short-range correlation and admixed dark matter ###### Abstract Oscillation modes of compact stars, in general, can serve as a fingerprint in determining the equation of state (EOS) of dense matter. In this study, we examine the impact of symmetry energy slope (\(L\)) on the oscillation frequencies of neutron stars (NSs) with nucleon-nucleon short range correlation (SRC) and admixed dark matter (DM) for the first time within the relativistic mean-field theory. By adjusting the \(L\), we revise the EOS and coupling parameters in light of the SRC and DM effects, and construct the new sets. The results reveal that NSs containing SRC and DM inside are more likely to satisfy the observational constraints, and we find that smaller \(L\) exhibits larger fundamental non-radial and radial frequencies, and that the effect on Large Separation (LG) is also mainly concentrated in the low-mass region. Moreover, we update the linear relationship between the non-radial frequency and mean density, and we further give empirical relations between non-radial and radial frequencies and tidal deformability at different \(L\) for \(1.4M_{\odot}\) and \(2M_{\odot}\). These findings will enable us to more effectively confine the NS EOSs, in turn, also provide a strategy to place constraints on the \(L\). Neutron stars; Nuclear astrophysics; Dark matter ## I Introduction As a class of very dense objects in astronomy, the description of matter at high density inside NSs has attracted a lot of attention from nuclear physics [1], particle physics, and astrophysics [2; 3]. However, given the non-perturbative nature of nuclear forces, we cannot derive the EOS directly from quantum chromodynamics (QCD). As a result, it is more common to construct NS EOSs from the microscopic first principles, such as \(\chi EFT\)[4; 5; 6; 7; 8], or from the self-consistent phenomenological models, typical approaches like the Skyrme-Hartree-Fock [9; 10; 11] and Gogny-Hartree-Fock [12; 13], or from parameterization models [14; 3]. These models are also strongly constrained by multi-messenger observations, including the recent series of discoveries of twice-solar-mass (\(2M_{\odot}\)) NSs [15; 16; 17; 18; 19; 20], the tidal deformability extracted from the binary NS merger event GW170817 [21; 22], the simultaneous mass and radius measurements of the isolated PSR J0030+0451 by NICER (Neutron Star Interior Composition Explorer) [23; 24] and the gravitational wave event GW190814 from the coalescence of a stellar-mass black hole and a mysterious compact star [25]. To some extent, these observations undoubtedly disprove certain EOSs, making it imperative to establish plausible ones that can describe the low density properties while meeting the observational constraints. Nuclear microscopic scale nucleon-nucleon SRC [26; 27; 28] and cosmic macroscopic size DM have proven to be relatively challenging issues. Several theoretical works have demonstrated that, in contrast to the free Fermi gas, the SRC originating from the strongly repulsive core of nuclear force and its tensor part leads to an appreciable depletion below the Fermi surface, with some nucleons occupying regions above the Fermi surface and giving rise to a high-momentum tail, as has been verified by experiments like \((e,e^{\prime}p)\)[29] and \((e,e^{\prime}NN)\)[30; 31]. 
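For orientation, a commonly used minimal parameterization of the single-nucleon momentum distribution with such a high-momentum tail (a generic form from the SRC literature, not necessarily the specific one adopted later in this paper) reads \[n_{J}(k)\simeq\Delta_{J}\,\theta(k_{F}^{J}-k)+\frac{C_{J}}{k^{4}}\,\theta(k-k_{F}^{J})\,\theta(\phi_{J}k_{F}^{J}-k),\qquad J=n,p,\] where the depletion factor \(\Delta_{J}<1\) quantifies the reduced occupation below the Fermi surface, the contact-like \(1/k^{4}\) tail extends from \(k_{F}^{J}\) up to a cutoff \(\phi_{J}k_{F}^{J}\), and \(C_{J}\) is fixed by normalizing \(n_{J}(k)\) to the corresponding nucleon density \(\rho_{J}\).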
Furthermore, SRC effects also play an important role in nuclear physics [32; 33; 34; 35; 36; 37]: they allow a better understanding of the EMC effect [38; 39], help explain neutrino oscillation measurements [40; 41], and shape the density-dependent behavior of the nuclear symmetry energy [35]. When SRC is included in the study of NSs, the tidal deformability [42; 43], mass-radius relations [44; 45; 46] and cooling efficiency are all affected [47]. For DM, many observations such as gravitational lensing, galaxy rotation curves, velocity dispersions, galaxy clusters and the cosmic microwave background point to its existence (for a review, see [48; 49; 50]). Although there are many candidates for DM, its origin and properties remain a mystery. Moreover, DM does not interact directly with normal matter, but it has a more pronounced gravitational effect on dense objects like NSs [51] and plays an important role in determining the NS mass-radius relation [52; 46] and tidal deformability [53; 54]. Despite this, there is still little research that incorporates nucleon-nucleon SRC into DM-admixed NSs and examines their combined effects. The symmetry energy and its slope play a crucial role in determining the equation of state of pure neutron matter: they govern the neutron skin thickness of neutron-rich nuclei and the dynamics of heavy-ion collisions, and they also affect the structure and properties of NSs, helping us understand isospin asymmetry physics.
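For reference, the slope parameter \(L\) discussed throughout this work is defined via the standard expansion of the symmetry energy around the nuclear saturation density \(\rho_{0}\), \[E_{\mathrm{sym}}(\rho)\simeq E_{\mathrm{sym}}(\rho_{0})+L\,\frac{\rho-\rho_{0}}{3\rho_{0}}+\mathcal{O}\!\left[\left(\frac{\rho-\rho_{0}}{3\rho_{0}}\right)^{2}\right],\qquad L=3\rho_{0}\left.\frac{\partial E_{\mathrm{sym}}(\rho)}{\partial\rho}\right|_{\rho=\rho_{0}},\] so a larger \(L\) corresponds to a stiffer density dependence of the symmetry energy near saturation; this is why adjusting \(L\) directly modifies the neutron-star EOS and, through it, the oscillation frequencies studied in this paper.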
2303.06853
Representation Learning for Stack Overflow Posts: How Far are We?
The tremendous success of Stack Overflow has accumulated an extensive corpus of software engineering knowledge, thus motivating researchers to propose various solutions for analyzing its content.The performance of such solutions hinges significantly on the selection of representation model for Stack Overflow posts. As the volume of literature on Stack Overflow continues to burgeon, it highlights the need for a powerful Stack Overflow post representation model and drives researchers' interest in developing specialized representation models that can adeptly capture the intricacies of Stack Overflow posts. The state-of-the-art (SOTA) Stack Overflow post representation models are Post2Vec and BERTOverflow, which are built upon trendy neural networks such as convolutional neural network (CNN) and Transformer architecture (e.g., BERT). Despite their promising results, these representation methods have not been evaluated in the same experimental setting. To fill the research gap, we first empirically compare the performance of the representation models designed specifically for Stack Overflow posts (Post2Vec and BERTOverflow) in a wide range of related tasks, i.e., tag recommendation, relatedness prediction, and API recommendation. To find more suitable representation models for the posts, we further explore a diverse set of BERT-based models, including (1) general domain language models (RoBERTa and Longformer) and (2) language models built with software engineering-related textual artifacts (CodeBERT, GraphCodeBERT, and seBERT). However, it also illustrates the ``No Silver Bullet'' concept, as none of the models consistently wins against all the others. Inspired by the findings, we propose SOBERT, which employs a simple-yet-effective strategy to improve the best-performing model by continuing the pre-training phase with the textual artifact from Stack Overflow.
Junda He, Zhou Xin, Bowen Xu, Ting Zhang, Kisub Kim, Zhou Yang, Ferdian Thung, Ivana Irsan, David Lo
2023-03-13T04:49:06Z
http://arxiv.org/abs/2303.06853v2
# Representation Learning for Stack Overflow Posts: How Far are We? ###### Abstract The tremendous success of Stack Overflow has accumulated an extensive corpus of software engineering knowledge, thus motivating researchers to propose various solutions for analyzing its content. The performance of such solutions hinges significantly on the selection of representation model for Stack Overflow posts. As the volume of literature on Stack Overflow continues to burgeon, it highlights the need for a powerful Stack Overflow post representation model and drives researchers' interest in developing specialized representation models that can adeptly capture the intricacies of Stack Overflow posts. The state-of-the-art (SOTA) Stack Overflow post representation models are Post2Vec and BERTOverflow, which are built upon trendy neural networks such as convolutional neural network (CNN) and Transformer architecture (e.g., BERT). Despite their promising results, these representation methods have not been evaluated in the same experimental setting. To fill the research gap, we first empirically compare the performance of the representation models designed specifically for Stack Overflow posts (Post2Vec and BERTOverflow) in a wide range of related tasks, i.e., tag recommendation, relatedness prediction, and API recommendation. The results show that the Post2Vec cannot further improve each state-of-the-art technique of downstream tasks, and BERTOverflow shows surprisingly poor effectiveness. To find more suitable representation models for the posts, we further explore a diverse set of BERT-based models, including (1) general domain language models (RoBERTa and Longformer) and (2) language models built with software engineering-related textual artifacts (CodeBERT, GraphCodeBERT, and seBERT). This exploration shows that CodeBERT and RoBERTa are generally the most suitable for representing Stack Overflow posts. However, it also illustrates the "No Silver Bullet" concept, as none of the models consistently wins against all the others. Inspired by the findings, we propose SOBERT, which employs a simple-yet-effective strategy to improve the best-performing model by continuing the pre-training phase with the textual artifact from Stack Overflow. The overall experimental results demonstrate that SOBERT can consistently outperform the considered models and increase the state-of-the-art performance by a significant margin for all the downstream tasks. 
Stack Overflow, Transformers, Pre-trained Models
seBERT was pre-trained on software engineering textual artifacts from GitHub commit messages and issues, Jira issues, and Stack Overflow posts. CodeBERT, GraphCodeBERT, and seBERT are considered to be better at capturing the semantics of technical jargon of the SE domain. Finally, we also include models from the general domain as they are usually trained with a more diverse amount of data than domain-specific models. RoBERTa is one of the most popular BERT-based language models. It is trained with larger batch sizes and learning rates compared with the original BERT. Longformer is also considered as it overcomes the input length limit of conventional BERT-based language models. While BERT-based language models can accept an input of at most 512 tokens, more than 50% of Stack Overflow posts surpass this limit [11]. In contrast, Longformer can accept a maximum of 4,096 tokens as its input. We evaluate the performance of the aforementioned representation models on multiple Stack Overflow-related downstream tasks (i.e., tag recommendation, API recommendation, and relatedness prediction). Furthermore, we build SOBERT, a stronger BERT-based language model for modeling Stack Overflow posts. Our experimental results reveal several interesting findings: 1. _Existing Stack Overflow post representation techniques fail to improve the SOTA performance of considered tasks._ Xu et al.
demonstrated that the addition of the feature vectors generated by Post2Vec is beneficial for improving the post representation for traditional machine learning techniques. However, we discover that appending the feature vectors from Post2Vec [43] does not bring a beneficial effect to the considered deep neural networks. Furthermore, we reveal that the embedding generated by BERTOverflow achieves only reasonable performance in the API recommendation task and gives surprisingly poor performance in the tag recommendation task. 2. _Among all the considered models, none of them can always perform the best._ According to our experiment results, although the newly considered models can outperform the SOTA approaches, none of them can always perform the best. This motivates us to build a more effective model. Overall, CodeBERT produces the most promising representation among the considered models, and Longformer fails to beat conventional BERT-based language models, although it is expected to be capable of accepting a longer input. 3. _Continued pre-training on Stack Overflow textual artifacts yields a consistently better model._ We propose SOBERT by further pre-training with Stack Overflow data. The overall results show that SOBERT consistently boosts the performance in all three considered tasks, implying a better representation. Overall, we summarize the contributions of our empirical study as follows: 1. We comprehensively evaluate the effectiveness of seven representation models for Stack Overflow posts in three downstream tasks. 2. We propose SOBERT by pre-training based on 20 million posts from Stack Overflow and show that SOBERT consistently outperforms other representation models in multiple downstream tasks. 3. We derive several insightful lessons from the experimental results for the software engineering community. The rest of the paper is organized as follows. Section 2 categorizes representation learning models into three groups and briefly describes them. We formulate the downstream tasks (i.e., tag recommendation, relatedness prediction, and API recommendation) and their corresponding state-of-the-art methods in Section 3. Section 4 introduces our research questions and the experiment settings. In Section 5, we answer the research questions and report the experiment results. Section 6 further analyzes the results and elaborates on the insights with evidence. Section 7 describes related works, and Section 8 summarizes this study. ## 2. Representation Learning Models In this section, we summarize the considered representation models for this paper. We explore a wide range of techniques across the spectrum of representing Stack Overflow posts, including two Transformer-based Pre-trained Models (PTM) from the general domain (RoBERTa (Krizhevsky et al., 2014) and Longformer (Long et al., 2015)), three SE-domain specific PTMs (CodeBERT (Chen et al., 2016), GraphCodeBERT (Chen et al., 2016), and seBERT (Wang et al., 2017)) and two Stack Overflow-specific post representation models (BERTOverflow (Wang et al., 2017) and Post2Vec (Wang et al., 2017)).
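All of these BERT-based models are used in the same basic way: a post is tokenized, passed through the encoder, and reduced to a fixed-length vector. The sketch below illustrates this with the HuggingFace `transformers` library; the checkpoint name, the first-token pooling, and the truncation length are illustrative assumptions rather than the exact pipelines used in our experiments.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint; any of the considered BERT-based models could be swapped in.
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

post = "How do I convert a List<String> to a comma-separated String in Java?"
inputs = tokenizer(post, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Use the hidden state of the first token as a 768-dimensional post embedding.
post_embedding = outputs.last_hidden_state[:, 0, :]
print(post_embedding.shape)  # torch.Size([1, 768])
```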
### BERT-based Language Models BERT (**B**idirectional **E**ncoder **R**epresentations from **T**ransformers) (Chen et al., 2016) based language models have revolutionized the representation learning of natural language (Krizhevsky et al., 2014; Long et al., 2015; Long et al., 2015; Long et al., 2016; Wang et al., 2017) by achieving phenomenal performance in a wide range of natural language processing (NLP) tasks, such as sentiment analysis (Wang et al., 2017), POS tagging (Wang et al., 2017), and question answering (Liu et al., 2018). BERT-based language models inherit the Transformer (Wang et al., 2017) architecture, whose self-attention mechanism can learn a bidirectional contextual representation of text. These models usually perform the _Masked Language Modeling_ (MLM) task in the pre-training phase: the input is first corrupted by randomly masking 15% of the tokens, and the model is then trained to reconstruct the original data by predicting the masked words. BERT-based models are extensively pre-trained on large-scale datasets and learn meaningful representations that are reusable across various tasks, thus eliminating the process of training language models from scratch and saving a drastic amount of time and resources. ### Existing Models for Stack Overflow Posts **BERTOverflow**(Wang et al., 2017) keeps the original BERT\({}_{base}\) architecture, and it leverages 152 million sentences and 2.3 billion tokens from Stack Overflow to pre-train Stack Overflow-specific word embeddings. The authors have leveraged the embedding generated by BERTOverflow to implement a software-related named entity recognizer (SoftNER). The performance of SoftNER is evaluated on the named entity recognition (NER) task for the software engineering domain, focusing on identifying code tokens or programming-related named entities that appear within SQA sites like Stack Overflow. The results show that BERTOverflow outperforms all other models in the proposed task. **Post2Vec**(Wang et al., 2017) is the latest approach proposed specifically for Stack Overflow post representation learning. Unlike the existing models, Post2Vec is designed with a _triplet_ architecture. Post2Vec leverages CNNs as feature extractors to encode the three components of a post (i.e., title, text, and code snippets) separately. The corresponding three output feature vectors are then fed to a feature fusion layer to produce the representation of the post. In the end, Post2Vec uses the tag information of the post, which is considered to capture the post's general semantic meaning, to supervise the representation learning process. The representation learned by Post2Vec is then leveraged to enhance the feature vectors in Stack Overflow-related downstream tasks (e.g., relatedness prediction and API recommendation). For each downstream task, in (Wang et al., 2017), the vector representation learned by Post2Vec is combined with the feature vector produced by the corresponding state-of-the-art approach to form a new feature vector. Finally, the new feature vector is used to boost the performance of the corresponding model for the task. Following the experiment settings of Xu et al., we use Post2Vec as a complementary feature vector to the state-of-the-art approach in this paper. We concatenate the post representation generated by Post2Vec to the original feature vector of the state-of-the-art approach. We then leverage the concatenated feature vector in further training.
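The concatenation step just described can be summarized with a minimal sketch (not the authors' implementation): the task model's own feature vector is fused with a pre-computed Post2Vec vector before the final prediction layer. The dimensionalities and the linear head below are placeholder assumptions.

```python
import torch
import torch.nn as nn

class ConcatClassifier(nn.Module):
    """Fuses a task model's features with a complementary Post2Vec vector."""

    def __init__(self, task_dim: int, post2vec_dim: int, num_classes: int):
        super().__init__()
        self.head = nn.Linear(task_dim + post2vec_dim, num_classes)

    def forward(self, task_features: torch.Tensor, post2vec_features: torch.Tensor):
        fused = torch.cat([task_features, post2vec_features], dim=-1)  # concatenate along the feature dimension
        return self.head(fused)

# Hypothetical dimensions: 768-d task features, 300-d Post2Vec features, 4 classes.
clf = ConcatClassifier(task_dim=768, post2vec_dim=300, num_classes=4)
logits = clf(torch.randn(8, 768), torch.randn(8, 300))  # a batch of 8 posts
print(logits.shape)  # torch.Size([8, 4])
```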
### Models from General Domain **RoBERTa**(Roh et al., 2017) originates from a replication study of BERT's pre-training objectives and the impact of several key hyper-parameters (He et al., 2019). Based on the findings, the authors proposed an improved model, namely RoBERTa. In comparison with BERT, RoBERTa has made several modifications to the pre-training stage, including: (1) training with larger batch size, more data, and longer training time; (2) abandoning the next sentence prediction (NSP) task of BERT, showing that the removal of NSP slightly improves the model's efficiency; (3) training with longer sequences; (4) masking the training data dynamically rather than statically. Pre-trained models like BERT (He et al., 2019) and RoBERTa (Roh et al., 2017) only accept a maximum input of 512 tokens. However, according to the statistics conducted by He et al. (2019), more than half of the Stack Overflow questions contain more tokens than the 512-token limit. A simple workaround is to truncate the input sequence to the acceptable length, but this increases the risk of losing vital information. The self-attention mechanism suffers from the \(O(n^{2})\) quadratic computational complexity problem, which restricts the ability of Transformer-based models to model long sequences. **Longformer**(He et al., 2019) aims to alleviate the limitation in processing long sequences. It leverages a combination of sliding window attention and a global attention mechanism such that the computational memory consumption scales linearly as the sequence becomes longer. In contrast to models like RoBERTa and CodeBERT, which could only accept a maximum of 512 tokens as input, Longformer supports sequences of length up to 4,096. Similar to CNN (He et al., 2019), Longformer lets each input token attend only to surrounding neighbors that are within a fixed window size. Denoting the window size as \(w\), each token only attends to \(\frac{1}{2}w\) tokens on each side, thus decreasing the computation complexity to \(O(n\times w)\). However, the sliding window may compromise the performance as it cannot capture the whole context. To compensate for this side-effect, global tokens are selected. Such tokens are implemented with global attention: they attend to all other tokens, and all other tokens also attend to them. Since previous work showed that more than 50% of Stack Overflow posts exceed the 512-token size limit of conventional BERT-based models, we are motivated to explore whether Longformer is better at representing Stack Overflow posts. ### Models from SE domain **CodeBERT**(He et al., 2019). The strong versatility and capability of Transformer-based representational models drive researchers' interest in adopting them in the SE domain. CodeBERT (He et al., 2019) is an SE knowledge-enriched bi-modal pre-trained model, which is capable of modeling both natural languages (NL) and programming languages (PL). CodeBERT inherits the architecture of BERT (He et al., 2019), and it continues pre-training based on the checkpoint of RoBERTa (Roh et al., 2017) with the NL-PL data pairs obtained from the CodeSearchNet dataset (Kang et al., 2019). It has two pre-training objectives: Masked Language Modeling (MLM) and Replaced Token Detection (RTD).
The eventual loss function for CodeBERT at the pre-training stage is the combination of both MLM and RTD objectives, where \(\theta\) denotes the model parameters: \[\min_{\theta}(\mathcal{L}_{RTD}(\theta)+\mathcal{L}_{MLM}(\theta)) \tag{1}\] The CodeBERT model has shown great effectiveness in a diverse range of SE domain-specific activities, for example, code search (He et al., 2019), traceability prediction (Roh et al., 2017), and code translation (He et al., 2019). **GraphCodeBERT**(He et al., 2019) incorporates a hybrid representation in source code modeling. Apart from addressing the pre-training process over NL and PL, GraphCodeBERT utilizes the data flow graph of source code as additional inputs and considers two structure-aware pre-training tasks (i.e., Edge Prediction and Node Alignment) aside from the MLM prediction task. GraphCodeBERT is evaluated in code search (Bordes and Senn, 2017), clone detection (Zhu et al., 2017), code translation (Bordes and Senn, 2017), and code refinement (Zhu et al., 2017),respectively. It outperforms CodeBERT and all the other baselines, including RoBERTa (code version) (Bordes and Senn, 2017), Transformer (Zhu et al., 2017), LSTM (He et al., 2017), under their experimental setting. **seBERT**(Zhu et al., 2017) aims to advance the previous PTMs in the SE context with a larger model architecture and more diverse pre-training data. The authors pre-trained seBERT with the BERT\({}_{large}\) architecture, i.e., with 24 layers, a hidden layer size of 1024, and 16 self-attention heads, with a total of 340 million parameters. seBERT is pre-trained with more than 119GB of data from four data sources, i.e., Stack Overflow posts, Github issues, Jira issues, and Github commit messages. The model's effectiveness is verified in three classification tasks, i.e., issue type prediction, commit intent prediction, and sentiment mining. Results showed that seBERT is significantly better than BERToverflow in these tasks. ## 3. Downstream Tasks In this section, we formulate the target problems that are used to measure the effectiveness of the representation models and then describe the corresponding state-of-the-art solution. We select multiple Stack Overflow-related downstream tasks, which have been popular research topics for Stack Overflow posts. To be more specific, we consider: _Tag Recommendation_(Han et al., 2015; Zhu et al., 2017), _API Recommendation_(Bordes and Senn, 2017; Zhu et al., 2017) and _Relatedness Prediction_(Zhu et al., 2017; Zhu et al., 2017), covering a multi-label classification problem, a multi-class classification problem, and a ranking problem. All selected tasks operate on the abstraction of a post, which could be benefited from a high-quality Stack Overflow post representation. ### Tag Recommendation The user-annotated tags of a Stack Overflow post serve as helpful metadata and have a critical role in organizing the contents of Stack Overflow posts across different topics. Suitable tags precisely summarize the message of a post, while redundant tags and synonym tags make it more difficult in maintaining the content of the site. A tag recommendation system could effectively simplify the tagging process and minimize the effect of manual errors, therefore, avoiding problems like tag synonyms and tag redundancy. #### 3.1.1. Task Formulation We formulate the tag recommendation task as a _multi-label classification problem_. 
Let \(\mathcal{X}\) be a corpus of Stack Overflow posts and \(\mathcal{Y}\) the total collection of tags. We represent each post as \(x_{i}\), where \(1\leq i\leq|\mathcal{X}|,i\in\mathbb{N}\), and the tags of each post as \(y_{i}\subset\mathcal{Y}\). The goal is to recommend the most relevant set of tags \(y_{i}\) to \(x_{i}\). #### 3.1.2. State-of-the-art technique PTM4Tag (Han et al., 2015) solves the tag recommendation problem with three pre-trained models, which are responsible for modeling the title, description, and code snippet of a post independently. ### API Recommendation Questions related to Application Programming Interfaces (APIs) are one of the most viewed topics on Stack Overflow (Han et al., 2015). Stack Overflow contains an enormous amount of discussion about API usage. Developers are more inclined to search for relevant Stack Overflow posts and pick out the APIs that seem useful in the discussions (Zhu et al., 2017) than to check API documentation, which makes Stack Overflow the primary source for building a dataset for the API recommendation task. The modern software development process heavily relies on third-party APIs, which has led to research on automated API recommendation approaches intended to simplify API search (Zhu et al., 2017). #### 3.2.1. Task Formulation We follow the same task definition as the previous literature (He et al., 2016; He et al., 2017; He et al., 2018), with the goal of recommending relevant APIs that answer the question or implement the function for a given NL query. #### 3.2.2. State-of-the-art technique Wei et al. (Wei et al., 2018) proposed CLEAR, an automated approach that recommends APIs by embedding queries and Stack Overflow posts with a BERT-based PTM (a distilled version of RoBERTa2). To be more specific, given a natural language query, CLEAR initially picks a sub-set of candidate Stack Overflow posts based on the embedding similarity to reduce the search space. Then, CLEAR ranks the candidate Stack Overflow posts and recommends the APIs from the top-ranked Stack Overflow posts. Footnote 2: [https://huggingface.co/distilroberta-base](https://huggingface.co/distilroberta-base) ### Relatedness Prediction The notion of a Knowledge Unit (KU) is defined as a set containing a question along with all its answers (Zhu et al., 2018; Wang et al., 2018). To find a comprehensive technical solution for a given problem, developers usually need to summarize the information from multiple related KUs. However, searching for related KUs can be time-consuming as the same question can be rephrased in many different ways. Thus, researchers have proposed several techniques to automate the process of identifying the related KUs (Zhu et al., 2018; Wang et al., 2018; Wang et al., 2018), which could significantly improve the efficiency of the software development cycle. #### 3.3.1. Task Formulation The task is commonly formulated as a multi-class classification problem (Zhu et al., 2018; Wang et al., 2018; Wang et al., 2018). The relatedness between questions is classified into four classes, from the most relevant to irrelevant, which are: * _Duplicate_: The two KUs correspond to a pair of semantically equivalent questions. The answer of one KU can also be used to answer the other KU. * _Direct_: One KU is beneficial in answering the question in another KU, for example, by explaining certain concepts and giving examples.
* _Indirect_: One KU provides relevant information but does not directly answer the questions of another KU. * _Isolated_: The two KUs are semantically uncorrelated. #### 3.3.2. State-of-the-art technique Recently, Pei et al. introduced ASIM (Pei et al., 2018), which yielded state-of-the-art performance in the relatedness prediction task. Pei et al. pre-trained word embeddings specialized to model Stack Overflow posts with a corpus collected from the Stack Overflow data dump. Then ASIM uses BiLSTM (He et al., 2018) to extract features from Stack Overflow posts and implements the attention mechanism to capture the semantic interaction among the KUs. ## 4. Research Questions and Experimental Settings In this section, we first introduce our research questions and then describe the corresponding experiment settings. ### Research Questions RQ1. How effective are the existing Stack Overflow post representation models?Various methods have been proposed in modeling Stack Overflow posts. However, there is still a lack of analysis of the existing Stack Overflow-specific representation methods. For instance, Xu et al. (Xu et al., 2018) have demonstrated that Post2Vec is effective in boosting the performance of traditional machine learning algorithms, i.e., support vector machine (SVM) and Random Forest. However, the efficacy of Post2Vec in facilitating deep learning-based models has not yet been investigated. Moreover, Tabassum et al. (Tabassum et al., 2018) only leveraged the embeddings from BERTOverflow in the software-related NER task, but not for other popular Stack Overflow-related tasks. In light of this research gap, we aim to evaluate the current Stack Overflow-specific representation methods for popular Stack Overflow-related tasks under the same setting for this research question. _RQ2. How effective are the popular BERT-based language models for the targeted downstream tasks?_ In addition to the existing Stack Overflow representation models, we explore the effectiveness of a wider spectrum of representation models. BERT-based language models have shown great performance and generalizability in representation learning. Representations generated by such models have demonstrated promising performance in a broad range of tasks with datasets of varying sizes and origins. Borrowing the best-performing representation models from various domains and investigating their performance can derive interesting results, as recent literature (Zhou et al., 2018; Zhang et al., 2019) have revealed that they are potentially great candidates for representing posts as well. This motivates us to employ RoBERTa (Krizhevsky et al., 2015) and Longformer (Long et al., 2015) from the general domain and CodeBERT (Chen et al., 2017), GraphCodeBERT (Chen et al., 2017), and seBERT (Wang et al., 2018) from the SE domain. We set up the exact same experimental settings for each model. _RQ3. Is further pre-training on Stack Overflow data helpful in building a better model?_ Further pre-trained models with domain-specific corpus have been common practice in the NLP domain, however, their effectiveness is not verified for representing Stack Overflow posts. In this RQ, we introduce SOBERT, which is obtained by continuing the pre-training process on CodeBERT with Stack Overflow data, and we aim to investigate whether further pre-training with Stack Overflow data improves the performance. ### Experimental Settings #### 4.2.1. Tag Recommendation _Dataset_. The dataset used by He et al. 
(He et al., 2017) in the training of PTM4Tag only includes the Stack Overflow posts dated before September 5, 2018. To address this limitation, we use the Stack Overflow data dump released in August of 2022 to construct a new dataset for our experiment. Ideally, a tag recommendation approach should only learn from high-quality questions. Therefore, we remove the low-quality questions when constructing the dataset. According to the classification of question quality defined by Ponzanelli et al. (Ponzanelli et al., 2020), we first filter out the questions that do not have an accepted answer and further remove the questions with a score of less than 10. Moreover, we remove the rare tags and rare posts. Previous literature in tag recommendation (He et al., 2017; Tabassum et al., 2018; Tabassum et al., 2018) has defined a tag as rare if it occurs less than 50 times within the dataset, and a post is considered rare if all of its tags are rare tags. The usage of rare tags is discouraged since it implies that developers are largely unaware of the tag. We follow the same definition as the previous literature and set the frequency threshold for rare tags as 50. In the end, we obtain a dataset of 527,717 posts and 3,207 tags. We split the dataset into a training set, a validation set, and a test set according to the 8:1:1 ratio, which corresponds to 422,173, 52,772, and 52,772 posts, respectively. _Evaluation Metrics_. We report the performance for this task using Precision@k, Recall@k, and F1-score@k, where k indicates the top-k recommendations. Such metrics are extensively used in previous works (He et al., 2017; Tabassum et al., 2018; Zhang et al., 2019; Zhang et al., 2019), and we calculate the average score for each of them. Mathematically speaking, the evaluation metrics are computed as follows: \[Precision@k=\frac{|\text{Tag}_{\text{True}}\cap\text{Tag}_{\text{Predict}}|}{k}\] \[Recall@k=\begin{cases}\frac{|\text{Tag}_{\text{True}}\cap\text{Tag}_{\text{Predict}}|}{k}&\text{if }|\text{Tag}_{\text{True}}|>k\\ \frac{|\text{Tag}_{\text{True}}\cap\text{Tag}_{\text{Predict}}|}{|\text{Tag}_{\text{True}}|}&\text{if }|\text{Tag}_{\text{True}}|\leq k\end{cases}\] \[F1\text{-}score@k=2\times\frac{Precision@k\times Recall@k}{Precision@k+Recall@k}\] In the above formulas, \(\text{Tag}_{\text{True}}\) refers to the ground truth tags and \(\text{Tag}_{\text{Predict}}\) refers to the predicted tags. Notice that the above formula of Recall@k is defined piecewise because the conventional Recall@k naturally disfavors small k. The revisited Recall@k has been widely adopted in previous experiments of tag recommendation [11, 43, 48]. Since Stack Overflow posts cannot have more than 5 tags, we report the results by setting k to 1, 3, and 5. _Implementation Details_. For Longformer, we set the maximum accepted input sequence as 1,024, and for other BERT-based language models (i.e., RoBERTa, CodeBERT, BERTOverflow, and SOBERT) the maximum input sequence is set as 512. We set the learning rate as 5e-5, batch size as 512, epoch number as 30, and use the Adam optimizer to update the parameters. We save the model at the end of each epoch and select the model with the smallest validation loss to run the evaluation. #### 4.2.2. **API Recommendation** _Dataset_. We reuse the BIKER dataset leveraged by Wei et al. [41]. The training dataset contains 33K questions with corresponding relevant APIs in the accepted answers.
The test dataset contains 413 manually labeled questions from Stack Overflow, each of which asks for an API to solve a programming problem; the ground-truth APIs for these questions are labeled based on their accepted answers. The dataset is constructed by selecting posts satisfying three criteria: (1) the question has a positive score, (2) at least one answer to the question contains API entities, and (3) the answer has a positive score. _Evaluation Metrics_. We use the same evaluation metrics as previous literature [13, 41] for the API recommendation task. The metrics are: Mean Reciprocal Rank (MRR), Mean Average Precision (MAP), Precision@k, and Recall@k. Different from tag recommendation, the API recommendation task is not a multi-label classification task, and the Recall@k metric used in this task follows the conventional definition: \[Recall@k=\frac{|\text{API}_{\text{True}}\cap\text{API}_{\text{Predict}}|}{|\text{API}_{\text{True}}|}\] To be consistent with Wei et al. [41], we use \(k\in\{1,3,5\}\). _Implementation Details_. CLEAR shows state-of-the-art performance in the API recommendation task by leveraging BERT sentence embeddings and contrastive learning. The original architecture of CLEAR is implemented based on DistilRoBERTa 3 during the training process. In this study, we also explore the effectiveness of other representation methods by replacing the DistilRoBERTa embedding in CLEAR. For Post2Vec, we concatenate the post representation from Post2Vec to the original implementation of CLEAR. For this task, we set the batch size as 256 and the epoch number as 30. As described in Section 4.2.1, we select the model with the smallest validation loss to run the test set. #### 4.2.3. **Relatedness Prediction** _Dataset_. The experiments are conducted based on the KU dataset provided by Shirani et al. (Shirani et al., 2019). This dataset contains 347,372 pairs of KUs. To ensure a fair comparison with the prior work (Shirani et al., 2019), we use the same training, validation, and test splits, containing 208,423 training pairs, 104,211 test pairs, and the remaining pairs for validation. _Evaluation Metrics_. Following prior work (Shirani et al., 2019), we adopt the micro-averaging method to calculate Micro-precision, Micro-recall, and Micro-F1 as evaluation metrics. _Implementation Details_. We concatenate a pair of posts as the input to train a multi-class classifier. We fine-tune Longformer with a sequence length of 1,024 and fine-tune the other pre-trained models with a sequence length of 512. For all experiments, we set the batch size as 32 and the epoch number as 5. We select the model with the smallest validation loss to run the evaluation. ## 5. Experimental Results This section describes the experiment results and answers our research questions. The experimental results are summarized in Tables 1, 2, and 3, respectively. **RQ1: How effective are the existing Stack Overflow post representation models?** The experimental results for the tag recommendation task are summarized in Table 1. PTM4Tag originally achieves a performance of 0.417, 0.805, and 0.526 in terms of Precision@5, Recall@5, and F1-score@5. However, the extra inclusion of Post2Vec lowers the performance to 0.416, 0.804, and 0.525, respectively. BERTOverflow struggles in the task with scores of 0.083, 0.163, and 0.105. For API recommendation (Table 2), combining Post2Vec with the state-of-the-art approach CLEAR also fails to boost the performance.
CLEAR itself obtains scores of 0.739 and 0.753 in MRR and MAP, while the performance drops to 0.735 and 0.745 when Post2Vec is added. BERTOverflow obtains 0.753 and 0.778. In the relatedness prediction task (Table 3), the integration of Post2Vec with ASIM slightly lowers the F1-score from 0.785 to 0.768. BERTOverflow fails to beat ASIM, with an F1-score of 0.697. Overall, Post2Vec cannot improve the performance of the state-of-the-art solutions in our downstream tasks. BERTOverflow performs poorly in the classification tasks and only achieves performance comparable to the state-of-the-art solution in API recommendation. **Answer to RQ1**: The existing Stack Overflow representation methods fail to improve state-of-the-art performance on the three targeted downstream tasks. **RQ2: How effective are the popular BERT-based language models for the targeted downstream tasks?** For tag recommendation (Table 1), the F1-score@5 of the state-of-the-art approach PTM4Tag is 0.526. CodeBERT and RoBERTa both achieve a higher F1-score@5 of 0.527. For Longformer and GraphCodeBERT, the F1-score@5 is 0.502 and 0.517, respectively. Like BERTOverflow, seBERT struggles in this task with an F1-score@5 of 0.105. Table 2 shows that CLEAR is no longer the best-performing method in API recommendation. Replacing the DistilRoBERTa embedding in the original design of CLEAR with other BERT-based language models increases the performance. In terms of MRR and MAP, seBERT scores 0.754 and 0.777, while CodeBERT, RoBERTa, GraphCodeBERT, and Longformer all achieve at least 0.767 and 0.782. In particular, GraphCodeBERT boosts the performance of CLEAR by 3.8% and 5.0% in terms of MRR and MAP. For Precision@1,3,5 and Recall@1,3,5, GraphCodeBERT outperforms CLEAR by 6.7% to 22.0%.
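To make the numbers behind Table 1 easier to follow, the snippet below sketches how Precision@k, the revisited Recall@k, and F1-score@k from Section 4.2.1 are computed for a single post (averaging over all test posts gives the reported scores). It is only an illustration with made-up tags, not the evaluation code from the replication package.

```python
# Illustrative sketch of the top-k metrics from Section 4.2.1
# (placeholder data, not the evaluation script from the replication package).
from typing import List, Set

def precision_at_k(true_tags: Set[str], ranked_predictions: List[str], k: int) -> float:
    # Fraction of the top-k predicted tags that are correct.
    return len(true_tags & set(ranked_predictions[:k])) / k

def recall_at_k(true_tags: Set[str], ranked_predictions: List[str], k: int) -> float:
    # Revisited recall: normalize by k when the post has more than k true tags,
    # so that small k is not unfairly penalized.
    hits = len(true_tags & set(ranked_predictions[:k]))
    return hits / k if len(true_tags) > k else hits / len(true_tags)

def f1_at_k(true_tags: Set[str], ranked_predictions: List[str], k: int) -> float:
    p = precision_at_k(true_tags, ranked_predictions, k)
    r = recall_at_k(true_tags, ranked_predictions, k)
    return 2 * p * r / (p + r) if p + r else 0.0

# Toy post with ground-truth tags {python, pandas} and five ranked predictions.
print(f1_at_k({"python", "pandas"}, ["python", "numpy", "pandas", "csv", "java"], k=5))
```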
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline **Group** & **Representation** & **P@1** & **R@1** & **F1@1** & **P@3** & **R@3** & **F1@3** & **P@5** & **R@5** & **F1@5** \\ \hline \multirow{2}{*}{**SO-Specific**} & **PTM4Tag** & 0.875 & 0.875 & 0.875 & 0.586 & 0.756 & 0.641 & 0.417 & 0.805 & 0.526 \\ \cline{2-11} & **Post2Vec** & 0.875 & 0.875 & 0.875 & 0.585 & 0.754 & 0.639 & 0.416 & 0.804 & 0.525 \\ \cline{2-11} & **BERTOverflow** & 0.088 & 0.088 & 0.088 & 0.089 & 0.094 & 0.095 & 0.083 & 0.163 & 0.105 \\ \hline \multirow{2}{*}{**General**} & **RoBERTa** & 0.878 & 0.878 & 0.878 & 0.591 & 0.761 & 0.646 & 0.418 & 0.804 & 0.527 \\ \cline{2-11} & **Longformer** & 0.852 & 0.852 & 0.852 & 0.559 & 0.721 & 0.612 & 0.397 & 0.769 & 0.502 \\ \cline{2-11} & **CodeBERT** & 0.876 & 0.876 & 0.876 & 0.588 & 0.758 & 0.642 & 0.418 & 0.805 & 0.527 \\ \cline{2-11} & **GraphCodeBERT** & 0.874 & 0.875 & 0.875 & 0.582 & 0.751 & 0.636 & 0.410 & 0.791 & 0.517 \\ \cline{2-11} & **seBERT** & 0.088 & 0.088 & 0.089 & 0.094 & 0.095 & 0.083 & 0.163 & 0.105 \\ \hline **Our Model** & **SOBERT** & **0.896** & **0.896** & **0.896** & **0.610** & **0.784** & **0.666** & **0.431(+3.1\%)** & **0.830(+3.1\%)** & **0.544(+3.2\%)** \\ \hline \end{tabular} \end{table} Table 1: Experiment Results for Tag Recommendation Task \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline **Group** & **Representation** & **MRR** & **MAP** & **P@1** & **P@3** & **P@5** & **R@1** & **R@3** & **R@5** \\ \hline \multirow{2}{*}{**SOTA**} & **CLEAR** & 0.739 & 0.753 & 0.482 & 0.560 & 0.562 & 0.629 & 0.766 & 0.793 \\ \cline{2-11} & **Post2Vec** & 0.735 & 0.745 & 0.471 & 0.560 & 0.556 & 0.625 & 0.774 & 0.801 \\ \cline{2-11} & **BERTOverflow** & 0.753 & 0.778 & 0.521 & 0.639 & 0.651 & 0.681 & 0.774 & 0.762 \\ \hline \multirow{2}{*}{**General domain**} & **RoBERTa** & 0.777 & 0.790 & 0.537 & 0.640 & 0.653 & 0.689 & 0.782 & 0.815 \\ \cline{2-11} & **Longformer** & 0.767 & 0.782 & 0.525 & 0.623 & 0.646 & 0.683 & 0.772 & 0.793 \\ \hline \multirow{2}{*}{**SE domain**} & **CodeBERT** & 0.781 & 0.800 & 0.564 & 0.641 & 0.659 & 0.712 & 0.772 & 0.793 \\ \cline{2-11} & **GraphCodeBERT** & 0.784 & 0.804 & 0.537 & 0.652 & 0.663 & 0.693 & 0.803 & 0.829 \\ \cline{2-11} & **seBERT** & 0.754 & 0.777 & 0.525 & 0.624 & 0.635 & 0.678 & 0.749 & 0.772 \\ \hline **Our Model** & **SOBERT** & **0.809(+3.2\%)** & **0.827(+2.9\%)** & **0.571** & **0.681** & **0.687** & **0.728** & **0.824** & **0.849** \\ \hline \end{tabular} \end{table} Table 2: Experimental Results for API Recommendation Task \begin{table} \begin{tabular}{c|c|c|c|c} \hline **Group** & **Representation** & **F1-Score** & **Precision** & **Recall** \\ \hline **SOTA** & **ASIM** & 0.785 & 0.785 & 0.785 \\ \hline \multirow{2}{*}{**SO-Specific**} & **Post2Vec** & 0.768 & 0.768 & 0.768 \\ \cline{2-5} & **BERTOverflow** & 0.697 & 0.697 & 0.697 \\ \hline \multirow{2}{*}{**General Domain**} & **RoBERTa** & 0.787 & 0.787 & 0.787 \\ \cline{2-5} & **Longformer** & 0.786 & 0.786 & 0.786 \\ \hline \multirow{2}{*}{**SE domain**} & **CodeBERT** & 0.803 & 0.803 & 0.803 \\ \cline{2-5} & **GraphCodeBERT** & 0.801 & 0.801 & 0.801 \\ \cline{2-5} & **seBERT** & 0.799 & 0.799 & 0.799 \\ \hline **Our Model** & **SOBERT** & **0.824(+2.6\%)** & **0.824(+2.6\%)** & **0.824(+2.6\%)** \\ \hline \end{tabular} \end{table} Table 3: Experiment Result for Relatedness Prediction Task From Table 3, we observe that ASIM, the state-of-the-art technique in relatedness prediction, is outperformed by other BERT-based language models. 
While ASIM achieves an F1-score of 0.785, CodeBERT drives forward the state-of-the-art performance by 2.3% with an F1-score of 0.803. Moreover, RoBERTa, GraphCodeBERT, Longformer, and seBERT achieve F1-scores of 0.787, 0.801, 0.786, and 0.799, respectively. Overall, CodeBERT consistently gives promising representations in all three tasks, demonstrating its generalizability and effectiveness in a wide range of SE-related tasks. **Answer to RQ2**: Representations generated by CodeBERT and RoBERTa consistently outperform each state-of-the-art technique in the targeted downstream tasks. However, none of the models is always the best performer. Overall, CodeBERT is the most promising representation model. **RQ3: Is further pre-training on Stack Overflow data helpful in building a better model?** Our experimental results show that there is no "one-size-fits-all" model for representing Stack Overflow posts that consistently outperforms the others in the considered tasks. This suggests that there is room for improvement in Stack Overflow post representation. Based on this intuition and the common practice that a second phase of in-domain pre-training leads to performance gains [10], we conduct additional pre-training for a BERT-based model (i.e., CodeBERT) with the Stack Overflow dataset. We name it SOBERT. **Pre-training Details** We leverage the Stack Overflow dump dated August 2022 (which includes posts from July 2008 to August 2022) and select 22 million question posts as the training corpus. The raw dataset has a size of approximately 67 GB. Many previous works have removed the code snippets of a Stack Overflow post during the pre-processing stage [19, 48]. According to the statistics conducted by Xu et al. [43], more than 70% of Stack Overflow posts contain at least one code snippet. As a result, removing the code snippets would lose a significant amount of information, so they should be taken into account when learning an effective post representation. As the code snippets within the body of a post are enclosed in the HTML tags <pre><code> and </code></pre>, we clean the redundant HTML tags with the regular expression <pre><code>([\s\S]*?)</code></pre>. We initialize SOBERT from the checkpoint of the CodeBERT model and pre-train SOBERT using the MLM objective with a standard masking rate of 15%. The batch size is set as 256, and the learning rate is 1e-4. The training process takes 100 hours to complete on eight Nvidia V100 GPUs with 16 GB of memory each. The detailed code is included in the provided replication package. The experimental results show that SOBERT achieves the best performance for every downstream task. For tag recommendation, SOBERT achieves an F1-score@5 of 0.544 and beats CodeBERT and RoBERTa by 3.2%; for API recommendation, SOBERT achieves an MRR of 0.809 and outperforms GraphCodeBERT by 3.2%; and for relatedness prediction, it achieves an F1-score of 0.824 and outperforms CodeBERT by 2.6%. We conduct the Wilcoxon signed-rank test at the 95% confidence level (i.e., p-value \(<\) 0.05) on the paired data corresponding to SOBERT and the best-performing representation model in each task (i.e., CodeBERT and RoBERTa in tag recommendation, GraphCodeBERT in API recommendation, and CodeBERT in relatedness prediction). The significance test is conducted on the values of the evaluation metrics.
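For illustration, such a paired test can be run with scipy; the values below are placeholders rather than the actual metric values from our experiments.

```python
# Minimal illustration of the Wilcoxon signed-rank test described above
# (placeholder numbers; in the study the test is applied to the paired
# evaluation metric values of SOBERT and the best-performing baseline).
from scipy.stats import wilcoxon

sobert_scores   = [0.83, 0.54, 0.81, 0.90, 0.67, 0.43]
baseline_scores = [0.80, 0.53, 0.78, 0.88, 0.64, 0.42]

stat, p_value = wilcoxon(sobert_scores, baseline_scores)
print(f"W={stat:.3f}, p={p_value:.4f}")  # significant at the 95% level if p < 0.05
```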
We observe that SOBERT significantly outperforms the comparing model. **Answer to RQ3**: Further pre-training of CodeBERT with the Stack Overflow data improves the original performance and consistently outperforms state-of-the-art performance in all the targeted downstream tasks. ## 6. Discussion ### Lessons Learned _Lesson #1. Incorporating post embeddings from an external approach does not boost the performance of neural network models._ Xu et al. (2018) demonstrated that appending the distributed post representation learned by Post2Vec to the manually crafted feature vector can increase the performance of traditional machine learning algorithms, for example, Support Vector Machine (Wang et al., 2017) and Random Forest (Bordes and Komodel, 2017), in a set of Stack Overflow-related tasks. However, these benefits are not observed for the state-of-the-art techniques that are based on deep neural networks. This is potentially caused by the design of neural networks that automatically extract feature vectors and continuously optimize the representations. It indicates that deep neural networks may lose the effectiveness of external embeddings while optimizing the parameters of the feature extractor. _Lesson #2. Models with broader background knowledge derive better results than those with specific knowledge._ Textual artifacts from different domains follow dissimilar word distributions. BERTOverflow is expected to produce the desired Stack Overflow post representation as it is specifically designed for Stack Overflow data. A major difference between the models for post representation and others is the vocabulary. As BERTOverflow is pre-trained from scratch with the Stack Overflow data, its vocabulary should be more suitable than general domain models. Notice that since CodeBERT, GraphCodeBERT, and Longformer are initialized on the checkpoint of RoBERTa, these models inherit the same vocabulary as RoBERTa. Table 4 presents five examples of the tokenization result of BERTOverflow and RoBERTa. "MongoDB" is separated into three sub-words ("M", "ongo", and "DB") by RoBERTa, but BERTOverflow is capable of representing as a whole word. It confirms our hypothesis that BERTOverflow has a more suitable vocabulary for representing the SE domain technical terms. Surprisingly, our experiment results show that other BERT-based language models outperform BERTOverflow by a substantial margin across all three tasks. It gives an extremely poor performance in the tag recommendation task. By inspecting the prediction results of BERTOverflow in the tag prediction task, we notice that the top-5 predictions made by BERTOverflow are always the most frequent tags ('python', 'java', 'c#', 'java-script', and 'android') from the dataset. \begin{table} \begin{tabular}{c|c c} \hline \hline \multirow{2}{*}{**Word**} & **RoBERTa** & **BERTOverflow** \\ & **Tokenization** & **Tokenization** \\ \hline jvm & ‘j’, ’vm’ & ‘jvm’ \\ overflow & ‘over’, ‘flow’ & ‘overflow’ \\ jQuery & ‘j’, ‘query’ & ‘jquery’ \\ MongoDB & ‘M’, ‘ongo’, ‘DB’ & ‘MongoDB’ \\ PostgreSQL & ‘Post’, ‘greSQL’ & ‘PostgreSQL’ \\ \hline \hline \end{tabular} \end{table} Table 4. Examples of the tokenization of software-related technical words by CodeBERT and BERTOverflow We observe seBERT has similar performance as BERTOverflow in the tag recommendation task. We perceive that it is potentially because these models lack a sufficient amount of pre-training to perform well. 
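To make the continued pre-training recipe behind SOBERT (and the BERTOverflow variant discussed next) concrete, the sketch below outlines one way to set it up with the HuggingFace transformers and datasets libraries. It is a simplified illustration rather than the exact script from the replication package: the checkpoint name microsoft/codebert-base and the input file name are assumptions on our part, while the 15% masking rate and the 1e-4 learning rate follow the values reported above (a per-device batch of 32 across eight GPUs gives the global batch size of 256).

```python
# Sketch of continued MLM pre-training of CodeBERT on Stack Overflow text.
# Checkpoint and file names are placeholders; hyper-parameters follow Section 5.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForMaskedLM.from_pretrained("microsoft/codebert-base")

# One post per line, with <pre><code> markup already stripped.
posts = load_dataset("text", data_files={"train": "stackoverflow_posts.txt"})["train"]
tokenized = posts.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

# Standard MLM objective with a 15% masking rate.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sobert", per_device_train_batch_size=32,
                           learning_rate=1e-4, num_train_epochs=1, fp16=True),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```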
Because seBERT and BERTOverflow are trained from scratch, they require much more pre-training effort than models that continue pre-training from existing checkpoints. To test this conjecture, we performed additional pre-training on BERTOverflow with the same pre-training corpus as SOBERT. The further training used the same hyper-parameters as SOBERT, and it took 23 hours to finish on four Nvidia V100 GPUs with 16 GB of memory each. We denote this new model as BERTOverflow\({}_{\text{NEW}}\), and we observe notable performance improvements compared to the original BERTOverflow. The results are reported in Table 5.4 Footnote 4: Please note that we could not apply further pre-training to seBERT due to the constraints of limited resources to handle the BERT\({}_{\text{Large}}\) architecture. Overall, our experiments have shown that, in all three tasks, vocabulary has a limited effect, and the more important factor tends to be the scale of pre-training. Also, pre-training from scratch is commonly considered an expensive process; initializing new representation models from the checkpoint of an existing strong model lowers the risk, and the tag recommendation task is a good indicator of the generalizability and the sufficiency of pre-training of a pre-trained model. _Lesson #3. Despite considering a longer input length, Longformer does not produce better representations for posts._ Conventional BERT-based models like CodeBERT and RoBERTa are unable to handle long sequences due to the quadratic complexity of the self-attention mechanism [37] and accept a maximum of 512 sub-tokens as input. However, more than 50% of Stack Overflow posts are longer than this limit [11]. Truncation is normally employed to deal with this limitation; however, applying truncation increases the risk of losing information. This motivates us to investigate Longformer, as it is designed to handle long inputs with a maximum size of 4,096 tokens. As all of our evaluations demonstrate, Longformer fails to perform better than the other general-domain model (i.e., RoBERTa) as well as the models from the SE domain, even though it takes more time and resources to train. We further compare the performance of Longformer by varying the input size, considering the first 512 and the first 1,024 tokens. The additional experimental results are shown in Table 5. The two settings do not differ in performance, which indicates that varying the input size does not affect Longformer's performance on post representation. A potential interpretation is that the important features for representing Stack Overflow posts lie in the first part of each post (e.g., the title serves as a succinct summary of the post). It is not worth trying Longformer unless one strictly needs the entire content of Stack Overflow posts. _Lesson #4. We advocate future studies related to Stack Overflow to consider SOBERT as the underlying baseline._
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline & \multicolumn{2}{c|}{**Tag Recommendation**} & \multicolumn{2}{c|}{**API Recommendation**} & \multicolumn{2}{c}{**Relatedness Prediction**} \\ \hline & **P@5** & **R@5** & **F1@5** & **MRR** & **MAP** & **P** & **R** & **F1** \\ \hline **BERTOverflow** & 0.083 & 0.163 & 0.105 & 0.753 & 0.778 & 0.697 & 0.697 & 0.697 \\ \hline **BERTOverflow-New** & 0.410 & 0.790 & 0.518 & 0.769 & 0.784 & 0.789 & 0.789 & 0.789 \\ \hline **Longformer-512** & 0.397 & 0.768 & 0.502 & 0.768 & 0.783 & 0.785 & 0.785 & 0.785 \\ \hline **Longformer-1024** & 0.397 & 0.769 & 0.502 & 0.767 & 0.782 & 0.786 & 0.786 & 0.786 \\ \hline \end{tabular} \end{table} Table 5: Results for variants of BERTOverflow and Longformer Our experiment results demonstrate that further pre-training based on in-domain data leads to better Stack Overflow post representation. By initializing SOBERT with the CodeBERT checkpoint and performing further pre-training on Stack Overflow data, we have noticed that SOBERT consistently outperforms the original CodeBERT and produces new state-of-the-art performance for all three tasks. In Table 6, we present three examples of the prediction results of CodeBERT and SOBERT for the tag recommendation task. We observe that CodeBERT is making wrong predictions like ".net" and "c#" when the question is about "haskell" while SOBERT is capable of making the correct predictions. CodeBERT may lack knowledge of programming languages like Haskell and Lua since it is pre-trained on artifacts from Python, Java, JavaScript, PHP, Ruby and Go. Taking the Stack Overflow post with ID 13202867 as another example, the question is about Flexslider, a jQuery slider plugin. In the given example, SOBERT could successfully make connections to tags like 'jQuery' and 'css' while CodeBERT struggles to give meaningful predictions. Overall, by continuing the pre-training process on Stack Overflow data, SOBERT outperforms CodeBERT in three popular Stack Overflow-related tasks. Furthermore, Figure 1 shows the learning curve of different representation models in the tag recommendation task (i.e., evaluated on the test data) by varying the number of epochs. SOBERT not only achieves state-of-the-art effectiveness, but it also converges faster than the other models. The same pattern is observed in the API recommendation and the relatedness prediction tasks. In practice, a model with a faster convergence speed is preferred as the fine-tuning stage would require less amount of resources, and it implies that the model provides a good initialization for the learning tasks. We advocate future studies to consider SOBERT as their underlying baseline. To facilitate the usage of the enhanced CodeBERT model proposed in this work, we plan to release it to HuggingFace5 so that it can be used by simply calling the interface. Footnote 5: [https://huggingface.co/](https://huggingface.co/) ### Threats to Validity **Threats to internal validity.** To ensure the correct implementation of the baseline methods (i.e., Post2Vec, PTM4Tag, CLEAR, and ASIM), we reused the replication package released by the original authors.6,7,8,9 When investigating the effectiveness of various pre-trained models, we used the implementation of each from the popular open-source community _HuggingFace_. 
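In the same spirit, the vocabulary differences summarized in Table 4 can be spot-checked directly with the HuggingFace tokenizers. The sketch below assumes the publicly released checkpoint names (roberta-base and jeniya/BERTOverflow), which may differ from the exact snapshots used in our experiments.

```python
# Spot-check of the Table 4 tokenization comparison.
# Checkpoint names are assumptions about the public releases.
from transformers import AutoTokenizer

roberta = AutoTokenizer.from_pretrained("roberta-base")
bertoverflow = AutoTokenizer.from_pretrained("jeniya/BERTOverflow")

for word in ["jvm", "overflow", "jQuery", "MongoDB", "PostgreSQL"]:
    print(f"{word:12s} RoBERTa={roberta.tokenize(word)}  BERTOverflow={bertoverflow.tokenize(word)}")
```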
\begin{table} \begin{tabular}{c|c c c c} \hline \hline **Post** & **Post** & **CodeBERT** & **SOBERT** & **True Tag** \\ \hline **ID** & **Title** & **Tag Prediction** & **Tag Prediction** & **Towag** \\ \hline **13202867** & Fixed size of Flexslider & ‘apache-flex’, frameworks’, ‘ios’, ‘swift’, ‘xcode’ & ‘carousel’, ‘css’, ‘html’, ‘javascript’, ‘javascript’, ‘jquery’ & ‘css’, ‘html’, ‘javascript’ \\ & What is the right way to & What is the right way to & What is the right way to & What is the right way to & What is the right way to \\ **30434343** & bycheck dependent & ‘.net’, ‘binding’, ‘c’, ‘functional-programming’, ‘haskell’, & ‘haskell’ \\ & lambda abstraction & ‘lambda’, ‘type-inference’ & ‘lambda’, ‘type-inference’, ‘types’ & ‘haskell’ \\ & using ‘bound’ & & ‘.net’, ‘c++’, ‘d’, ‘ghc’, ‘haskell’, ‘type-systems’, ‘haskell’, ‘static-analysis’, & ‘für’, ‘performance’ & ‘typeclass’, ‘types’ & ‘typeclass’, ‘types’ \\ \hline \hline \end{tabular} \end{table} Table 6. Examples of predictions made by CodeBERT and SOBERT in the tag recommendation task **Threats to external validity.** One threat to external validity relates our results may not generalize to those newly emerging topics or other Stack Overflow-related downstream tasks. We have minimized this threat by considering multiple downstream tasks. **Threats to construct validity.** We reuse the same evaluation metrics in our baseline methods (Kumar et al., 2018; Zhang et al., 2018; Zhang et al., 2018). To further reduce the risk, we conduct the Wilcoxon signed-rank statistical hypothesis test to check whether the output between the two competing approaches is significant. ## 7. Related Work In this section, we review two lines of research that most relate to our work: pre-trained models for SE and mining Stack Overflow posts. ### Pre-trained Models for Software Engineering Inspired by the success of pre-trained models achieved in the field of artificial intelligence, there is emerging research interest in exploring pre-training tasks and applying pre-trained models in SE (Kumar et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018). One set of research focuses on learning semantic and contextual representations of source code; after pre-training, these models can be fine-tuned to solve SE downstream tasks. Note that the three Stack Overflow tasks considered in our work are understanding tasks. Thus, we focus on the encoder-based models (i.e., the Transformer Encoder component). In the literature, there are other types of PTMs that can be used for generation tasks. They are based on the Transformer decoder component (e.g., CodeGPT (Zhang et al., 2018)) or encoder-decoder architecture (e.g., CodeT5 (Zhang et al., 2018)). In this part, we review one model from each category. ContraCode (Kumar et al., 2018) is another encoder-based model, which adopts a contrastive pre-training task to learn code functionality. They organize programs into positive pairs (i.e., functionally similar) and negative pairs (i.e., functionally dissimilar). During Figure 1. Performance of pre-trained models on tag prediction during training contrastive pre-training, query programs are used to retrieve positive programs. Positive programs are pushed together, while negative ones have been pushed away. CodeGPT (Kumar et al., 2017) pre-trains a Transformer-decoder-based language model GPT (Kumar et al., 2017) on program languages. It consists of 12 layers of Transformer decoders. 
CodeGPT has been pre-trained in Python and Java corpora from the CodeSearchNet dataset, which contains 1.1M Python functions and 1.6M Java methods. It can be used for code completion and code generation tasks (Kumar et al., 2017). Wang et al. (Wang et al., 2018) present CodeT5, a pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Similar to T5, CodeT5 pre-trains on the masking span prediction task, which randomly masks spans with arbitrary length in the source sequence and then predicts the masked spans. In addition, CodeT5 also pre-trains with two tasks to fuse code-specific structural information into the model, i.e., identifier tagging and masked identifier prediction. The other set of research focuses on fine-tuning the pre-trained models to tackle SE challenges (Kumar et al., 2017; Wang et al., 2018). Zhang et al. (Zhang et al., 2018) conduct a comparative study on PTM with prior SE-specific tools in sentiment analysis for SE. The experimental results show that PTM is more ready for real use than the prior tools. Lin et al. (Lin et al., 2019) find that BERT can boost the performance of traceability tasks in open-source projects. They investigate three BERT architectures, i.e., Single-BERT, Siamese-BERT, and Twin-BERT. The results indicate that the single-BERT can generate the most accurate links, while a Siamese-BERT architecture produced comparable effectiveness with significantly better efficiency. In this paper, we conducted a comprehensive study on multiple SOTA PTMs for mining Stack Overflow tasks. Different from these works, ours is more comprehensive and covers several common tasks other than focusing on one specific task. Except for fine-tuning PTMs, we also further pre-trained CodeBERT on Stack Overflow data. ### Mining Stack Overflow Posts We address tag recommendation (Kumar et al., 2017; Wang et al., 2018), API recommendation (Kumar et al., 2017; Wang et al., 2018), and relatedness prediction (Kumar et al., 2017; Wang et al., 2018) in this work. Others also explored other tasks for mining Stack Overflow posts to support software developers, such as post recommendation (Kumar et al., 2017), multi-answer summarization (Wang et al., 2018), and controversial discussions (Kumar et al., 2017). Rubei et al. (Kumar et al., 2017) propose an approach named PostFinder, which aims to retrieve Stack Overflow posts that are relevant to API function calls that have been invoked. They make use of Apache Lucene to index the textual content and code in Stack Overflow to improve efficiency. In both the data collection and query phase, they make use of the data available at hand to optimize the search process. Specifically, they retrieve and augment posts with additional data to make them more exposed to queries. Besides, they boost the context code to construct a query that contains the essential information to match the stored indexes. Xu et al. (Xu et al., 2018) investigate the multi-answer posts summarization task for a given input question, which aims to help developers get the key points of several answer posts before they dive into the details of the results. They propose an approach _AnswerBot_, which contains three main steps, i.e., relevant question retrieval, useful answer paragraph selection, and diverse answer summary generation. Ren et al. (Ren et al., 2018) investigate the controversial discussions in Stack Overflow. 
They find that there is a large number of controversial discussions in Stack Overflow, which indicates that many answers are wrong, suboptimal, or out-of-date. Their work and ours are complementary; both aim to boost automation in understanding and utilizing Stack Overflow content. ## 8. Conclusion and Future Work In this paper, we empirically study the effectiveness of various techniques for modeling Stack Overflow posts, including approaches that are specially designed for Stack Overflow posts (i.e., Post2Vec and BERTOverflow), SE domain representation models (i.e., CodeBERT, GraphCodeBERT, and seBERT), and general domain representation models (i.e., RoBERTa and Longformer). We evaluate the performance of these representation models on three popular and representative Stack Overflow-related tasks, namely tag recommendation, API recommendation, and relatedness prediction. Our experimental results show that Post2Vec is unable to enhance the representations that are automatically extracted by deep learning-based methods, and BERTOverflow performs surprisingly worse than the other BERT-based language models. Furthermore, no single representation technique consistently outperforms all the others. Our findings indicate the current research gap in representing Stack Overflow posts. Thus, we propose SOBERT with a simple-yet-effective strategy. We initialize SOBERT with the checkpoint of CodeBERT and continue the pre-training process with 22 million posts from Stack Overflow. As a result, SOBERT improves the performance of the original CodeBERT and consistently outperforms other models on all three tasks, confirming that further pre-training on Stack Overflow data is helpful for building Stack Overflow post representations. In the future, we plan to extend our research to other SQA sites, such as AskUbuntu10. Moreover, since Longformer and BERTOverflow do not generate better representations for Stack Overflow posts, exploring representation models that can handle noise in longer inputs and better exploit a Stack Overflow-specific vocabulary remains a promising direction. Footnote 10: [https://askubuntu.com/](https://askubuntu.com/) ## 9. Data Availability The replication package of the data and code used in this paper is available at **[https://figshare.com/s/7f80db836305607b89f3](https://figshare.com/s/7f80db836305607b89f3)**.
2302.09849
A step towards a general density Corrádi--Hajnal Theorem
For a nondegenerate $r$-graph $F$, large $n$, and $t$ in the regime $[0, c_{F} n]$, where $c_F>0$ is a constant depending only on $F$, we present a general approach for determining the maximum number of edges in an $n$-vertex $r$-graph that does not contain $t+1$ vertex-disjoint copies of $F$. In fact, our method results in a rainbow version of the above result and includes a characterization of the extremal constructions. Our approach applies to many well-studied hypergraphs (including graphs) such as the edge-critical graphs, the Fano plane, the generalized triangles, hypergraph expansions, the expanded triangles, and hypergraph books. Our results extend old results of Simonovits~\cite{SI68} and Moon~\cite{Moon68} on complete graphs and can be viewed as a step towards a general density version of the classical Corr\'{a}di--Hajnal Theorem~\cite{CH63}.
Jianfeng Hou, Heng Li, Xizhi Liu, Long-Tu Yuan, Yixiao Zhang
2023-02-20T09:12:10Z
http://arxiv.org/abs/2302.09849v2
# A step towards a general density Corradi-Hajnal Theorem ###### Abstract For a nondegenerate \(r\)-graph \(F\), large \(n\), and \(t\) in the regime \([0,c_{F}n]\), where \(c_{F}>0\) is a constant depending only on \(F\), we present a general approach for determining the maximum number of edges in an \(n\)-vertex \(r\)-graph that does not contain \(t+1\) vertex-disjoint copies of \(F\). In fact, our method results in a rainbow version of the above result and includes a characterization of the extremal constructions. Our approach applies to many well-studied hypergraphs (including graphs) such as the edge-critical graphs, the Fano plane, the generalized triangles, hypergraph expansions, the expanded triangles, and hypergraph books. Our results extend old results of Simonovits [65] and Moon [53] on complete graphs and can be viewed as a step towards a general density version of the classical Corradi-Hajnal Theorem [10]. **Keywords:** Hypergraph Turan problems, the Corradi-Hajnal Theorem, \(F\)-matching, stability, vertex-extendability. ## 1 Introduction ### Motivation Fix an integer \(r\geq 2\), an \(r\)-graph \(\mathcal{H}\) is a collection of \(r\)-subsets of some finite set \(V\). We identify a hypergraph \(\mathcal{H}\) with its edge set and use \(V(\mathcal{H})\) to denote its vertex set. The size of \(V(\mathcal{H})\) is denoted by \(v(\mathcal{H})\). Given two \(r\)-graphs \(F\) and \(\mathcal{H}\) we use \(\nu(F,\mathcal{H})\) to denote the maximum of \(k\in\mathbb{N}\) such that there exist \(k\) vertex-disjoint copies of \(F\) in \(\mathcal{H}\). We call \(\nu(F,\mathcal{H})\) the \(F\)**-matching number of \(\mathcal{H}\). If \(F=K_{r}^{r}\) (i.e. an edge), then we use \(\nu(\mathcal{H})\) to represent \(\nu(F,\mathcal{H})\) for simplicity. The number \(\nu(\mathcal{H})\) is also known as the **matching number** of \(\mathcal{H}\). The study of the following problem encompasses several central topics in Extremal Combinatorics. Given an \(r\)-graph \(F\) and integers \(n,t\in\mathbb{N}\): _What kinds of constraints on an \(n\)-vertex \(r\)-graph \(\mathcal{H}\) force it to satisfy \(\nu(F,\mathcal{H})\geq t+1\)?_ For \(r=2\) and \(F=K_{2}\), the celebrated Erdos-Gallai Theorem [14] states that for all integers \(n,\ell\in\mathbb{N}\) with \(t+1\leq n/2\) and for every \(n\)-vertex graph \(G\), \[|G|>\max\left\{\binom{2t+1}{2},\binom{n}{2}-\binom{n-t}{2}\right\}\quad \Rightarrow\quad\nu(G)\geq t+1.\] Here we use the symbol \(\Rightarrow\) to indicate that the constraint on the left side forces the conclusion on the right side. Extending the Erdos-Gallai Theorem to \(r\)-graphs for \(r\geq 3\) is a major open problem, and the following conjecture of Erdos is still open in general (see e.g. [20, 21, 22, 34] for some recent progress on this topic). **Conjecture 1.1** (Erdos [13]).: _Suppose that \(n,t,r\in\mathbb{N}\) satisfy \(r\geq 3\) and \(t+1\leq n/r\). Then for every \(n\)-vertex \(r\)-graph \(\mathcal{H}\),_ \[|\mathcal{H}|>\max\left\{\binom{r(t+1)-1}{r},\binom{n}{r}-\binom{n-t}{r}\right\} \quad\Rightarrow\quad\nu(\mathcal{H})\geq t+1.\] For general \(r\)-graphs \(F\), determining the minimum number of edges in an \(n\)-vertex \(r\)-graph \(\mathcal{H}\) that guarantees \(\nu(F,\mathcal{H})\geq 1\) is closely related to the Turan problem. For our purpose in this work, let us introduce the following notions. Fix an \(r\)-graph \(F\), we say another \(r\)-graph \(\mathcal{H}\) is \(F\)**-free** if \(\nu(F,\mathcal{H})=0\). 
In other words, \(\mathcal{H}\) does not contains \(F\) as a subgraph. The **Turan number**\(\mathrm{ex}(n,F)\) of \(F\) is the maximum number of edges in an \(F\)-free \(r\)-graph on \(n\) vertices. The **Turan density** of \(F\) is defined as \(\pi(F):=\lim_{n\to\infty}\mathrm{ex}(n,F)/\binom{n}{r}\), the existence of the limit follows from a simple averaging argument of Katona, Nemetz, and Simonovits [37] (see Proposition 3.2). An \(r\)-graph \(F\) is called **nondegenerate** if \(\pi(F)>0\). We use \(\mathrm{EX}(n,F)\) to denote the collection of all \(n\)-vertex \(F\)-free \(r\)-graphs with exactly \(\mathrm{ex}(n,F)\) edges, and call members in \(\mathrm{EX}(n,F)\) the **extremal constructions** of \(F\). The study of \(\mathrm{ex}(n,F)\) and \(\mathrm{EX}(n,F)\) is a central topic in Extremal Combinatorics. Much is known when \(r=2\), and one of the earliest results in this regard is Mantel's theorem [52], which states that \(\mathrm{ex}(n,K_{3})=\lfloor n^{2}/4\rfloor\). For every integer \(\ell\geq 2\) let \(T(n,\ell)\) denote the balanced complete \(\ell\)-partite graph on \(n\) vertices. Here, balanced means that the sizes of any two parts differ by at most one. We call \(T(n,\ell)\) the **Turan graph**, and use \(t(n,\ell)\) to denote the number of edges in \(T(n,\ell)\). The seminal Turan Theorem states that \(\mathrm{EX}(n,K_{\ell+1})=\{T(n,\ell)\}\) for all integers \(n\geq\ell\geq 2\). Later, Turan's theorem was extended to general graphs \(F\) in the celebrated Erdos-Stone-Simonovits Theorem [15, 17], which says that \(\pi(F)=\left(\chi(F)-2\right)/\left(\chi(F)-1\right)\). Here \(\chi(F)\) is the **chromatic number** of \(F\). For \(r\geq 3\), determining \(\mathrm{ex}(n,F)\) or even \(\pi(F)\) for an \(r\)-graph \(F\) is known to be notoriously hard in general. The problem of determining \(\pi(K_{\ell}^{r})\) raised by Turan [67], where \(K_{\ell}^{r}\) is the complete \(r\)-graph on \(\ell\) vertices, is still wide open for all \(\ell>r\geq 3\). Erdos offered $500 for the determination of any \(\pi(K_{\ell}^{\tau})\) with \(\ell>r\geq 3\) and \(\$1000\) for all \(\pi(K_{\ell}^{\tau})\) with \(\ell>r\geq 3\). We refer the reader to an excellent survey [38] by Keevash for related results before 2011. Another related central topic in Extremal Combinatorics is the Factor Problem. We say an \(r\)-graph \(\mathcal{H}\) has an \(F\)-**factor** if it contains a collection of vertex-disjoint copies of \(F\) that covers all vertices in \(V(\mathcal{H})\). In other words, \(\nu(F,\mathcal{H})=\frac{v(\mathcal{H})}{v(F)}\) (in particular, \(v(F)\mid v(\mathcal{H})\)). For an \(r\)-graph \(\mathcal{H}\) and a vertex \(v\in V(\mathcal{H})\) the **degree**\(d_{\mathcal{H}}(v)\) of \(v\) in \(\mathcal{H}\) is the number of edges in \(\mathcal{H}\) containing \(v\). We use \(\delta(\mathcal{H})\), \(\Delta(\mathcal{H})\), and \(d(\mathcal{H})\) to denote the **minimum degree**, the **maximum degree**, and the **average degree** of \(\mathcal{H}\), respectively. We will omit the subscript \(\mathcal{H}\) if it is clear from the context. A classical theorem of Corradi and Hajnal [10] implies the following result for \(K_{3}\). **Theorem 1.2** (Corradi-Hajnal [10]).: _Suppose that \(n,t\in\mathbb{N}\) are integers with \(t\leq n/3\). 
Then for every \(n\)-vertex graph \(G\),_ \[\delta(G)\geq t+\left\lfloor\frac{n-t}{2}\right\rfloor\quad\Rightarrow\quad \nu(K_{3},G)\geq t.\] _In particular, if \(3\mid n\), then every \(n\)-vertex graph \(G\) with \(\delta(G)\geq 2n/3\) contains a \(K_{3}\)-factor._ Later, Theorem 1.2 was extended to all complete graphs in the classical Hajnal-Szemeredi Theorem [31], which implies that for all integers \(n\geq\ell\geq 2\), \(t\leq\lfloor n/(\ell+1)\rfloor\), and for every \(n\)-vertex graph \(G\), \[\delta(G)\geq t+\left\lfloor\frac{\ell-1}{\ell}(n-t)\right\rfloor\quad \Rightarrow\quad\nu(K_{\ell+1},G)\geq t.\] For further related results, we refer the reader to a survey [44] by Kuhn and Osthus. In this work, we are interested in density constraints that force an \(r\)-graph to have large \(F\)-matching number, where \(F\) is a nondegenerate \(r\)-graph. Since our results are closely related to the Turan problem of \(F\), we abuse the use of notation by letting \(\operatorname{ex}\left(n,(t+1)F\right)\) denote the maximum number of edges in an \(n\)-vertex \(r\)-graph \(\mathcal{H}\) with \(\nu(F,\mathcal{H})<t+1\). Given two \(r\)-graphs \(\mathcal{G}\) and \(\mathcal{H}\) whose vertex sets are disjoint, we define the **join**\(\mathcal{G}\mathfrak{X}\mathcal{H}\) of \(\mathcal{G}\) and \(\mathcal{H}\) to be the \(r\)-graph obtained from \(\mathcal{G}\sqcup\mathcal{H}\) (the vertex-disjoint union of \(\mathcal{G}\) and \(\mathcal{H}\)) by adding all \(r\)-sets that have nonempty intersection with both \(V(\mathcal{G})\) and \(V(\mathcal{H})\). For simplicity, we define the join of an \(r\)-graph \(\mathcal{H}\) and a family \(\mathcal{F}\) of \(r\)-graphs as \(\mathcal{H}\mathrel{\raisebox{0.86pt}{\scalebox{0.8}{$\propto$}}}\mathcal{F} :=\{\mathcal{H}\mathrel{\raisebox{0.86pt}{\scalebox{0.8}{$\propto$}}} \mathcal{G}\colon\mathcal{G}\in\mathcal{F}\}\). Erdos [12] considered the density problem for \(K_{3}\) and proved the following result. **Theorem 1.3** (Erdos [12]).: _Suppose that \(n,t\in\mathbb{N}\) and \(t\leq\sqrt{n/400}\). Then_ \[\operatorname{EX}\left(n,(t+1)K_{3}\right)=\{K_{t}\mathrel{\raisebox{0.86pt}{ \scalebox{0.8}{$\propto$}}}T(n-t,2)\}.\] Later, Moon [53] extended it to all complete graphs. **Theorem 1.4** (Moon [53]).: _Suppose that integers \(n,t,\ell\in\mathbb{N}\) satisfy \(\ell\geq 2\), \(t\leq\frac{2n-3\ell^{2}+2\ell}{\ell^{3}+2\ell^{2}+\ell+1}\), and \(\ell\mid(n-t)\). Then_ \[\operatorname{EX}\left(n,(t+1)K_{\ell+1}\right)=\{K_{t}\mathrel{\raisebox{0.86pt} {\scalebox{0.8}{$\propto$}}}T(n-t,\ell)\}\,. \tag{1}\] It is worth mentioning that, in fact, for \(\ell=2\), Moon proved that the constraint \(\ell\mid(n-t)\) can be removed, and moreover, (1) holds for all \(t\leq\frac{2n-8}{9}\). For \(\ell\geq 3\), Moon remarked in [53] that there are some difficulties to remove the constraint \(\ell\mid(n-t)\). Nevertheless, the divisibility constraint is not required in our results. Meanwhile, Simonovits [65] also considered this problem and proved that if \(t\geq 1\) and \(\ell\geq 2\) are fixed integers, then (1) holds for all sufficiently large \(n\). It becomes much more complicated when extending Theorem 1.4 to larger \(t\). Indeed, a full density version of the Corradi-Hajnal Theorem was obtained only very recently by Allen, Bottcher, Hladky, and Piguet [2] for large \(n\). 
Their results show that, interestingly, there are four different extremal constructions for four different regimes of \(t\), and the construction \(K_{t}\unltimes T(n-t,2)\) is extremal only for \(t\leq\frac{2n-6}{9}\). For the other three extremal constructions, we refer the reader to their paper for details. For larger complete graphs, it seems that there are even no conjectures for the extremal constructions in general (see remarks in the last section of [2]). The objective of this work is to provide a general approach to determine \(\operatorname{ex}(n,(t+1)F)\) for nondegenerate hypergraphs (including graphs) \(F\) when \(n\) is sufficiently large and \(t\) is within the range of \([0,c_{F}n]\), where \(c_{F}>0\) is a small constant depending only on \(F\). Our main results are stated in the next section after the introduction of some necessary definitions. We hope our results could shed some light on a full generalization of the density version of the Corradi-Hajnal Theorem. ### Main results Given an \(r\)-graph \(F\) and an integer \(n\in\mathbb{N}\) define \[\delta(n,F):=\operatorname{ex}(n,F)-\operatorname{ex}(n-1,F)\quad\text{and} \quad d(n,F):=\frac{r\cdot\operatorname{ex}(n,F)}{n}.\] Observe that \(d(n,F)\) is the average degree of hypergraphs in \(\operatorname{EX}(n,F)\), and \(\delta(n,F)\) is a lower bound for the minimum degree of hypergraphs in \(\operatorname{EX}(n,F)\) (see Fact 4.1). The following two definitions are crucial for our main results. The first definition concerns the maximum degree of a near-extremal \(F\)-free \(r\)-graph. **Definition 1.5** (Boundedness).: _Let \(f_{1},f_{2}\colon\mathbb{N}\to\mathbb{R}\) be two nonnegative functions. An \(r\)-graph \(F\) is \((f_{1},f_{2})\)-bounded if every \(F\)-free \(r\)-graph \(\mathcal{H}\) on \(n\) vertices with average degree at least \(d(n,F)-f_{1}(n)\) satisfies \(\Delta(\mathcal{H})\leq d(n,F)+f_{2}(n)\), i.e._ \[d(\mathcal{H})\geq d(n,F)-f_{1}(n)\quad\Rightarrow\quad\Delta(\mathcal{H}) \leq d(n,F)+f_{2}(n).\] Later we will prove that families with certain stability properties also have good boundedness (see Theorem 1.11). The next definition concerns the smoothness of the Turan function \(\operatorname{ex}(n,F)\). **Definition 1.6** (Smoothness).: _Let \(g\colon\mathbb{N}\to\mathbb{R}\) be a nonnegative function. The Turan function \(\operatorname{ex}(n,F)\) of an \(r\)-graph \(F\) is \(g\)-smooth if_ \[|\delta(n,F)-d(n-1,F)|\leq g(n)\quad\text{holds for all $n\in\mathbb{N}$}.\] Assumptions on the smoothness of \(\operatorname{ex}(n,F)\) were used by several researchers before. See e.g. [3, 35] for degenerate graphs and see e.g. [39, Theorem 1.4] for nondegenerate hypergraphs. Now we are ready to state our main result. **Theorem 1.7**.: _Fix integers \(m\geq r\geq 2\) and a nondegenerate \(r\)-graph \(F\) on \(m\) vertices. Suppose that there exists a constant \(c>0\) such that for all sufficiently large \(n\in\mathbb{N}\colon\)_ 1. \(F\) _is_ \(\left(c\binom{n}{r-1},\frac{1-\pi(F)}{4m}\binom{n}{r-1}\right)\)_-bounded, and_ 2. \(\operatorname{ex}(n,F)\) _is_ \(\frac{1-\pi(F)}{8m}\binom{n}{r-1}\)_-smooth._ _Then there exists \(N_{0}\) such that for all integers \(n\geq N_{0}\) and \(t\leq\min\left\{\frac{c}{4erm}n,\frac{1-\pi(F)}{64rm^{2}}n\right\}\), we have_ \[\operatorname{EX}\left(n,(t+1)F\right)=K_{t}^{r}\;\operatorname{EX}(n-t,F), \tag{2}\] _and, in particular,_ \[\operatorname{ex}\left(n,(t+1)F\right)=\binom{n}{r}-\binom{n-t}{r}+ \operatorname{ex}(n-t,F). 
\tag{3}\] **Remark.** Note that one cannot hope that (3) holds for all nondegenerate \(r\)-graphs. Indeed, if we let \(F=2K_{3}\) and let \(t\geq 2\), then \[\operatorname{ex}(n,(t+1)F)=\operatorname{ex}(n,(2t+2)K_{3}) \geq\binom{n}{2}-\binom{n-2t-1}{2}+\left\lfloor\frac{(n-2t-1)^{2 }}{4}\right\rfloor\] \[>\binom{n}{2}-\binom{n-t}{2}+\left\lfloor\frac{(n-1)^{2}}{4} \right\rfloor+n-1\] \[=\binom{n}{2}-\binom{n-t}{2}+\operatorname{ex}(n-t,F).\] Fix an \(r\)-graph \(F\) on \(m\) vertices. We say a collection \(\{\mathcal{H}_{1},\ldots,\mathcal{H}_{t+1}\}\) of \(r\)-graphs on the same vertex set \(V\) has a **rainbow \(F\)-matching** if there exists a collection \(\{S_{i}\colon i\in[t+1]\}\) of pairwise disjoint \(m\)-subsets of \(V\) such that \(F\subset\mathcal{H}_{i}[S_{i}]\) for all \(i\in[t+1]\). Recently, there has been considerable interest in extending some classical results to a rainbow version. See e.g. [1, 30, 34, 43, 50, 51] for some recent progress on the rainbow version of the Erdos Matching Conjecture. Here we include the following rainbow version of Theorem 1.7. **Theorem 1.8**.: _The following holds under the assumption of Theorem 1.7. If a collection \(\{\mathcal{H}_{1},\ldots,\mathcal{H}_{t+1}\}\) of \(n\)-vertex \(r\)-graphs on the same vertex set satisfies_ \[|\mathcal{H}_{i}|>\binom{n}{r}-\binom{n-t}{r}+\operatorname{ex}(n-t,F)\quad \text{for all $i\in[t+1]$},\] _then \(\{\mathcal{H}_{1},\ldots,\mathcal{H}_{t+1}\}\) contains a rainbow \(F\)-matching._ Observe that (3) follows immediately by letting \(\mathcal{H}_{1}=\cdots=\mathcal{H}_{t+1}\) in Theorem 1.8. In fact, we will prove Theorem 1.8 first (which yields (3)), and then we prove (2) by adding some further argument. ### Boundedness and smoothness In this subsection, we present some simple sufficient conditions for an \(r\)-graph to have good boundedness and smoothness. Before stating our results, let us introduce some necessary definitions. For many nondegenerate Turan problems the extremal constructions usually have simple structures. We use the following notions to encode the structural information of a hypergraph. Let an \(r\)**-multiset** mean an unordered collection of \(r\) elements with repetitions allowed. Let \(E\) be a collection of \(r\)-multisets on \([k]\). Let \(V_{1},\ldots,V_{k}\) be disjoint sets and let \(V:=V_{1}\cup\cdots\cup V_{k}\). The **profile** of an \(r\)-set \(X\subseteq V\) (with respect to \(V_{1},\ldots,V_{k}\)) is the \(r\)-multiset on \([k]\) that contains \(i\in[k]\) with multiplicity \(|X\cap V_{i}|\). For an \(r\)-multiset \(Y\subseteq[k]\), let \(Y(\!(V_{1},\ldots,V_{k})\!)\) consist of all \(r\)-subsets of \(V\) whose profile is \(Y\). The \(r\)-graph \(Y(\!(V_{1},\ldots,V_{k})\!)\) is called the **blowup** of \(Y\) (with respect to \(V_{1},\ldots,V_{k}\)) and the \(r\)-graph \[E(\!(V_{1},\ldots,V_{k})\!):=\bigcup_{Y\in E}Y(\!(V_{1},\ldots,V_{k})\!)\] is called the **blowup** of \(E\) (with respect to \(V_{1},\ldots,V_{k}\)). An **(\(r\)-uniform) pattern** is a pair \(P=(k,E)\) where \(k\) is a positive integer and \(E\) is a collection of \(r\)-multisets on \([k]\). It is clear that pattern is a generalization of \(r\)-graphs, since an \(r\)-graph is a pattern in which \(E\) consists of only simple \(r\)-sets. If it is clear from the context, we will use \(E\) to represent the pattern \(P\) for simplicity (like what we did for hypergraphs). Moreover, if \(E\) consists of a single element, we will use this element to represent \(E\). 
We say an \(r\)-graph \(\mathcal{G}\) is a \(P\)**-construction** on a set \(V\) if there exists a partition \(V=V_{1}\cup\cdots\cup V_{k}\) such that \(\mathcal{G}=E(\!(V_{1},\ldots,V_{k})\!)\). An \(r\)-graph \(\mathcal{H}\) is a \(P\)**-subconstruction** if it is a subgraph of some \(P\)-construction. For example, the Turan graph \(T(n,\ell)\) is a \(K_{\ell}\)-constrction on \([n]\), and an \(\ell\)-partite graph is a \(K_{\ell}\)-subconstrction. Let \(\Lambda(P,n)\) denote the maximum number of edges in a \(P\)-construction with \(n\) vertices and define the **Lagrangian** of \(P\) as the limit \[\lambda(P):=\lim_{n\to\infty}\frac{\Lambda(P,n)}{{n\choose r}}.\] Using a simple averaging argument, one can show that \(\Lambda(P,n)/{n\choose r}\) is nonincreasing, and hence, the limit exists. We say a pattern \(P=(k,E)\) is **minimum** if \(\lambda(P-i)<\lambda(P)\) for all \(i\in[k]\), where \(P-i\) denotes the new pattern obtained from \(P\) by removing \(i\) from \([k]\) and removing all \(r\)-multisets containing \(i\) from \(E\). Note that the Lagrangian of a pattern is a generalization of the well-known **hypergraph Lagrangian** (see e.g. [5, 25]) that has been successfully applied to Turan-type problems, with the basic idea going back to Motzkin and Straus [54]. **Remark**.: The notion of pattern was introduced by Pikhurko in [61] to study the general properties of nondegenerate hypergraph Turan problems, and it was also used very recently in [48, 49]. Note that the definition of pattern in [61] is more general by allowing recursive parts. Our results about patterns in this work can be easily extended to this more general setting. Let \(F\) be an \(r\)-graph and \(P\) be a pattern. We say \((F,P)\) is a **Turan pair** if every \(P\)-construction is \(F\)-free and every maximum \(F\)-free construction is a \(P\)-construction. For example, it follows from the Turan Theorem that \((K_{\ell+1},K_{\ell})\) is a Turan pair for all \(\ell\geq 2\). It is easy to observe that for a Turan pair \((F,P)\), we have \[\pi(F)=\lambda(P). \tag{4}\] For hypergraphs in Turan pairs, we have the following result concerning the smoothness of their Turan functions. **Theorem 1.9**.: _Suppose that \(F\) is an \(r\)-graph and \(P\) is a minimal pattern such that \((F,P)\) is a Turan pair. Then \(\operatorname{ex}(n,F)\) is \(4\binom{n-1}{r-2}\)-smooth._ The boundedness of \(F\) is closely related to the stability of \(F\). So we introduce some definitions related to stability. Suppose that \((F,P)\) is a Turan pair. * We say \(F\) is **edge-stable** with respect to \(P\) if for every \(\delta>0\) there exist constants \(N_{0}\) and \(\zeta>0\) such that for every \(F\)-free \(r\)-graph \(\mathcal{H}\) on \(n\geq N_{0}\) vertices with at least \((\pi(F)-\zeta)\,\binom{n}{r}\) edges, there exists a subgraph \(\mathcal{H}^{\prime}\subset\mathcal{H}\) with at least \((\pi(F)-\delta)\,\binom{n}{r}\) edges such that \(\mathcal{H}^{\prime}\) is a \(P\)-subconstruction. * We say \(F\) is **vertex-extendable** with respect to \(P\) if there exist constants \(N_{0}\) and \(\zeta>0\) such that for every \(F\)-free \(r\)-graph \(\mathcal{H}\) on \(n\geq N_{0}\) vertices satisfing \(\delta(\mathcal{H})\geq(\pi(F)-\zeta)\,\binom{n-1}{r-1}\) the following holds: if \(\mathcal{H}-v\) is a \(P\)-subconstruction for some vertex \(v\in V(\mathcal{H})\), then \(\mathcal{H}\) is also a \(P\)-subconstruction. 
* We say \(F\) is **weakly vertex-extendable** with respect to \(P\) if for every \(\delta>0\) there exist constants \(N_{0}\) and \(\zeta>0\) such that for every \(F\)-free \(r\)-graph \(\mathcal{H}\) on \(n\geq N_{0}\) vertices satisfying \(\delta(\mathcal{H})\geq(\pi(F)-\zeta)\,\binom{n-1}{r-1}\) the following holds: if \(\mathcal{H}-v\) is a \(P\)-subconstruction for some vertex \(v\in V(\mathcal{H})\), then \(d_{\mathcal{H}}(v)\leq(\pi(F)+\delta)\,\binom{n-1}{r-1}\). For simplicity, if \(P\) is clear from the context, we will simply say that \(F\) is edge-stable, vertex-extendable, and weakly vertex-extendable, respectively. The first stability theorem which states that \(K_{\ell+1}\) is edge-stable with respect to \(K_{\ell}\) was proved independently by Erdos and Simonovits [65], and it was used first by Simonovits [65] to determine the exact Turan number \(\operatorname{ex}(n,F)\) of an edge-critical graph \(F\) for large \(n\). Later, Simonovits' method (also known as the Stability Method) was used by many researchers to determine the Turan numbers of a large collection of hypergraphs (see Section 2 for more details). The definition of vertex-extendability was introduced by Mubayi, Reiher, and the third author in [47] for a unified framework for proving the stability of a large class of hypergraphs. The definition of weak vertex-extendability seems to be new, and it is clear from (4) and the following lemma that for a Turan pair \((F,P)\) the vertex-extendability implies the weak vertex-extendability. There are several examples showing that the inverse is not true in general (see e.g Section 2.6). It seems interesting to explore the relations between the weak vertex-extendability and other types of stability (see [47] for more details). **Lemma 1.10** ([48, Lemma 21]).: _Suppose that \(P\) is a minimal pattern. Then for every \(\delta>0\) there exist \(N_{0}\) and \(\varepsilon>0\) such that every \(P\)-subconstruction \(\mathcal{H}\) on \(n\geq N_{0}\) vertices with \(\delta(\mathcal{H})\geq(\lambda(P)-\varepsilon)\,\binom{n-1}{r-1}\) satisfies \(\Delta(\mathcal{H})\leq(\lambda(P)+\delta)\,\binom{n-1}{r-1}\)._ Let us add another remark about the weak vertex-extendability that might be useful for readers who are familiar with the stability method. In a standard stability argument in determining the exact value of \(\operatorname{ex}(n,F)\), one usually defines a set \(\mathcal{B}\) of bad edges and a set \(\mathcal{M}\) of missing edges, and then tries to prove that \(|\mathcal{M}|>|\mathcal{B}|\). One key step in this argument is to prove that the maximum degree of \(\mathcal{B}\) is small (more specifically, \(\Delta(B)=o(n^{r-1})\)), which, informally speaking, usually implies the weak vertex-extendability of \(F\). For a Turan pair \((F,P)\) with the weak vertex-extendability, we have the following result concerning the boundedness of \(F\). **Theorem 1.11**.: _Suppose that \(F\) is an \(r\)-graph and \(P\) is a minimal pattern such that \(F\) is edge-stable and weakly vertex-extendable (or vertex-extendable) with respect to \(P\). Then there exists a constant \(c>0\) such that \(F\) is \(\left(c{n-1\choose r-1},\frac{1-\pi(F)}{8m}{n-1\choose r-1}\right)\)-bounded for large \(n\)._ **Remark.** It seems possible to extend Theorems 1.9 and 1.11 to nonminimal patterns, but we do not aware of any \(r\)-graph \(F\) whose extremal construction is a \(P\)-construction for some nonminimal pattern \(P\). 
However, there does exist a finite family \(\mathcal{F}\) of \(r\)-graphs whose extremal construction is a \(P\)-construction for some nonminimal pattern \(P\) (see [33] for more details). In many cases, (weak) vertex-extendability of \(F\) follows from a stronger type of stability that was studied by many researchers before. Suppose that \((F,P)\) is a Turan pair. We say \(F\) is **degree-stable** with respect to \(P\) if there exists \(\zeta>0\) such that for large \(n\) every \(n\)-vertex \(F\)-free \(r\)-graph \(\mathcal{H}\) with \(\delta(\mathcal{H})\geq\left(\pi(F)-\zeta\right){n-1\choose r-1}\) is a \(P\)-subconstruction. It is easy to observe from the definition that if \(F\) is degree-stable with respect to \(P\), then \(F\) is edge-stable and vertex-extendable with respect to \(P\). Therefore, we have the following corollary of Theorems 1.9 and 1.11. **Corollary 1.12**.: _Suppose that \(F\) is an \(r\)-graph and \(P\) is a minimal pattern such that \(F\) is degree-stable with respect to \(P\). Then there exists a constant \(c>0\) such that_ * \(\operatorname{ex}(n,F)\) _is_ \(4{n-1\choose r-2}\)_-smooth, and_ * \(F\) _is_ \(\left(c{n-1\choose r-1},\frac{1-\pi(F)}{8m}{n-1\choose r-1}\right)\)_-bounded._ In the next section, we show some applications of Theorems 1.7, 1.9 and 1.11, and Corollary 1.12. We omit the applications of Theorem 1.8 since they are quite straightforward to obtain once we present the corresponding applications of Theorem 1.7. The proofs for Theorems 1.7 and 1.8 are included in Section 3. The proofs for Theorems 1.9 and 1.11 are included in Section 4. ## 2 Applications Combining some known stability results with Theorems 1.7, 1.9, and 1.11 (or Corollary 1.12) we can immediately obtain results in this section. To demonstrate a way to apply Theorems 1.7, 1.9, and 1.11 in general, we include the short proof for the weak vertex-extendability of \(\mathbb{F}_{3,2}\) (even though it can be deduced from results in [27]). ### Edge-critical graphs Recall that for a graph \(F\) its chromatic number is denoted by \(\chi(F)\). We say a graph \(F\) is **edge-critical** if there exists an edge \(e\in F\) such that \(\chi(F-e)<\chi(F)\). Using the stability method, Simonovits proved in [65] that if a graph \(F\) is edge-critical and \(\chi(F)\geq 3\), then \(\operatorname{EX}(n,F)=\{T(n,\chi(F)-1)\}\) for all sufficiently large \(n\). Extending the classical Andrasfai-Erdos-Sos Theorem [4], Erdos and Simonovits [16] proved that every edge-critical graph with chromatic number at least \(3\) is degree-stable. Therefore, combined with Theorem 1.7 and Corollary 1.12, we obtain the following result. **Theorem 2.1**.: _Suppose that \(F\) is an edge-critical graph with \(\chi(F)\geq 3\). Then there exist constants \(N_{0}\) and \(c_{F}>0\) such that for all integers \(n\geq N_{0}\) and \(t\in[0,c_{F}n]\) we have_ \[\operatorname{EX}(n,(t+1)F)=\left\{K_{t}\;\mathbb{Z}\;T(n-t,\chi(F)-1)\right\}.\] **Remarks.** * For Theorem 2.1 and all other theorems in this section, we did not try to optimize the constant \(c_{F}\), but it seems possible to obtain a reasonable bound1 for \(c_{F}\) by a more careful analysis of the proof for Theorem 1.11 (and the proof for the (weak) vertex-extendability of \(F\) in some cases). Footnote 1: It seems possible to get a polynomial dependency between \(c_{F}\) and \(\frac{1}{rm}\). * The case when \(F\) is an odd cycle was also considered in a recent paper [18, Theorem 1.1]. 
* It might be true that Theorem 2.1 holds for a broader class of graphs, and it would be interesting to characterize the class of graphs for which Theorem 2.1 holds. ### The Fano plane The **Fano plane**\(\mathbb{F}\) is a \(3\)-graph with vertex set \(\{1,2,3,4,5,6,7\}\) and edge set \[\{123,345,561,174,275,376,246\}.\] Let \([n]=V_{1}\cup V_{2}\) be a partition with \(|V_{1}|=\lfloor n/2\rfloor\) and \(|V_{2}|=\lceil n/2\rceil\). Let \(B_{3}(n)\) denote the \(3\)-graph on \([n]\) whose edge set consists of all triples that have a nonempty intersection with both \(V_{1}\) and \(V_{2}\). Note that \(|B_{3}(n)|\sim\frac{3}{4}\binom{n}{3}\). It was conjectured by Sos [66] and famously proved by De Caen and Furedi [11] that \(\pi(\mathbb{F})=3/4\). Later, using a stability argument, Keevash and Sudakov [42], and independently, Furedi and Simonovits [29] proved that \(\operatorname{EX}(n,\mathbb{F})=\{B_{3}(n)\}\) for all solfficienly large \(n\). Recently, Bellmann and Reiher [6] proved that \(\operatorname{ex}(n,\mathbb{F})=|B_{3}(n)|=\frac{n-2}{2}\lfloor\frac{n^{2}}{4 }\rfloor\) for all \(n\geq 7\), and moreover, they proved that \(B_{3}(n)\) is the unique extremal construction for all \(n\geq 8\). It follows from the result of Keevash and Sudakov [42], and independently, Furedi and Simonovits [29] that \(\mathbb{F}\) is degree-stable. Therefore, we obtain the following result. **Theorem 2.2**.: _There exist constants \(N_{0}\) and \(c_{\mathbb{F}}>0\) such that for all integers \(n\geq N_{0}\) and \(t\in[0,c_{\mathbb{F}}n]\) we have_ \[\operatorname{EX}(n,(t+1)\mathbb{F})=\left\{K_{t}^{3}\;\mathbb{Z}\;B_{3}(n-t) \right\}.\] ### Generalized triangles The (\(r\)-uniform) **generalized triangle**\(\mathds{T}_{r}\) is the \(r\)-graph with vertex set \([2r-1]\) and edge set \[\left\{\{1,\ldots,r-1,r\},\{1,\ldots,r-1,r+1\},\{r,r+1,\ldots,2r-1\}\right\}.\] Note that \(\mathds{T}_{2}\) is simply a triangle. Fix \(n\geq r\geq 2\) and \(\ell\geq r\). Let \([n]=V_{1}\cup\cdots\cup V_{\ell}\) be a partition such that \(|V_{i}|\in\left\{\lfloor\frac{n}{\ell}\rfloor,\lceil\frac{n}{\ell}\rceil\right\}\) for all \(i\in[\ell]\). The **generalized Turan**\(r\)-**graph**\(T_{r}(n,\ell)\) is the \(r\)-graph on \([n]\) whose edge set consists of all \(r\)-sets that contain at most one vertex from each \(V_{i}\). Note that \(T_{2}(n,\ell)\) is the Turan graph \(T(n,\ell)\). Let \(t_{r}(n,\ell)\) denote the number of edges in \(T_{r}(n,\ell)\). ### Generalized triangles The (\(r\)-uniform) **generalized triangle**\(\mathds{T}_{r}\) is the \(r\)-graph with vertex set \([2r-1]\) and edge set \[\left\{\{1,\ldots,r-1,r\},\{1,\ldots,r-1,r+1\},\{r,r+1,\ldots,2r-1\}\right\}.\] Note that \(\mathds{T}_{2}\) is simply a triangle. Fix \(n\geq r\geq 2\) and \(\ell\geq r\). Let \([n]=V_{1}\cup\cdots\cup V_{\ell}\) be a partition such that \(|V_{i}|\in\left\{\lfloor\frac{n}{\ell}\rfloor,\lceil\frac{n}{\ell}\rceil\right\}\) for all \(i\in[\ell]\). The **generalized Turan**\(r\)-**graph**\(T_{r}(n,\ell)\) is the \(r\)-graph on \([n]\) whose edge set consists of all \(r\)-sets that contain at most one vertex from each \(V_{i}\). Note that \(T_{2}(n,\ell)\) is the Turan graph \(T(n,\ell)\). Let \(t_{r}(n,\ell)\) denote the number of edges in \(T_{r}(n,\ell)\). 
### Generalized triangles The (\(r\)-uniform) **generalized triangle**\(\mathds{T}_{r}\) is the \(r\)-graph with vertex set \([2r-1]\) and edge set \[\left\{\{1,\ldots,r-1,r\},\{1,\ldots,r-1,r+1\},\{r,r+1,\ldots,2r-1\}\right\}.\] Note that \(\mathds{T}_{2}\) is simply a triangle. Fix \(n\geq r\geq 2\) and \(\ell\geq r\). Let \([n]=V_{1}\cup\cdots\cup V_{\ell}\) be a partition such that \(|V_{i}|\in\left\{\lfloor\frac{n}{\ell}\rfloor,\lceil\frac{n}{\ell}\rceil\right\}\) for all \(i\in[\ell]\). The **generalized Turan**\(r\)-**graph**\(T_{r}(n,\ell)\) is the \(r\)-graph on \([n]\) whose edge set consists of all \(r\)-sets that contain at most one vertex from each \(V_{i}\). Note that \(T_{2}(n,\ell)\) is the Turan graph \(T(n,\ell)\). Let \(t_{r}(n,\ell)\) denote the number of edges in \(T_{r}(n,\ell)\). Katona conjectured and Bollobas [8] proved that \(\text{EX}(n,\{\mathds{T}_{3},K_{4}^{3-}\})=\{T_{3}(n,3)\}\) for all \(n\in\mathbb{N}\), where \(K_{4}^{3-}\) is the unique \(3\)-graph with \(4\) vertices and \(3\) edges. Later, Frankl and Furedi [23] sharpened the result of Bollobas by showing that \(\text{EX}(n,\mathds{T}_{3})=\{T_{3}(n,3)\}\) for all \(n\geq 3000\). In [40], Keevash and Mubayi proved the edge-stability of \(\mathds{T}_{3}\) and improved the lower bound of \(n\) from \(3000\) to \(33\). A short proof for the edge-stability with a linear dependency between the error parameters can be found in [45]. The vertex-extendability of \(\mathds{T}_{3}\) can be easily obtained from the proof of Lemma 4.4 in [47] (also see the Concluding Remarks in [47]). Therefore, we obtain the following result. **Theorem 2.3**.: _There exist constants \(N_{0}\) and \(c_{\mathds{T}_{3}}\) such that for all integers \(n\geq N_{0}\) and Figure 1: The Fano plane and the complete bipartite \(3\)-graph \(B_{3}(n)\). \(t\in[0,c_{\mathbb{T}_{3}}n]\) we have_ \[\mathrm{EX}(n,(t+1)\mathds{T}_{3})=\left\{K_{t}^{3}\;\mathbb{X}\;T_{3}(n-t,3) \right\}.\] For \(r=4\), improving a result of Sidorenko in [63], Pikhurko proved in [59] that \(\mathrm{EX}(n,\mathds{T}_{4})=\{T_{4}(n,4)\}\) for all sufficiently large \(n\). Similarly, the vertex-extendability of \(\mathds{T}_{4}\) can be obtained from the proof of Lemma 4.4 in [47] (also see the Concluding Remarks in [47]). Therefore, we obtain the following result. **Theorem 2.4**.: _There exist constants \(N_{0}\) and \(c_{\mathbb{T}_{4}}\) such that for all integers \(n\geq N_{0}\) and \(t\in[0,c_{\mathbb{T}_{4}}n]\) we have_ \[\mathrm{EX}(n,(t+1)\mathds{T}_{4})=\left\{K_{t}^{4}\;\mathbb{X}\;T_{4}(n-t,4) \right\}.\] The situation becomes complicated when \(r\geq 5\). Let \(\mathds{W}_{5}\) denote the unique \(5\)-graph with \(11\) vertices such that every \(4\)-set of vertices is contained in exactly one edge. Let \(\mathds{W}_{6}\) denote the unique \(6\)-graph with \(12\) vertices such that every \(5\)-set of vertices is contained in exactly one edge. Let \(\mathds{W}_{5}(n)\) and \(\mathds{W}_{6}(n)\) denote the maximum \(\mathds{W}_{5}\)-construction and \(\mathds{W}_{6}\)-construction on \(n\) vertices, respectively. Some calculations show that \(\mathds{W}_{5}(n)\sim\frac{6}{11^{4}}n^{5}\) and \(\mathds{W}_{6}(n)\sim\frac{11}{12^{5}}n^{6}\). In [24], Frankl and Furedi proved that \(\mathrm{ex}(n,\mathds{T}_{r})\leq|\mathds{W}_{r}(n)|+o(n^{r})\) for \(r=5,6\). 
Much later, using a sophisticated stability argument, Norin and Yepremyan [57] proved that \(\mathds{T}_{5}\) and \(\mathds{T}_{6}\) are edge-stable with respect to \(\mathds{W}_{5}\) and \(\mathds{W}_{6}\) respectively, and moreover, \(\mathrm{EX}(n,\mathds{T}_{r})=\{\mathds{W}_{r}(n)\}\) for \(r=5,6\) and large \(n\). It was observed by Pikhurko [59] that both \(\mathds{T}_{5}\) and \(\mathds{T}_{6}\) fail to be degree-stable (or vertex-extendable). However, from Lemmas 7.2 and 7.4 in [57] one can easily observe that \(\mathds{T}_{5}\) and \(\mathds{T}_{6}\) are weakly vertex-extendable. Therefore, we obtain the following theorem. **Theorem 2.5**.: _For \(r\in\{5,6\}\) there exist constants \(N_{0}\) and \(c_{\mathbb{T}_{r}}>0\) such that for all integers \(n\geq N_{0}\) and \(t\in[0,c_{\mathbb{T}_{r}}n]\) we have_ \[\mathrm{EX}(n,(t+1)\mathds{T}_{r})=\left\{K_{t}^{r}\;\mathbb{X}\;\mathds{W}_{ r}(n-t)\right\}.\] It seems that there are even no conjectures for the extremal constructions of \(\mathds{T}_{r}\) when \(r\geq 7\). We refer the reader to [24] for some lower and upper bounds for \(\pi(\mathds{T}_{r})\) in general. ### The expansion of complete graphs Fix integers \(\ell\geq r\geq 2\). The **expansion**\(H_{\ell+1}^{r}\) of the complete graph \(K_{\ell+1}\) is the \(r\)-graph obtained from \(K_{\ell+1}\) by adding a set of \(r-2\) new vertices into each edge of \(K_{\ell+1}\), and moreover, these new \((r-2)\)-sets are pairwise disjoint. It is clear from the definition that \(H_{\ell+1}^{r}\) has \(\ell+1+(r-2)\binom{\ell+1}{2}\) vertices and \(\binom{\ell+1}{2}\) edges. The \(r\)-graph \(H_{\ell+1}^{r}\) was introduced by Mubayi [55] as a way to generalize Turan's theorem to hypergraphs. These hypergraphs provide the first explicitly defined examples which yield an infinite family of numbers realizable as Turan densities for hypergraphs. In [55], Mubayi determined the Turan density of \(H_{\ell+1}^{r}\) for all integers \(\ell\geq r\geq 3\), and proved that \(H_{\ell+1}^{r}\) is edge-stable. In [60], Pikhurko refined Mubayi's result and proved that \(\mathrm{EX}(n,H_{\ell+1}^{r})=\{T_{r}(n,\ell)\}\) for all integers \(\ell\geq r\geq 3\) when \(n\) is sufficiently large. The vertex-extendability of \(H^{r}_{\ell+1}\) can be easily obtained by a small modification of the proof of Lemma 4.8 in [47] (also see the Concluding Remarks in [47]). Therefore, we obtain the following result. **Theorem 2.6**.: _Fix integers \(\ell\geq r\geq 2\). There exist constants \(N_{0}\) and \(c=c(\ell,r)>0\) such that for all integers \(n\geq N_{0}\) and \(t\in[0,cn]\) we have_ \[\operatorname{EX}(n,(t+1)H^{r}_{\ell+1})=\{K^{r}_{t}\mathrel{\hbox to 0.0pt{\lower 4. 3pt\hbox{$\sim$}}}T_{r}(n-t,\ell)\}\,.\] **Remarks.** The definition of expansion can be extended to all graphs as follows. Fix a graph \(F\), let the \(r\)-graph \(H^{r}_{F}\) be obtained from \(F\) by adding a set of \(r-2\) new vertices into each edge of \(F\), and moreover, these new \((r-2)\)-sets are pairwise disjoint. Similar to Theorem 2.1, one could obtain a corresponding result for the expansion of all edge-critical graphs. We omit its statement and proof here. ### The expansion of hypergraphs Given an \(r\)-graph \(F\) with \(\ell+1\) vertices, the **expansion**\(H^{F}_{\ell+1}\) of \(F\) is the \(r\)-graph obtained from \(F\) by adding, for every pair \(\{u,v\}\subset V(F)\) that is not contained in any edge of \(F\), an \((r-2)\)-set of new vertices, and moreover, these \((r-2)\)-sets are pairwise disjoint. 
It is easy to see that the expansion of the empty \(r\)-graph on \(\ell+1\) vertices (here empty means that the edge set is empty) is the same as the expansion of the complete graph \(K_{\ell+1}\) defined in the previous subsection. However, in general, these two definitions are different. Our first result in this subsection is about the expansion of the expanded trees. Given a tree \(T\) on \(k\) vertices, define the \((r-2)\)**-expansion**\(\operatorname{Exp}(T)\) of \(T\) as \[\operatorname{Exp}(T):=\{e\cup A\colon e\in T\}\,,\] where \(A\) is a set of \(r-2\) new vertices that is disjoint from \(V(T)\). Given a tree \(T\) on \(k\) vertices, we say \(T\) is an **Erdos-Sos tree** if it satisfies the famous Erdos-Sos conjecture on trees. In other words, \(T\) is contained in every graph with average degree more than \(k-2\). In [64], Sidorenko proved that for large \(k\), if \(T\) is an Erdos-Sos tree on \(k\) vertices, then \(\operatorname{ex}(n,H^{\operatorname{Exp}(T)}_{k+r-2})\leq t_{r}(n,k+r-3)+o( n^{r})\). Much later, Norin and Yepremyan [58], and independently, Brandt, Irwin, and Jiang [9], improved Sidorenko's result by showing that, under the same setting, \(H^{\operatorname{Exp}(T)}_{k+r-2}\) is edge-stable with respect to \(K^{r}_{k+r-3}\) and \(\operatorname{EX}(n,H^{\operatorname{Exp}(T)}_{k+r-2})=\{T_{r}(n,k+r-3)\}\) for large \(n\). In fact, it follows easily from Lemmas 3.5 and 4.1 in [58] that \(H^{\operatorname{Exp}(T)}_{k+r-2}\) is weakly vertex-extendable with respect to \(K^{r}_{k+r-3}\). Hence, we obtain the following result. **Theorem 2.7**.: _For every integer \(r\geq 3\) there exists \(M_{r}\) such that if \(T\) is an Erdos-Sos tree on \(k\geq M_{r}\) vertices, then there exist \(N_{0}\) and \(c_{T}>0\) such that for all integers \(n\geq N_{0}\) and \(t\leq c_{T}n\), we have_ \[\operatorname{EX}\left(n,(t+1)H^{\operatorname{Exp}(T)}_{k+r-2}\right)=K^{r} _{t}\;\mathbb{Z}\;T_{r}(n-t,k+r-3).\] Next, we consider the expansion of a different class of hypergraphs. Let \(B(r,\ell+1)\) be the \(r\)-graph with vertex set \([\ell+1]\) and edge set \[\left\{[r]\right\}\cup\left\{e\subset[2,\ell+1]\colon|e|=r\text{ and }|e\cap[2,r]|\leq 1 \right\}.\] Recall that the Lagrangian of an \(r\)-graph \(\mathcal{H}\) (by viewing \(\mathcal{H}\) as a pattern) is denoted by \(\lambda(\mathcal{H})\). For integers \(\ell\geq r\geq 2\) let the family \(\mathcal{F}^{r}_{\ell+1}\) be the collection of \(r\)-graphs \(F\) with the following properties: * \(\sup\left\{\lambda(\mathcal{H})\colon\mathcal{H}\text{ is $F$-free and not a $K^{r}_{\ell}$-subconstruction}\right\}<\frac{\ell\cdots(\ell-r+1)}{\ell^{r}}\), and * either \(F\) has an isolated vertex or \(F\subset B(r,\ell+1)\). For every \(F\in\mathcal{F}^{r}_{\ell+1}\) the vertex-extendability2 of the expansion \(H^{F}_{\ell+1}\) can be easily obtained by a small modification of the proof of Lemma 4.8 in [47] (also see the Concluding Remarks in [47]). Hence, we obtain the following result. Footnote 2: The weak vertex-extendability of \(F\in\mathcal{F}^{r}_{\ell+1}\) with an isolated vertex also follows from Lemma 3.4 in [58]. **Theorem 2.8**.: _Suppose that \(\ell\geq r\geq 2\) are integers and \(F\in\mathcal{F}^{r}_{\ell+1}\). 
Then there exist constants \(N_{0}\) and \(c_{F}>0\) such that for all integers \(n\geq N_{0}\) and \(t\in[0,c_{F}n]\), we have_ \[\operatorname{EX}\left(n,(t+1)H^{F}_{\ell+1}\right)=\left\{K^{r}_{t}\; \mathbb{Z}\;T_{r}(n-t,\ell)\right\}.\] **Remarks.** * In [56], Mubayi and Pikhurko considered the Turan problem for the \(r\)-graph \(\operatorname{Fan}^{r}\) (the generalized Fan), which is the expansion of the \(r\)-graph on \(r+1\) vertices with only one edge. It is easy to see that \(\operatorname{Fan}^{r}\) is a member in \(\mathcal{F}^{r}_{r+1}\). * The Turan problem for the expansion of certain class of \(r\)-graphs (which is a proper subfamily of \(\mathcal{F}^{r}_{\ell+1}\)) were studied previously in [9] and [58]. * Let \(M^{r}_{k}\) denote the \(r\)-graph consisting of \(k\) vertex-disjoint edges (i.e. a matching of size \(k\)) and let \(L^{r}_{k}\) denote the \(r\)-graph consisting of \(k\) edges having one vertex, say \(v\), in common, and every pair of edges interest only at \(v\) (i.e. a \(k\)-edge sunflower with the center \(v\)). By results in [32, 36], if \(F\) is isomorphic to \(M^{3}_{k}\) (see [32] for \(k=2\) and [36] for \(k\geq 3\)), \(L^{3}_{k}\) (see [36]), or \(L^{4}_{k}\) (see [36]), where \(k\geq 2\) is an integer, then \(F\) is contained in \(\mathcal{F}^{r}_{\ell+1}\). Now we focus on the expansion of \(r\)-uniform matching of size two with \(r\geq 4\). We say an \(r\)-graph is **semibipartite** if its vertex set can be partitioned into two parts \(V_{1}\) and \(V_{2}\) such that every edge contains exactly one vertex in \(V_{1}\). Let \(S_{r}(n)\) denote the semibipartite \(r\)-graph on \(n\) vertices with the maximum number of edges. Simply calculations show that \(|S_{r}(n)|\sim\left(\frac{r-1}{r}\right)^{r-1}\binom{n}{r}\). Confirming a conjecture of Hefetz and Keevash [32], Bene Watts, Norin, and Yepremyan [7] showed that for \(r\geq 4\), \(\operatorname{EX}\left(n,H_{2r}^{M_{2}^{r}}\right)=\{S_{r}(n)\}\) for all sufficiently large \(n\). The vertex-extendability3 of \(H_{2r}^{M_{2}^{r}}\) can be easily obtained by a small modification of the proof of Lemma 4.12 in [47] (also see the Concluding Remarks in [47]). Hence we have the following result. Footnote 3: The weak vertex-extendability of \(H_{2r}^{M_{2}^{r}}\) also follows from Theorem 3.2 in [7] **Theorem 2.9**.: _For every integer \(r\geq 4\), there exist constants \(N_{0}\) and \(c=c(r)>0\) such that for all integers \(n\geq N_{0}\) and \(t\in[0,cn]\), we have_ \[\operatorname{EX}\left(n,(t+1)H_{2r}^{M_{2}^{r}}\right)=\left\{K_{t}^{r}\; \mathbb{Z}\;S_{r}(n-t)\right\}.\] **Remark.** It is quite possible that Theorem 1.7 applies to the expansion of other hypergraphs, for example, the \(3\)-graph defined in [68] which provides the first example of a single hypergraph whose Turan density is an irrational number. ### Expanded triangles Let \(\mathcal{C}_{3}^{2r}\) denote the \(2r\)-graph with vertex set \([3r]\) and edge set \[\left\{\{1,\ldots,r,r+1,\ldots,2r\},\{r+1,\ldots,2r,2r+1,\ldots,3r\},\{1, \ldots,r,2r+1,\ldots,3r\}\right\}.\] Let \([n]=V_{1}\cup V_{2}\) be a partition such that \(|V_{1}|=\lfloor n/2\rfloor+m\). Let \(B_{2r}^{\text{odd}}(n,m)\) denote the \(2r\)-graph on \([n]\) whose edge set consists of all \(2r\)-sets that interest \(V_{1}\) in odd number of vertices. Some calculations show that \(\max_{m}|B_{2r}^{\text{odd}}(n,m)|\sim\frac{1}{2}\binom{n}{2r}\). 
Let \(B_{2r}^{\text{odd}}=(2,E)\) denote the pattern such that \(E\) consists of all \(2r\)-multisets that contain exactly odd number of \(1\)s. Note that \(B_{2r}^{\text{odd}}(n,m)\) is a \(B_{2r}^{\text{odd}}\)-construction. The Turan problem for \(\mathcal{C}_{3}^{2r}\) was first considered by Frankl [19], who proved that \(\pi(\mathcal{C}_{3}^{2r})=1/2\). Later, Keevash and Sudakov [41] proved that \(\mathcal{C}_{3}^{2r}\) is edge-stable with respect to \(B_{2r}^{\rm odd}\), and moreover, \(\mathrm{EX}(n,\mathcal{C}_{3}^{2r})\subset\left\{B_{2r}^{\rm odd}(n,m)\colon m\in[0, n/2]\right\}\). Simple constructions4 show that \(\mathcal{C}_{3}^{2r}\) is not degree-stable (or vertex-extendable) with respect to \(B_{2r}^{\rm odd}\). However, using Claim 3.5 in [41], one can easily show that \(\mathcal{C}_{3}^{2r}\) is weakly vertex-extendable with respect to \(B_{2r}^{\rm odd}\). Hence, we have the following theorem. Footnote 4: For example, choose a set \(S\) of \(2r\) vertices from \(V_{1}\) in \(B_{2r}^{\rm odd}(n,0)\), then remove all edges in \(B_{2r}^{\rm odd}(n,0)\) that contain at least two vertices in \(S\) and add \(S\) to the edge set. **Theorem 2.10**.: _For every integer \(r\geq 2\) there exist constants \(N_{0}\) and \(c>0\) such that for all integers \(n\geq N_{0}\) and \(t\in[0,cn]\), we have_ \[\mathrm{EX}\left(n,(t+1)\mathcal{C}_{3}^{2r}\right)\subset K_{t}^{2r}\; \mathbb{X}\left\{B_{2r}^{\rm odd}(n-t,m)\colon m\in\left[0,\sqrt{2r(n-t)} \right]\right\}.\] **Remarks.** * Calculations in [41] show that if \(B_{2r}^{\rm odd}(n,m)\) is an optimal \(B_{2r}^{\rm odd}\)-construction, then \(m<\sqrt{2rn}\). So it suffices to consider \(m\) in the range \(\left[0,\sqrt{2r(n-t)}\right]\) for Theorem 2.10. * In general, one could consider the expanded \(K_{\ell+1}\) for \(\ell\geq 3\). It seems that the above theorem can be extended to these hypergraphs in some cases. We refer the reader to [62] and [41] for more details. ### Hypergraph books Let \(F_{7}\) (4-book with 3-pages) denote the 3-graph with vertex set \(\{1,2,3,4,5,6,7\}\) and edge set \[\{1234,1235,1236,1237,4567\}\;.\] Let \(B_{4}^{\rm even}(n)\) denote the maximum \(B_{4}^{\rm even}:=(2,\{1,1,2,2\})\)-construction on \(n\) vertices. Simply calculations show that \(|B_{4}(n)|\sim\frac{3}{8}\binom{n}{4}\). Furedi, Pikhurko, and Simonovits [28] proved that \(\mathrm{EX}(n,F_{7})=\{B_{4}(n)\}\) for all sufficiently large \(n\). Moreover, they proved that \(F_{7}\) is degree-stable. Hence, we obtain the following result. **Theorem 2.11**.: _There exist constants \(N_{0}\) and \(c>0\) such that for all integers \(n\geq N_{0}\) and \(t\in[0,cn]\), we have_ \[\mathrm{EX}\left(n,(t+1)F_{7}\right)=\left\{K_{t}^{4}\;\mathbb{X}\;B_{4}^{\rm even }(n-t)\right\}.\] Figure 5: The 4-graph \(F_{7}\) (4-book with 3 pages) and the 4-graph \(B_{4}^{\rm even}(n)\). Let \(\mathbb{F}_{4,3}\) denote the \(4\)-graph with vertex set \(\{1,2,3,4,5,6,7\}\) and edge set \[\{1234,1235,1236,1237,4567\}\;.\] Let \(B_{4}^{\text{odd}}(n,m)\) denote the \(B_{4}^{\text{odd}}:=(2,\{\{1,2,2,2\},\{1,1,1,2\}\})\)-construction on \(n\) vertices with one part of size \(\lfloor n/2\rfloor+m\). Recall from the previous subsection that \(\max_{m}|B_{4}^{\text{odd}}(n,m)|\sim\frac{1}{2}\binom{n}{4}\). Furedi, Mubayi, and Pikhurko [26] proved that \(\text{EX}(n,\mathbb{F}_{4,3})\subset\{B_{4}^{\text{odd}}(n,m)\colon m\in[0,n/ 2]\}\) for large \(n\), and moreover, \(\mathbb{F}_{4,3}\) is edge-stable with respect to \(B_{4}^{\text{odd}}\). 
They also showed that edge-stable cannot be replaced by degree-stable (or vertex-extendable). However, from Lemma 3.1 in [26] one can easily obtain that \(\mathbb{F}_{4,3}\) is weakly edge-stable with respect to \(B_{4}^{\text{odd}}\). Hence, we obtain the following theorem. **Theorem 2.12**.: _There exist constants \(N_{0}\) and \(c>0\) such that for all integers \(n\geq N_{0}\) and \(t\in[0,cn]\), we have_ \[\text{EX}\left(n,(t+1)\mathbb{F}_{4,3}\right)\subset K_{t}^{4}\;\raisebox{-1.72pt}{$\approx$}\;\Big{\{}B_{4}^{\text{odd}}(n-t,m)\colon m\in[0,\sqrt{4(n-t) }]\Big{\}}\,.\] Let \(\mathbb{F}_{3,2}\) denote the \(3\)-graph with vertex set \(\{1,2,3,4,5\}\) and edge set \[\{123,124,125,345\}.\] Recall that \(S_{3}(n)\) is the semibipartite \(3\)-graph on \(n\) vertices with the maximum number of edges, i.e. the maximum \(S_{3}:=(2,\{1,2,2\})\)-construction on \(n\) vertices. Figure 7: The \(3\)-graph \(\mathbb{F}_{3,2}\) and the semibipartite \(3\)-graph \(S_{3}(n)\). Furedi, Pikhurko, and Simonovits [27] proved that \(\mathrm{EX}(n,\mathbb{F}_{3,2})=\{S_{3}(n)\}\) for all sufficiently large \(n\). A construction in their paper ([27, Construction 1.2]) shows that \(\mathbb{F}_{3,2}\) is not vertex-extendable with respect \(S_{3}\). But we will present a short proof in Section 5 which shows that \(\mathbb{F}_{3,2}\) is weakly vertex-extendable with respect to \(S_{3}\). Hence, we obtain the following result. **Theorem 2.13**.: _There exist constants \(N_{0}\) and \(c>0\) such that for all integers \(n\geq N_{0}\) and \(t\in[0,cn]\), we have_ \[\mathrm{EX}\left(n,(t+1)\mathbb{F}_{3,2}\right)=\left\{K_{t}^{r}\;\mbox{$ \asymp\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! **Lemma 3.3**.: _Suppose that \(m\leq n/r-1\). Then_ \[\binom{n}{r}-\binom{n-m}{r}=\sum_{i=1}^{r}\binom{m}{i}\binom{n-m}{r-i}\leq 2m \binom{n-m}{r-1}. \tag{5}\] Proof.: For every \(i\in[2,r]\) we have \[\frac{\binom{m}{i}\binom{n-m}{r-i}}{\binom{m}{i-1}\binom{n-m}{r-i+1}}=\frac{m- i+1}{i}\frac{r-i+1}{n-m-r+i}\leq\frac{(r-1)m}{2(n-m-r)}\leq\frac{1}{2},\] where the last inequality follows from the assumption that \(m\leq n/r-1\). Therefore, \[\sum_{i=1}^{r}\binom{m}{i}\binom{n-m}{r-i}\leq\sum_{i=1}^{r}\left(\frac{1}{2} \right)^{i-1}m\binom{n-m}{r-1}\leq 2m\binom{n-m}{r-1}.\] **Lemma 3.4**.: _Suppose that integers \(n,b,r\geq 1\) satisfy \(b\leq\frac{n-r}{r+1}\). 
Then_ \[\binom{n}{r}\leq e\binom{n-b}{r}.\] Proof.: For every \(i\in[b]\) it follows from \(b\leq\frac{n-r}{r+1}\) that \(\frac{n-i}{n-i-r}=1+\frac{r}{n-i-r}\leq 1+\frac{r}{n-b-r}\leq 1+\frac{1}{b}\). Therefore, \[\binom{n}{r}=\prod_{i=0}^{b-1}\frac{n-i}{n-i-r}\binom{n-b}{r}\leq\left(1+\frac {1}{b}\right)^{b}\binom{n-b}{r}\leq e\binom{n-b}{r}.\] The following lemma says that \(d(n,F)\) is smooth for every \(F\). **Lemma 3.5**.: _Let \(F\) be an \(r\)-graph. For every \(n\) and \(m\leq n/r-1\) we have_ \[|d(n,F)-d(n-m,F)|\leq 4m\binom{n-m}{r-2}.\] Proof.: It follows from Proposition 3.2 that \(\mathrm{ex}(n,F)/\binom{n}{r}\leq\mathrm{ex}(n-m,F)/\binom{n-m}{r}\). Therefore, \[\mathrm{ex}(n,F)-\mathrm{ex}(n-m,F) \leq\frac{\binom{n}{r}}{\binom{n-m}{r}}\mathrm{ex}(n-m,F)-\mathrm{ ex}(n-m,F)\] \[=\frac{\binom{n}{r}-\binom{n-m}{r}}{\binom{n-m}{r}}\mathrm{ex}(n- m,F)\] \[\stackrel{{\text{Lemma \ref{lem:2011}}}}{{\leq}}\frac{2m \binom{n-m}{r-1}}{\binom{n-m}{r}}\mathrm{ex}(n-m,F)=\frac{2mr}{n-m-r+1} \mathrm{ex}(n-m,F).\] Consequently, \[|d(n,F)-d(n-m,F)| =\left|\frac{r\cdot\mathrm{ex}(n,F)}{n}-\frac{r\cdot\mathrm{ex}(n-m,F )}{n-m}\right|\] \[=\left|\frac{r}{n}\left(\mathrm{ex}(n,F)-\mathrm{ex}(n-m,F)\right) -\frac{rm}{n(n-m)}\mathrm{ex}(n-m,F)\right|\] \[\leq\max\left\{\frac{2mr^{2}}{n(n-m-r+1)},\frac{rm}{n(n-m)} \right\}\cdot\mathrm{ex}(n-m,F)\] \[\leq\frac{2mr^{2}}{n(n-m-r+1)}\binom{n-m}{r}\leq 4m\binom{n-m}{r-2}.\] This completes the proof of Lemma 3.5. The following lemma deals with a simple case of Theorem 3.1 in which the maximum degree of every \(r\)-graph \(\mathcal{H}_{i}\) is bounded away from \(\binom{n-1}{r-1}\). **Lemma 3.6**.: _Let \(F\) be a nondegenerate \(r\)-graph with \(m\) vertices. Suppose that \(\mathrm{ex}(n,F)\) is \(g\)-smooth with \(g(n)\leq\frac{1-\pi(F)}{8m}\binom{n}{r-1}\) for all sufficiently large \(n\). Then there exists \(N_{1}\) such that the following holds for all integers \(n,t\in\mathbb{N}\) with \(n\geq N_{1}\) and \(t\leq\frac{1-\pi(F)}{64rm^{2}}n\)._ _Suppose that \(\{\mathcal{H}_{1},\ldots,\mathcal{H}_{t+1}\}\) is a collection of \(n\)-vertex \(r\)-graphs on the same vertex set \(V\) such that_ \[|\mathcal{H}_{i}|\geq\mathrm{ex}(n-t,F)+t\binom{n-t}{r-1}\quad\text{and}\quad \Delta(\mathcal{H}_{i})\leq d(n-t,F)+\frac{1-\pi(F)}{2m}\binom{n-t}{r-1}\] _hold for all \(i\in[t+1]\). Then \(\{\mathcal{H}_{1},\ldots,\mathcal{H}_{t+1}\}\) contains a rainbow \(F\)-matching._ Proof.: Given an integer \(k\leq t+1\), we say a collection \(\mathcal{C}=\{S_{1},\ldots,S_{k}\}\) of pairwise disjoint \(m\)-subsets of \(V\) is \(F\)**-rainbow** if there exists an injection \(f\colon[k]\to[t+1]\) such that \(F\subset\mathcal{H}_{f(i)}[S_{i}]\) for all \(i\in[k]\). Fix a maximal collection \(\mathcal{C}=\{S_{1},\ldots,S_{k}\}\) of pairwise disjoint \(m\)-subsets of \(V\) that is \(F\)-rainbow. If \(k=t+1\), then we are done. So we may assume that \(k\leq t\). Without loss of generality, we may assume that \(F\subset\mathcal{H}_{i}[S_{i}]\) for all \(i\in[k]\) (i.e. \(f\) is the identity map). Let \(B=\bigcup_{i=1}^{k}S_{i}\) and let \(b=|B|=mk\). Let us count the number of edges in \(\mathcal{H}_{k+1}\). Observe that every copy of \(F\) in \(\mathcal{H}_{k+1}\) must contain a vertex from \(B\), since otherwise, it would contradict the maximality of \(\mathcal{C}\). Therefore, the induced subgraph of \(\mathcal{H}_{k+1}\) on \(V_{0}:=V\setminus B\) is \(F\)-free. 
Hence, by the maximum degree assumption, we obtain \[|\mathcal{H}_{k+1}| \leq|\mathcal{H}_{k+1}[V_{0}]|+b\left(d(n-t,F)+\frac{1-\pi(F)}{2 m}\binom{n-t}{r-1}\right)\] \[\leq\mathrm{ex}(n-b,F)+b\left(d(n-t,F)+\frac{1-\pi(F)}{2m}\binom{ n-t}{r-1}\right)\] \[=\mathrm{ex}(n-t,F)+t\binom{n-t}{r-1}-(\Delta_{1}+\Delta_{2})\,,\] where \[\Delta_{1} :=\mathrm{ex}(n-t,F)-\mathrm{ex}(n-b,F)-(b-t)d(n-t,F),\] \[\Delta_{2} :=t\left(\binom{n-t}{r-1}-d(n-t,F)\right)-b\frac{1-\pi(F)}{2m} \binom{n-t}{r-1}.\] Next, we will prove that \(\Delta_{1}+\Delta_{2}>0\), which implies that \(|\mathcal{H}_{k+1}|<\operatorname{ex}(n-t,F)+t\binom{n-t}{r-1}\) contradicting our assumption. Since \(n-t\geq N_{1}/2\) is sufficiently large and \(\lim_{n\to\infty}\operatorname{ex}(n-t,F)/\binom{n-t}{r}=\pi(F)\), we have \(\operatorname{ex}(n-t,F)\leq\left(\pi(F)+\frac{1-\pi(F)}{5}\right)\binom{n-t} {r}\), and hence, \[d(n-t,F)=\frac{r\cdot\operatorname{ex}(n-t,F)}{n-t}\leq\left(\pi(F)+\frac{1- \pi(F)}{5}\right)\binom{n-t}{r-1}.\] Therefore, \[\Delta_{2} \geq t\left(1-\left(\pi(F)+\frac{1-\pi(F)}{5}\right)\right)\binom {n-t}{r-1}-mt\frac{1-\pi(F)}{2m}\binom{n-t}{r-1}\] \[\geq\frac{1-\pi(F)}{4}\binom{n-t}{r-1}t.\] On the other hand, by Lemma 3.5, we have \[d(n-t,F)\leq d(n-b,F)+4(b-t)\binom{n-b}{r-2}\leq d(n-b,F)+4mt\binom{n-t}{r-2}.\] Therefore, it follows from the Smoothness assumption and \(g\) is nondecreasing that \[\Delta_{1} =\sum_{i=1}^{b-t}\left(\operatorname{ex}(n-b+i,F)-\operatorname{ ex}(n-b+i-1,F)\right)-(b-t)d(n-t,F)\] \[\stackrel{{\text{Smoothness}}}{{\geq}}\sum_{i=0}^{b-t -1}\left(d(n-b+i,F)-g(n-b+i+1)\right)-(b-t)d(n-t,F)\] \[\stackrel{{\text{Nondecreasing}}}{{\geq}}\sum_{i=0}^{b-t -1}\left(d(n-b+i,F)-d(n-t,F)\right)-(b-t)g(n-t)\] \[\stackrel{{\text{Lemma \ref{lem:2011}}}}{{\geq}}-\sum_{i=0}^{b-t -1}4(b-t-i)\binom{n-b+i}{r-2}-(b-t)g(n-t)\] \[\geq-4m^{2}t^{2}\binom{n-t-1}{r-2}-mt\cdot g(n-t)=-\frac{4(r-1)m^{ 2}t^{2}}{n-t}\binom{n-t}{r-1}-mt\cdot g(n-t).\] Since \(t\leq\frac{1-\pi(F)}{64mr^{2}}n\), we obtain \(\frac{4(r-1)m^{2}t^{2}}{n-t}<\frac{1-\pi(F)}{8}t\). Together with \(g(n-t)\leq\frac{1-\pi(F)}{8m}\binom{n-t}{r-1}\), we obatin \[\Delta_{1}>-\left(\frac{1-\pi(F)}{8}t+mt\frac{1-\pi(F)}{8m}\right)\binom{n-t} {t-1}=-\frac{1-\pi(F)}{4}t\binom{n-t}{r-1}.\] Therefore, \(\Delta_{1}+\Delta_{2}>0\). This finishes the proof of Lemma 3.6. ### Proof of Theorem 3.1 We prove Theorem 3.1 in this section. Let us prove Part (i) first. Proof of Theorem 3.1 (i).: Fix a sufficiently large constant \(N_{0}\) and suppose that \(n\geq N_{0}\). Let \(k\leq t+1\). We say a collection \(L:=\{v_{1},\ldots,v_{k}\}\) of vertices in \(V\) is **heavy-rainbow** if there exists an injection \(f\colon[k]\to[t+1]\) such that \[d_{\mathcal{H}_{f(i)}}(v_{i})\geq d(n-t,F)+\frac{1-\pi(F)}{2m}\binom{n-t}{r-1 }\quad\text{for all }i\in[k].\] Fix a maximal collection \(L:=\{v_{1},\ldots,v_{k}\}\) of vertices that is heavy-rainbow. Without loss of generality, we may assume that \(f\) (defined above) is the identity map. Let \(V_{0}=V\setminus L\) and \(\mathcal{H}^{\prime}_{j}=\mathcal{H}_{j}[V_{0}]\) for all \(j\in[k+1,t+1]\). For every \(j\in[k+1,t+1]\) observe that there are at most \(\binom{n}{r}-\binom{n-k}{r}\) edges in \(\mathcal{H}_{j}\) that have nonempty intersection with \(L\). 
Hence, \[|\mathcal{H}^{\prime}_{j}| \geq|\mathcal{H}_{j}|-\bigg{(}\binom{n}{r}-\binom{n-k}{r}\bigg{)}\] \[\geq\mathrm{ex}(n-t,F)+\binom{n}{r}-\binom{n-t}{r}-\bigg{(}\binom {n}{r}-\binom{n-k}{r}\bigg{)}\] \[=\mathrm{ex}((n-k)-(t-k),F)+\binom{n-k}{r}-\binom{(n-k)-(t-k)}{r}.\] On the other hand, it follows from the maximality of \(L\) that \[\Delta(\mathcal{H}^{\prime}_{j})\leq\Delta(\mathcal{H}_{j}) \leq d(n-t,F)+\frac{1-\pi(F)}{2m}\binom{n-t}{r-1}\] \[=d((n-k)-(t-k),F)+\frac{1-\pi(F)}{2m}\binom{(n-k)-(t-k)}{r-1}\] holds for all \(j\in[k+1,t+1]\). By assumption, \(\frac{t-k}{n-k}\leq\frac{t}{n}\leq\frac{1-\pi(F)}{64rm^{2}}\) and \(n-k\geq n/2\) is sufficiently large, so it follows from Lemma 3.6 that there exists a collection \(\mathcal{C}=\{S_{k+1},\ldots,S_{t+1}\}\) of pairwise disjoint \(m\)-subsets of \(V_{0}\) such that \(F\subset\mathcal{H}^{\prime}_{j}[S_{j}]\) for all \(j\in[k+1,t+1]\). Next we will find a collection of rainbow copies of \(F\) from \(\{\mathcal{H}_{1},\ldots,\mathcal{H}_{k}\}\). **Claim 3.7**.: _For every \(i\in[k]\) and for every set \(B_{i}\subset V\setminus\{v_{i}\}\) of size at most \(2mt\) there exists a copy of \(F\) in \(\mathcal{H}_{i}[V\setminus B_{i}]\)._ Proof.: Fix \(i\in[k]\) and fix a set \(B_{i}\subset V\setminus\{v_{i}\}\) of size at most \(2mt\). We may assume that \(|B_{i}|=2mt\). Let \(V_{i}=V\setminus B_{i}\) and \(n_{i}=|V_{i}|=n-2mt\). Let \(\mathcal{H}^{\prime}_{i}=\mathcal{H}_{i}[V_{i}]\). Since the number of edges in \(\mathcal{H}_{i}\) containing \(v_{i}\) that have nonempty intersection with \(B_{i}\) is at most \(2mt\binom{n-1}{r-2}\), we have \[d_{\mathcal{H}^{\prime}_{i}}(v_{i}) \geq d(n-t,F)+\frac{1-\pi(F)}{2m}\binom{n-t}{r-1}-2mt\binom{n-1}{r -2}\] \[\stackrel{{\text{Lemma \ref{lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma
:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemmalemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemmalemma:lemma:lemma:lemma:lemma:lemma:lemma:lemma:lemmalemma:lemma:lemma:lemmalemma:lemma:lemmalemma:lemma:lemma:lemmalemma:lemmalemma:lemma:lemma:lemmalemma:lemma:lemmalemma:lemmalemma:lemma:lemmalemma:lemmalemma:lemmalemma:lemmalemma:lemmalemma:lemmalemma: which, by the assumption \(f(n-2mt)\geq 2emt\binom{n-2mt}{r-2}\), implies that \[d(\mathcal{H}_{i^{\prime}})=\frac{r\cdot|\mathcal{H}_{i}^{\prime}| }{n-2mt} \geq d(n-2mt,F)-2emt\binom{n-2mt-1}{r-2}\] \[>d(n-2mt,F)-f(n-2mt). \tag{7}\] It follows from (6), (7), and the Boundedness assumption that \(F\subset\mathcal{H}_{i}^{\prime}\). Let \(B=L\cup S_{k+1}\cup\cdots\cup S_{t+1}\). Now we can repeatedly apply Claim 3.7 to find a collection of rainbow copies of \(F\) as follows. First, we let \(B_{1}=B\setminus\{v_{1}\}\). Since \(|B_{1}|=k-1+m(t+1-k)\leq 2mt\), Claim 3.7 applied to \(v_{1}\), \(B_{1}\), and \(\mathcal{H}_{1}\) yields an \(m\)-set \(S_{1}\subset V\setminus B_{1}\) such that \(F\subset\mathcal{H}_{1}[S_{1}]\). Suppose that we have define \(S_{1},\ldots,S_{i}\) for some \(i\in[k-1]\) such that \(F\subset\mathcal{H}_{j}[S_{j}]\) holds for all \(j\leq i\). Then let \(B_{i+1}=(B\cup S_{1}\cup\cdots\cup S_{i})\setminus\{v_{i+1}\}\). Since \(|B_{i+1}|=k-1+m(t+1-k)+im\leq 2mt\), Claim 3.7 applied to \(v_{i+1}\), \(B_{i+1}\), and \(\mathcal{H}_{i+1}\) yields an \(m\)-set \(S_{i+1}\subset V\setminus B_{i+1}\) such that \(F\subset\mathcal{H}_{i+1}[S_{i+1}]\). At the end of this process, we obtain a collection \(\{S_{1},\ldots,S_{k}\}\) of pairwise disjoint sets such that \(F\subset\mathcal{H}_{i}[S_{i}]\) holds for all \(i\in[k]\). Since \(S_{i}\cap S_{j}=\emptyset\) for all \(i\in[k]\) and \(j\in[k+1,t+1]\), the set \(\{S_{1},\ldots,S_{t+1}\}\) yields a rainbow \(F\)-matching. Before proving Part (ii) of Theorem 3.1, we need the simple corollary of Lemma 3.6. **Lemma 3.8**.: _Let \(F\) be a nondegenerate \(r\)-graph with \(m\) vertices. Suppose that \(\operatorname{ex}(n,F)\) is \(g\)-smooth with \(g(n)\leq\frac{1-\pi(F)}{8m}\binom{n}{r-1}\) for all sufficiently large \(n\). Then there exists \(N_{1}\) such that the following holds for all integers \(n,t\in\mathbb{N}\) with \(n\geq N_{1}\) and \(t\leq\frac{1-\pi(F)}{64rm^{2}}n\)._ _Suppose that \(\mathcal{H}\) is an \(n\)-vertex \(r\)-graphs with_ \[\Delta(\mathcal{H})\leq d(n-t,F)+\frac{1-\pi(F)}{2m}\binom{n-t}{r-1}\quad\text {and}\quad\nu(F,\mathcal{H})<t+1.\] _Then_ \[|\mathcal{H}|<\operatorname{ex}(n-t,F)+t\binom{n-t}{r-1}.\] Now we are ready to prove Part (ii). Proof of Theorem 3.1 (ii).: Let \(\mathcal{H}\) be an \(n\)-vertex \(r\)-graph with \(\operatorname{ex}(n,(t+1)F)\) edges and \(\nu(F,\mathcal{H})<t+1\). Note that Theorem 3.1 (i) already implies that \(\operatorname{ex}(n,(t+1)F)\leq\binom{n}{r}-\binom{n-t}{r}+\operatorname{ex}(n- t,F)\). So, it suffices to show that \(\mathcal{H}\) is isomorphic to \(K_{t}^{r}\operatorname{\Sigma}\mathcal{G}\) for some \(\mathcal{G}\in\operatorname{EX}(n-t,F)\). 
Let \(V=V(\mathcal{H})\) and define \[L:=\left\{v\in V\colon d_{\mathcal{H}}(v)\geq d(n-t,F)+\frac{1-\pi(F)}{2m} \binom{n-t}{r-1}\right\}.\] A similar argument as in the proof of Claim 3.7 yields the following claim. **Claim 3.9**.: _For every \(v\in L\) and for every set \(B\subset V\setminus\{v\}\) of size at most \(2mt\) there exists a copy of \(F\) in \(\mathcal{H}[V\setminus B]\)._ Let \(\ell=|L|\). We have the following claim for \(\ell\). **Claim 3.10**.: _We have \(\ell\leq t\)._ Proof.: Suppose to the contrary that \(\ell\geq t+1\). By taking a subset of \(L\) if necessary, we may assume that \(\ell=t+1\). Let us assume that \(L=\{v_{1},\ldots,v_{t+1}\}\). We will repeatedly apply Claim 3.9 to find a collection \(\{S_{1},\ldots,S_{t+1}\}\) of pairwise disjoint \(m\)-sets such that \(F\subset\mathcal{H}[S_{i}]\) for all \(i\in[t+1]\) as follows. Let \(B_{1}=L\setminus\{v_{1}\}\). Since \(|B_{1}|\leq 2mt\), it follows from Claim 3.9 that there exists a set \(S_{1}\subset V\setminus B\) such that \(F\subset\mathcal{H}[S_{1}]\). Now suppose that we have found pairwise disjoint \(m\)-sets \(S_{1},\ldots,S_{i}\) for some \(i\leq t\). Let \(B_{i+1}=(L\cup S_{1}\cup\cdots\cup S_{i})\setminus\{v_{i}\}\). It is clear that \(|B_{i+1}|\leq 2mt\). So it follows from Claim 3.9 that there exists a set \(S_{i+1}\subset V\setminus B\) such that \(F\subset\mathcal{H}[S_{i+1}]\). Repeat this process for \(t+1\) times, we find the collection \(\{S_{1},\ldots,S_{t+1}\}\) that satisfies the assertion. However, this contradicts the assumption that \(\nu(F,\mathcal{H})<t+1\). Let \(V_{0}=V\setminus L\) and \(\mathcal{H}_{0}=\mathcal{H}[V_{0}]\). The following claim follows from a similar argument as in the last paragraph of the proof of Theorem 3.1. **Claim 3.11**.: _We have \(\nu(F,\mathcal{H}_{0})<t-\ell+1\)._ If \(\ell=t\), then Claim 3.11 implies that \(\mathcal{H}_{0}\) is \(F\)-free. Therefore, it follows from \[|\mathcal{H}_{0}|\geq|\mathcal{H}|-\left(\binom{n}{r}-\binom{n-t}{r}\right)= \operatorname{ex}(n-t,F)\] that \(\mathcal{H}_{0}\in\operatorname{EX}(n-t,F)\) and \(d(v)=\binom{n-1}{r-1}\) for all \(v\in L\), which implies that \(\mathcal{H}=K_{t}^{r}\mathrel{\hbox to 0.0pt{\raisebox{1.29pt}{$\Sigma$}} \raisebox{-1.29pt}{$\mathcal{G}$}}\) for some \(\mathcal{G}\in\operatorname{EX}(n-t,F)\). If \(\ell\leq t-1\), then it follows from \(\Delta(\mathcal{H}_{0})\leq d(n-t,F)+\frac{1-\pi(F)}{2m}\binom{n-t}{r-1}\), \(\nu(F,\mathcal{H}_{0})<t-\ell+1\), and Lemma 3.8 that \[|\mathcal{H}_{0}|<\operatorname{ex}(n-t,F)+(t-\ell)\binom{n-t}{r-1}.\] Consequently, \[|\mathcal{H}|\leq|\mathcal{H}_{0}|+\binom{n}{r}-\binom{n-\ell}{r} <\operatorname{ex}(n-t,F)+(t-\ell)\binom{n-t}{r-1}+\binom{n}{r}- \binom{n-\ell}{r}\] \[\leq\operatorname{ex}(n-t,F)+\binom{n}{r}-\binom{n-t}{r},\] a contradiction. ## 4 Proofs of Theorems 1.9 and 1.11 In this section, we prove Theorems 1.9 and 1.11. Before that, let us introduce some definitions and prove some preliminary results. ### Preliminaries The following fact concerning \(\delta(n,F)\) for all hypergraphs \(F\). **Fact 4.1**.: _Let \(F\) be an \(r\)-graph and \(n\geq 1\) be an integer. Then every maximum \(n\)-vertex \(F\)-free \(r\)-graph \(\mathcal{H}\) satisfies \(\delta(\mathcal{H})\geq\delta(n,F)\). In particular, \(d(n,F)\geq\delta(n,F)\)._ Proof.: Let \(v\in V(\mathcal{H})\) be a vertex with minimum degree and let \(\mathcal{H}^{\prime}\) be the induced subgraph of \(\mathcal{H}\) on \(V(\mathcal{H})\setminus\{v\}\). 
Since \(\mathcal{H}^{\prime}\) is an \((n-1)\)-vertex \(F\)-free \(r\)-graph, we have \(|\mathcal{H}^{\prime}|\leq\operatorname{ex}(n-1,F)\). On the other hand, since \(\mathcal{H}\) is a maximum \(n\)-vertex \(F\)-free \(r\)-graph, we have \(\operatorname{ex}(n,F)=|\mathcal{H}|\). Therefore, \[\delta(n,F)=\operatorname{ex}(n,F)-\operatorname{ex}(n-1,F)\leq|\mathcal{H}| -|\mathcal{H}^{\prime}|=d_{\mathcal{H}}(v)=\delta(\mathcal{H}),\] which proves Fact 4.1. For Turan pairs \((F,P)\) we have the following fact which provides a lower bound for \(\delta(n,F)\). **Fact 4.2**.: _Suppose that \((F,P)\) is a Turan pair and \(\mathcal{H}\) is a maximum \(F\)-free \(r\)-graph on \(n-1\) vertices. Then \(\delta(n,F)\geq\Delta(\mathcal{H})\). In particular, \(\delta(n,F)\geq d(n-1,F)\)._ Proof.: First, notice that \(|\mathcal{H}|=\operatorname{ex}(n-1,F)\). On the other hand, it follows from the definition of Turan pair that \(\mathcal{H}\) is an \((n-1)\)-vertex \(P\)-construction. Let \(\tilde{\mathcal{H}}\) be an \(n\)-vertex \(P\)-construction obtained from \(\mathcal{H}\) by duplicating a vertex \(v\in V(\mathcal{H})\) with maximum degree. In other words, \(\tilde{\mathcal{H}}\) is obtained from \(\mathcal{H}\) by adding a new vertex \(u\) and adding all edges in \(\{\{u\}\cup S\colon S\in L_{\mathcal{H}}(v)\}\). It is clear that \(\tilde{\mathcal{H}}\) is an \(n\)-vertex \(P\)-construction, and hence, \(\tilde{\mathcal{H}}\) is \(F\)-free. So \(|\tilde{\mathcal{H}}|\leq\operatorname{ex}(n,F)\). It follows that \[\delta(n,F)=\operatorname{ex}(n,F)-\operatorname{ex}(n-1,F)\geq|\tilde{ \mathcal{H}}|-|\mathcal{H}|=d_{\mathcal{H}}(v)=\Delta(\mathcal{H})\geq d( \mathcal{H})\geq d(n-1,F),\] which proves Fact 4.2. The proof for the following fact can be found in [46, Lemma 4.2] (with some minor modifications). **Fact 4.3**.: _Let \(F\) be an \(r\)-graph and let \(\mathcal{H}\) be an \(n\)-vertex \(F\)-free \(r\)-graph. If \(n\) is large, \(\varepsilon>0\) is small, and \(|\mathcal{H}|\geq\left(\pi(F)-\varepsilon\right)\binom{n}{r}\), then_ 1. _the set_ \[Z_{\varepsilon}(\mathcal{H}):=\left\{v\in V(\mathcal{H})\colon d_{\mathcal{H} }(v)\leq\left(\pi(F)-2\varepsilon^{1/2}\right)\binom{n-1}{r-1}\right\}\] _has size at most_ \(\varepsilon^{1/2}n\)_, and_ 2. _the induced subgraph_ \(\mathcal{H}^{\prime}\) _of_ \(\mathcal{H}\) _on_ \(V(\mathcal{H})\setminus Z_{\varepsilon}(\mathcal{H})\) _satisfies_ \(\delta(\mathcal{H}^{\prime})\geq\left(\pi(F)-3\varepsilon^{1/2}\right)\binom{ n-1}{r-1}\)_._ ### Proofs of Theorems 1.9 and 1.11 We prove Theorem 1.9 first. Proof of Theorem 1.9.: Fix an integer \(n\geq 1\). Then \[|\delta(n,F)-d(n-1,F)| \stackrel{{\text{Fact 4.2}}}{{=}}\delta(n,F)-d(n-1,F)\] \[\stackrel{{\text{Fact 4.1}}}{{\leq}}d(n,F)-d(n-1,F) \stackrel{{\text{Lemma 3.5}}}{{\leq}}4\binom{n-1}{r-2},\] which proves Theorem 1.9. Next we prove Theorem 1.11. Proof of Theorem 1.11.: Fix constants \(0<\varepsilon\ll\varepsilon_{1}\ll 1\) and let \(n\in\mathbb{N}\) be sufficiently large. Suppose to the contrary that there exists an \(n\)-vertex \(F\)-free \(r\)-graph \(\mathcal{H}\) with \(d(\mathcal{H})\geq d(n,F)-\varepsilon{n-1\choose r-1}\) and \(\Delta(\mathcal{H})\geq d(n,F)+\frac{1-\pi(F)}{8m}{n-1\choose r-1}\). Let \(V=V(\mathcal{H})\). Fix a vertex \(v\in V\) with \(d_{\mathcal{H}}(v)=\Delta(\mathcal{H})\). Let \(V_{0}=V\setminus\{v\}\) and \(\mathcal{H}_{0}=\mathcal{H}[V_{0}]\). 
Since \[|\mathcal{H}_{0}|\geq|\mathcal{H}|-{n-1\choose r-1}\geq\operatorname{ex}(n,F )-2\varepsilon{n\choose r},\] it follows from the edge-stability of \(F\) that \(\mathcal{H}_{0}\) contains a subgraph \(\mathcal{H}_{1}\) with at least \(\operatorname{ex}(n,F)-\varepsilon_{1}{n\choose r}\geq(\pi(F)-\varepsilon_{1 }){n\choose r}\) edges, and moreover, \(\mathcal{H}_{1}\) is a \(P\)-subconstruction. It follows from Fact 4.3 that the set \[Z:=\left\{v\in V\colon d_{\mathcal{H}_{1}}(v)\leq\left(\pi(F)-2\varepsilon_{1 }^{1/2}\right){n-1\choose r-1}\right\}\] has size at most \(\varepsilon_{1}^{1/2}n\), and moreover, the \(r\)-graph \(\mathcal{H}_{2}:=\mathcal{H}_{1}[V_{0}\setminus Z]\) satisfies \(\delta(\mathcal{H}_{2})\geq\left(\pi(F)-3\varepsilon_{1}^{1/2}\right){n-1 \choose r-1}\). Note that \(\mathcal{H}_{2}\subset\mathcal{H}_{1}\) is also a \(P\)-subconstruction. Define \(\mathcal{H}_{3}:=\mathcal{H}_{2}\cup\{e\in\mathcal{H}[V\setminus Z]\colon v\in e\}\). Since \(|Z|\leq\varepsilon_{1}^{1/2}n\leq\frac{1-\pi(F)}{72m}\frac{n}{r}\), we have \[d_{\mathcal{H}_{3}}(v)\geq d_{\mathcal{H}}(v)-|Z|{n-2\choose r-2} \geq d(n,F)+\frac{1-\pi(F)}{8m}{n-1\choose r-1}-\frac{1-\pi(F)}{72m }\frac{n}{r}{n-2\choose r-2}\] \[\geq d(n,F)+\frac{1-\pi(F)}{8m}{n-1\choose r-1}-\frac{1-\pi(F)}{ 72m}{n-1\choose r-1}\] \[\geq\left(\pi(F)+\frac{1-\pi(F)}{9m}\right){n-1\choose r-1}.\] Let \(n^{\prime}=|V\setminus Z|\). Note that \(\mathcal{H}_{3}\) is an \(F\)-free \(r\)-graph on \(n^{\prime}\) vertices with \(\delta(\mathcal{H}_{3})\geq\delta(\mathcal{H}_{2})\geq\left(\pi(F)-3\varepsilon _{1}^{1/2}\right){n-1\choose r-1}\), and \(v\in V(\mathcal{H}_{3})\) is a vertex such that \(\mathcal{H}_{3}-v=\mathcal{H}_{2}\) is a \(P\)-subconstruction. However, this contradicts the weak vertex-extendability of \(F\) since \(\varepsilon_{1}\) is sufficiently small and \(d_{\mathcal{H}_{3}}(v)\geq\left(\pi(F)+\frac{1-\pi(F)}{9m}\right){n-1\choose r -1}\). ## 5 Proof of Theorem 2.13 The edge-stability of \(\mathbb{F}_{3,2}\) was already proved in [27, Theorem 2.2], so by Theorems 1.7, 1.9, and 1.11, to prove Theorem 2.13 it suffices to prove the following result. **Theorem 5.1**.: _The \(3\)-graph \(\mathbb{F}_{3,2}\) is weakly vertex-extendable with respect to the pattern \(S_{3}:=(2,\{1,2,2\})\)._ Proof.: Fix \(\delta>0\). Let \(n\) be sufficiently large and \(\zeta>0\) be sufficiently small. Let \(\mathcal{H}\) be an \(n\)-vertex \(\mathbb{F}_{3,2}\)-free \(3\)-graph with \(\delta(\mathcal{H})\geq\left(\frac{4}{9}-\zeta\right){n-1\choose 2}\). Suppose that \(v\in V\) is a vertex such that \(\mathcal{H}_{0}:=\mathcal{H}-v\) is an \(S_{3}\)-subconstruction (i.e. semibipartite). It suffices to show that \(d_{\mathcal{H}}(v)\leq\left(\frac{4}{9}+\delta\right){n-1\choose 2}\). Suppose to the contrary that \(d_{\mathcal{H}}(v)>\left(\frac{4}{9}+\delta\right){n-1\choose 2}\). Let \(V_{1}\cup V_{2}\) be a bipartition of \(V_{0}:=V\setminus\{v\}\) such that every edge in \(\mathcal{H}_{0}\) contains exactly one vertex from \(V_{1}\). Since \(|\mathcal{H}_{0}|\geq\frac{3}{n}\delta(\mathcal{H})\geq\left(\frac{4}{9}-\zeta \right)\binom{n}{3}\), it follows from some simple calculations (see e.g. [27, Theorem 2.2 (ii)]) that \[\max\left\{\left|\left|V_{1}\right|-\frac{n}{3}\right|,\left|\left|V_{2}\right| -\frac{2n}{3}\right|\right\}\leq\zeta^{1/2}n. 
\tag{8}\] Recall that the link of a vertex \(u\in V(\mathcal{H})\) is defined as \[L_{\mathcal{H}}(u):=\left\{A\in\binom{V(\mathcal{H})}{r-1}\colon A\cup\{u\} \in\mathcal{H}\right\}.\] Let \(L=L_{\mathcal{H}}(v)\) for simplicity and let \[L_{1}:=L\cap\binom{V_{1}}{2},\quad L_{2}:=L\cap\binom{V_{2}}{2},\quad\text{ and}\quad L_{1,2}:=L\cap(V_{1}\times V_{2}).\] Here we abuse the use of notation by letting \(V_{1}\times V_{2}\) denote the edge set of the complete bipartite graph with parts \(V_{1}\) and \(V_{2}\). **Claim 5.2**.: _We have \(|L_{2}|\geq\frac{\delta}{8}n^{2}\)._ Proof.: Suppose to the contrary that \(|L_{2}|\leq\delta n^{2}/8\). Then it follows from the inequality \[\sum_{v^{\prime}\in V_{1}}d_{L}(v^{\prime})=2|L_{1}|+|L_{1,2}|\geq|L|-|L_{2}| \geq\left(\frac{4}{9}+\delta\right)\binom{n-1}{2}-\frac{\delta}{8}n^{2}\geq \left(\frac{2}{9}+\frac{\delta}{4}\right)n^{2}\] that there exists a vertex \(w\in V_{1}\) with \[d_{L}(w)\geq\frac{\left(\frac{2}{9}+\frac{\delta}{4}\right)n^{2}}{\left(\frac {1}{3}+\zeta^{1/2}\right)n}\geq\left(\frac{2}{3}+\frac{\delta}{8}\right)n.\] Therefore, by (8), we have \[\min\left\{|N_{L}(w)\cap V_{1}|,|N_{L}(w)\cap V_{2}|\right\}\geq\frac{\delta} {16}n.\] Fix a vertex \(u\in N_{L}(w)\cap V_{1}\) and let \(V_{2}^{\prime}=N_{L}(w)\cap V_{2}\). Since \[\binom{|V_{2}|}{2}-d_{\mathcal{H}_{0}}(u)\leq\binom{\left(\frac{2}{3}+\zeta^{ 1/2}\right)n}{2}-\left(\frac{4}{9}-2\zeta\right)\binom{n-1}{2}<\binom{\delta n /16}{2}, \tag{9}\] there exists an edge \(ab\in L_{\mathcal{H}}(u)\cap\binom{V_{2}^{\prime}}{2}\). However, this implies that \(\mathbb{F}_{3,2}\subset\mathcal{H}[\{v,u,w,a,b\}]\) (see Figure 8), a contradiction. Figure 8: Finding \(\mathbb{F}_{3,2}\) in Claim 5.2 (left) and Claim 5.3 (right). **Claim 5.3**.: _We have \(L_{1}=\emptyset\)._ Proof.: Suppose to the contrary that there exists an edge \(uw\in L_{1}\). Note that \(|L_{2}|\geq\delta n^{2}/8\) from Claim 5.2. Choosing uniformly at random a pair \(\{a,b\}\) from \(\binom{V_{2}}{2}\), we obtain \[\min\left\{\mathbb{P}\left[ab\in L_{\mathcal{H}}(u)\right],\mathbb{P}\left[ab \in L_{\mathcal{H}}(w)\right]\right\}\geq\frac{\delta(\mathcal{H}_{0})}{\binom {|V_{2}|}{2}}>\frac{\left(\frac{4}{9}-2\zeta\right)\binom{n-1}{2}}{\left( \binom{\frac{2}{3}+\zeta^{1/2}n}{2}\right)}>1-10\zeta^{1/2},\] and \[\mathbb{P}\left[ab\in L_{2}\right]=\frac{|L_{2}|}{\binom{|V_{2}|}{2}}>\frac{ \delta n^{2}/8}{\left(\binom{\frac{2}{3}+\zeta^{1/2}n}{2}\right)}>\frac{ \delta}{8}.\] So it follows from the Union Bound that \[\mathbb{P}\left[ab\in L_{2}\cap L_{\mathcal{H}}(u)\cap L_{\mathcal{H}}(w) \right]>1-\left(10\zeta^{1/2}+10\zeta^{1/2}+1-\frac{\delta}{8}\right)>0.\] Hence, there exists an edge \(ab\in L_{2}\cap L_{\mathcal{H}}(u)\cap L_{\mathcal{H}}(w)\). However, this implies that \(\mathbb{F}_{3,2}\subset\mathcal{H}[\{v,u,w,a,b\}]\) (see Figure 8), a contradiction. 
Let us define \[U_{1}:=\left\{v^{\prime}\in V_{2}\colon|N_{L}(v^{\prime})\cap V_{1}|\geq\frac{ \delta}{16}n\right\}\quad\text{and}\quad U_{2}:=\left\{v^{\prime}\in V_{2} \colon|N_{L}(v^{\prime})\cap V_{2}|\geq\frac{\delta}{16}n\right\}.\] It follows from \[\left(\frac{1}{3}+\zeta^{1/2}\right)n|U_{1}|\geq\sum_{v^{\prime}\in U_{1}}|N_{ L}(v^{\prime})\cap V_{1}|\geq|L_{1,2}|-\frac{\delta}{16}n|V_{2}\setminus U_{1}| \geq|L_{1,2}|-\frac{\delta}{16}n^{2}\] and \[\left(\frac{2}{3}+\zeta^{1/2}\right)n|U_{2}|\geq\sum_{v^{\prime}\in U_{2}}|N_{ L}(v^{\prime})\cap V_{2}|\geq 2|L_{2}|-\frac{\delta}{16}n|V_{2}\setminus U_{2}| \geq 2|L_{2}|-\frac{\delta}{16}n^{2}\] that \[|U_{1}|+|U_{2}| \geq\frac{|L_{1,2}|-\frac{\delta}{16}n^{2}}{\left(\frac{1}{3}+\zeta ^{1/2}\right)n}+\frac{2|L_{2}|-\frac{\delta}{16}n^{2}}{\left(\frac{2}{3}+\zeta ^{1/2}\right)n}\] \[\geq\frac{|L_{1,2}|-\frac{\delta}{16}n^{2}+|L_{2}|-\frac{\delta} {16}n^{2}}{\left(\frac{1}{3}+\zeta^{1/2}\right)n}\] \[=\frac{|L|-\frac{\delta}{8}n^{2}}{\left(\frac{1}{3}+\zeta^{1/2} \right)n}\geq\frac{\left(\frac{2}{9}+\frac{\delta}{4}\right)n^{2}-\frac{\delta }{8}n^{2}}{\left(\frac{1}{3}+\zeta^{1/2}\right)n}\geq\left(\frac{2}{3}+\frac{ \delta}{8}\right)n.\] So it follows from (8) that \(|U_{1}\cap U_{2}|\geq|U_{1}|+|U_{2}|-|V_{2}|\geq\frac{\delta}{16}n\). Fix a vertex \(w\in U_{1}\cap U_{2}\) and a vertex \(u\in N_{L}(w)\cap V_{1}\). Let \(V_{2}^{\prime}=N_{L}(w)\cap V_{2}\). Since \(|V_{2}^{\prime}|\geq\frac{\delta}{16}n\), similar to (9), there exists an edge \(ab\in L_{\mathcal{H}}(u)\cap\binom{V_{2}^{\prime}}{2}\). However, this implies that \(\mathbb{F}_{3,2}\subset\mathcal{H}[\{v,u,w,a,b\}]\) (see Figure 9), a contradiction. This completes the proof of Theorem 5.1. ## 6 Concluding remarks By a small modification of the proof, one can easily extend Theorems 1.7 and 1.8 to vertex-disjoint union of different hypergraphs as follows (here we omit the statement for the rainbow version). **Theorem 6.1**.: _Let \(m\geq r\geq 2,k\geq 1\) be integers and let \(F_{1},\ldots,F_{k}\) be nondegenerate \(r\)-graphs on at most \(m\) vertices. Suppose that there exists a constant \(c>0\) such that for all \(i\in[k]\) and large \(n\)\(\colon\)_ * \(F_{i}\) _is_ \(\left(c\binom{n}{r-1},\frac{1-\pi(F)}{4m}\binom{n}{r-1}\right)\)_-bounded, and_ * \(\operatorname{ex}(n,F_{i})\) _is_ \(\frac{1-\pi(F)}{8m}\binom{n}{r-1}\)_-smooth._ _Then there exist constant \(N_{0}\) such that for all integers \(n\geq N_{0}\) and \(t_{1},\ldots,t_{k}\in\mathbb{N}\) with \(t+1:=\sum_{i=1}^{k}t_{i}\in[0,\varepsilon n]\), where \(\varepsilon=\min\left\{\frac{c}{4\varepsilon m},\frac{1-\pi(F_{1})}{64 \varepsilon m^{2}},\ldots,\frac{1-\pi(F_{k})}{64\varepsilon m^{2}}\right\}\), we have_ \[\operatorname{ex}\left(n,\bigsqcup_{i=1}^{k}t_{i}F_{i}\right)\leq\binom{n}{r} -\binom{n-t}{r}+\max_{i\in[k]}\left\{\operatorname{ex}(n-t,F_{i})\right\}.\] _Moreover, if \(\max_{i\in[k]}\operatorname{ex}(n-t,F_{i})=\operatorname{ex}(n,\{F_{1}, \ldots,F_{k}\})\), then the inequality above can be replace by equality._ Recall that Allen, Bottcher, Hladky, and Piguet [2] determined, for large \(n\), the value of \(\operatorname{ex}(n,(t+1)K_{3})\) for all \(t\leq n/3\). Considering that the situation is already very complicated for \(K_{3}\), the following question seems very hard in general. **Problem 6.2**.: _Let \(r\geq 2\) be an integer and \(F\) be a nondegenerate \(r\)-graph with \(m\) vertices. 
For large \(n\) determine \(\operatorname{ex}(n,(t+1)F)\) for all \(t\leq n/m\)._ A first step towards a full understanding of Problem 6.2 would be determining the regime of \(t\) in which members in \(K_{t}^{r}\operatorname{\preceqcurlyeq}\operatorname{EX}(n-t,F)\) are extremal. Here we propose the following question, which seems feasible for many hypergraphs (including graphs). **Problem 6.3**.: _Let \(r\geq 2\) be an integer and \(F\) be an \(r\)-graph with \(m\) vertices. For large \(n\) determine the maximum value of \(s(n,F)\) such that_ \[\operatorname{ex}(n,(t+1)F)=\binom{n}{r}-\binom{n-t}{r}+\operatorname{ex}(n-t,F)\] _holds for all \(t\in[0,s(n,F)]\)._ Understanding the asymptotic behavior of \(s(n,F)\) would be also very interesting. **Problem 6.4**.: _Let \(r\geq 2\) be an integer and \(F\) be an \(r\)-graph with \(m\) vertices. Let \(s(n,F)\) be the same as in Problem 6.3. Determine the value of \(\liminf_{n\to\infty}\frac{s(n,F)}{n}\)._ Note that the result of Allen, Bottcher, Hladky, and Piguet [2] implies that \(s(n,K_{3})=\frac{2n-6}{9}\) for large \(n\). In particular, \(\lim_{n\to\infty}\frac{s(n,K_{3})}{n}=\frac{2}{9}\). It would be also interesting to consider extensions of the density Corradi-Hajnal Theorem to degenerate hypergraphs such as complete \(r\)-partite \(r\)-graphs and even cycles. The behavior for degenerate hypergraphs seems very different from nondegenerate hypergraphs, and we refer the reader to e.g. [18, Theorem 1.3] for related results on even cycles.
2307.12309
Building Extraction from Remote Sensing Images via an Uncertainty-Aware Network
Building extraction aims to segment building pixels from remote sensing images and plays an essential role in many applications, such as city planning and urban dynamic monitoring. Over the past few years, deep learning methods with encoder-decoder architectures have achieved remarkable performance due to their powerful feature representation capability. Nevertheless, due to the varying scales and styles of buildings, conventional deep learning models always suffer from uncertain predictions and cannot accurately distinguish the complete footprints of the building from the complex distribution of ground objects, leading to a large degree of omission and commission. In this paper, we realize the importance of uncertain prediction and propose a novel and straightforward Uncertainty-Aware Network (UANet) to alleviate this problem. To verify the performance of our proposed UANet, we conduct extensive experiments on three public building datasets, including the WHU building dataset, the Massachusetts building dataset, and the Inria aerial image dataset. Results demonstrate that the proposed UANet outperforms other state-of-the-art algorithms by a large margin.
Wei He, Jiepan Li, Weinan Cao, Liangpei Zhang, Hongyan Zhang
2023-07-23T12:42:15Z
http://arxiv.org/abs/2307.12309v1
# Building Extraction from Remote Sensing Images via an Uncertainty-Aware Network ###### Abstract Building extraction aims to segment building pixels from remote sensing images and plays an essential role in many applications, such as city planning and urban dynamic monitoring. Over the past few years, deep learning methods with encoder-decoder architectures have achieved remarkable performance due to their powerful feature representation capability. Nevertheless, due to the varying scales and styles of buildings, conventional deep learning models always suffer from uncertain predictions and cannot accurately distinguish the complete footprints of the building from the complex distribution of ground objects, leading to a large degree of omission and commission. In this paper, we realize the importance of uncertain prediction and propose a novel and straightforward Uncertainty-Aware Network (UANet) to alleviate this problem. Specifically, we first apply a general encoder-decoder network to obtain a building extraction map with relatively high uncertainty. Second, in order to aggregate the useful information in the highest-level features, we design a Prior Information Guide Module to guide the highest-level features in learning the prior information from the conventional extraction map. Third, based on the uncertain extraction map, we introduce an Uncertainty Rank Algorithm to measure the uncertainty level of each pixel belonging to the foreground and the background. We further combine this algorithm with the proposed Uncertainty-Aware Fusion Module to facilitate level-by-level feature refinement and obtain the final refined extraction map with low uncertainty. To verify the performance of our proposed UANet, we conduct extensive experiments on three public building datasets, including the WHU building dataset, the Massachusetts building dataset, and the Inria aerial image dataset. Results demonstrate that the proposed UANet outperforms other state-of-the-art algorithms by a large margin. The source code of the proposed UANet is available at [https://github.com/Henryjiepanli/Uncertainty-aware-Network](https://github.com/Henryjiepanli/Uncertainty-aware-Network). Building extraction, remote sensing, uncertainty-aware ## I Introduction Building extraction aims to distinguish building footprints from high-resolution remote sensing (RS) images, and has made remarkable progress in the past few decades. Owing to its potential applications, building extraction has also been extended to various fields, such as city planning [1], urban dynamic monitoring [2], and disaster detection [3]. Up to date, numerous studies have made significant contributions to the extraction of buildings from high-resolution remote sensing (RS) images ([28, 30, 36]). Compared to middle/low resolution RS images, the higher-resolution RS images provide more detailed information about ground objects, while also increasing intra-class variances and decreasing inter-class variances, posing various challenges to accurately extract building footprints [4]. To overcome the aforementioned challenges, research on building extraction has undergone a long-term development. In the early stage, a major effort was devoted to the design of more distinctive features. For example, [5] utilized multiple colors and color-invariant spaces to select the representative corners and chose some corner candidates to generate the rooftop outline. 
Based on information about entropy and color, [6] introduced texture information to differentiate between buildings and trees. Moreover, [7] firstly took advantage of the contour driven by edge-flow to extract the building boundary, and then segmented the compositional polygons of the building roof by Joint Systems Engineering Group (JSEG). Nevertheless, due to the limited robustness and representativeness, the aforementioned hand-crafted features cannot handle the complex correlation between the buildings and the background. In the past few years, deep learning algorithms have been successfully applied to RS building extraction and have become the mainstream technical tools. Initially, in order to adopt deep learning algorithms into building extraction research, some simple networks were proposed based on patch-based Fig. 1: Uncertainty visualizations between our proposed UANet and the state-of-the-art (SOTA) method for building extraction (BuildFormer [28]). (c) and (d) are achieved by the operation \(0.5-|0.5-\star|\), with \(\star\) representing the output of the \(Sigmoid\) function. annotation. [9] designed a neural network consisting of three convolutional layers and two fully connected layers to achieve the automatic extraction of buildings. [10] designed a patch-based convolutional neural network (CNN) architecture that replaced fully connected layers with global average pooling (GAP) to improve extraction performance. However, the patch-based classification method has two unavoidable drawbacks [11], namely, a huge computational burden and limited long-distance information exchange. As a result, these methods cannot fully exploit contextual information in high-resolution RS images, making it difficult to completely and accurately extract buildings from complex backgrounds. Fully Convolutional Network (FCN) [12] is a landmark pixel-based segmentation method that provides an encoder-decoder architecture, which has become a paradigm. In detail, the encoders process the input image to generate multi-level features, and the decoders adopt various strategies to output the semantic results. Currently, typical backbone networks, such as VGG [13], ResNet [14], ResNext [15], Res2Net [16], and even some networks based on transformers [17, 18], are selected as encoders. After obtaining hierarchical features from the encoder, a sequence of decoder structures is proposed. For designing the decoders, the general strategy is to take advantage of multi-level encoded features from the aspects of modeling multi-scale contextual information [22, 31, 32, 33, 34, 63], mining long-range dependency information [21, 23, 30, 41, 64, 65], or feature refinement [25, 27, 62]. Regarding the decoding strategies for modeling multi-scale contextual information, two typical plug-and-play modules, namely, Atrous Spatial Pyramid Pooling (ASPP) [22] and Receptive Field Block (RFB) [31], have been proposed. Furthermore, [32] enhanced the extraction of local features with a reasonable stacking of small-dilation-rate dilation convolutions, thereby effectively reducing the cases of ambiguous results for small-sized building segmentation. [33] proposed a novel Adaptive Screening Feature Network to teach the network to adjust the receptive field and adaptively enhance useful feature information. Moreover, [34] utilized a graph-based scale-aware structure to model and reason the interactions between different scale features. 
Regarding the decoding strategies for mining long-range dependency information, there have been many notable works on the design of both encoders and decoders. Given that CNNs are limited to local connections, some researchers have replaced CNNs with transformers in the design of encoders. Current transformer networks, such as the Swin Transformer [17], the Pyramid Vision Transformer [18], and so on, have all proved their strength in capturing long-distance information. Additionally, some works have introduced dedicated modules to establish long-distance contextual information in the decoders. For example, an Asymmetric Pyramid Non-Local Block [19] was introduced by [20] to extract contextual global information. [21] combined an ASPP [22] and a Non-Local Block [23] to propose a pyramidal self-attentive module that can be conveniently embedded in the network. [30] took advantage of a local-global dual-stream network to adaptively capture local and long-range information for accurate building extraction. Regarding the decoding strategies for feature refinement, the decoder is expected to model long-range relationships while accurately locating spatial positions, something CNNs often overlook owing to their spatial transformation invariance. Therefore, some works have utilized boundaries and contours to refine the final segmentation. [25] proposed the Feature-Pairwise Conditional Random Field based on the Graph Convolutional Network (GCN) [26], which is a conditional random field over pairs of potential pixels with local constraints, incorporating the feature maps extracted by the CNN. [29] analyzed the conflict between the downsampling operations of deep CNNs and accurate boundary segmentation, and introduced a Gated GCN into the CNN structure to generate clear boundaries and fine-grained pixel-level classification results. Furthermore, [27] designed a boundary refinement module (BR) to progressively refine the prediction of the building by perceiving and refining its edges. Although taking boundary information into account seems to be an appropriate way to refine the details of the segmentation, the richness of the boundary samples is an important factor that cannot be ignored and limits the performance of such methods.

Fig. 2: The structure of the Uncertainty-Aware Network, which is composed of a general encoder-decoder, a prior information guide module (PIGM), and an uncertainty-aware fusion module (UAFM).

In summary, great progress has been made in high-resolution RS building extraction using deep-learning-based methods. However, due to the complex distribution of ground objects in RS images and the diverse appearances of buildings, current decoding strategies inevitably misinterpret buildings, resulting in uncertain predictions, which is clearly reflected in Fig. 1c. As analyzed in [48], the reason why current decoding strategies fail in some difficult cases is that they pay insufficient attention to hard-to-segment samples. Especially in RS images, some buildings are not salient enough and do not appear frequently, which results in model uncertainty. Therefore, resolving uncertain predictions is the key to further improving the performance of building extraction models. In fact, uncertainty-aware learning has been studied in the general segmentation [52, 56, 57] and detection [58] areas. Initially, uncertainty analysis was tied to complex networks (Bayesian deep learning [42, 48], \(etc.\)) with a huge computational cost.
Subsequently, the general frameworks (Probabilistic Representational Network [51, 52], Generative Adversarial Network (GAN), \(etc.\)) are designed to improve the prediction certainty. However, when dealing with a building extraction task, these models/frameworks may fail to explore the characteristics of RS and result in unsatisfied results. In this paper, we realize the importance of building uncertainty prediction, and propose the Uncertainty-Aware Network (UANet). The proposed UANet can automatically rank the background uncertainty and building uncertainty of RS, and progressively guide the attention to these uncertain pixels during the interaction of features. In detail, the proposed UANet first adopts a conventional encoder-decoder structure to output multi-level features and a relatively uncertain extraction map. On the basis of these results, we attempt to further solve the uncertainty problem and divide the following process into two key parts. At the beginning, we put forward a prior information guide module (PIGM) via a novel cross-attention mechanism to realize the enhancement of both spatial and channel aspects. Then, we propose the uncertainty-aware fusion module (UAFM) and innovatively invent an uncertainty rank algorithm (URA) to realize the elimination of uncertainty as much as possible. As shown in Figs. 1c and 1d, compared with BuildFormer, our UANet shows less uncertainty particularly around the edges. The main contributions of this study are as follows. 1. We introduce the uncertainty concept to building extraction and propose the UANet that can maintain high certainty faced with diverse scales, complex backgrounds, and various building appearances, \(etc.\) 2. We put forward a novel feature refinement way named PIGM from both spatial and channel aspects. 3. We propose the UAFM and the URA to relieve the uncertainty condition and achieve a refined extraction map with low uncertainty. The rest of this paper is organized as follows. In Section 2, we analyze and introduce the structure and components of our UANet. The experiments and results analysis are presented in Section 3, the ablation study of our proposed modules is given in Section 4, and the conclusions are outlined in Section 5. ## II Methodology ### _Overview_ Aiming to eliminate the uncertainty of the final extraction map as much as possible, we propose the Uncertainty-Aware Network (UANet). As shown in Fig.2, we first adopt a general encoder-decoder network to get a relatively uncertain extraction map. Regarding the general encoder-decoder part, we adopt VGG-16 [13] as the encoder backbone to extract multi-level features from the input image, introduce a multi-branch dilation convolution blocks to enhance the encoded features (\(E_{i},\left\{i=2,3,4,5\right\}\)), and use a typical cross-fusion strategy (Feature Pyramid Network (FPN [50])) to obtain a relatively uncertain extraction map \(M_{5}\). Based on the output features (\(F_{i},\left\{i=1,2,3,4,5\right\}\)) and uncertain extraction map \(M_{5}\), our UANet acts as a decoder strategy to deal with the general building extraction challenges and output a refined extraction map with low uncertainty. In detail, we first put forward a prior information guide module (PIGM) to take advantage of the prior information of the obtained extraction map to enhance the highest-level feature. Subsequently, the uncertainty-aware fusion module (UAFM) is utilized progressively to eliminate the uncertainty of features from high level to low level. 
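To make the decode path just described concrete, the following PyTorch skeleton sketches how the encoder/FPN features \(F_{1}\)–\(F_{5}\) and the coarse map \(M_{5}\) could flow through a PIGM step and then level-by-level UAFM refinement. It is a hypothetical sketch, not the authors' released code (see the linked repository for that): the channel widths are assumed, and the PIGM/UAFM blocks are replaced by simple convolutional stand-ins, with the real modules detailed in the next two subsections.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UANetDecodeSkeleton(nn.Module):
    """Hypothetical skeleton of the UANet decode path (stand-ins only)."""
    def __init__(self, channels=(64, 128, 256, 512, 512)):
        super().__init__()
        # Stand-in for the PIGM; the real module is given in Sec. II-B.
        self.pigm = nn.Conv2d(channels[4], channels[4], 3, padding=1)
        # Stand-ins for the UAFM at levels 4..1; the real module is in Sec. II-C.
        self.uafm = nn.ModuleList([
            nn.Conv2d(channels[i] + channels[i + 1], channels[i], 3, padding=1)
            for i in range(4)
        ])
        self.heads = nn.ModuleList([nn.Conv2d(c, 1, 3, padding=1) for c in channels])

    def forward(self, feats, m5):
        # feats = [F1, ..., F5] from the encoder + FPN, m5 = coarse map M5.
        g = self.pigm(feats[4] * torch.sigmoid(m5) + feats[4])     # PIGM stand-in
        maps = [m5]
        for i in range(3, -1, -1):                                 # levels 4 -> 1
            g_up = F.interpolate(g, size=feats[i].shape[-2:],
                                 mode="bilinear", align_corners=False)
            g = self.uafm[i](torch.cat([feats[i], g_up], dim=1))   # UAFM stand-in
            maps.append(self.heads[i](g))
        return maps  # [M5, M4, M3, M2, M1]; M1 is the refined extraction map
```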
Finally, UANet outputs the final refined extraction map with lower uncertainty. ### _Prior Information Guide Module_ In fact, the process to achieve the relatively uncertain extraction map \(M_{5}\) is a general decoding strategy, which cannot solve the current uncertainty problem. However, we believe that the information provided by \(M_{5}\) is still very valuable. Therefore, to achieve a more accurate and less uncertain prediction, we try to consider the extraction map \(M_{5}\) as prior knowledge and take advantage of it to realize the enhancement of the features. As mentioned before, the highest-level feature with the largest dimension lacks spatial information due to the smallest resolution. Based on this consideration, we propose the Prior Information Guide Module (PIGM) to guide the highest-level feature to realize refinement from both spatial and channel aspects. As shown in Fig. 3, we first utilize \(M_{5}\) to guide the highest-level feature to learn the corresponding spatial relationships. Subsequently, we continue to use \(M_{5}\) to model the channel dependence of the enhanced feature. In detail, the inputs of PIGM are \(F_{5}\in\mathbb{R}^{C\times H\times W}\) and \(M_{5}\in\mathbb{R}^{1\times H\times W}\). At the beginning, we split the input feature \(F_{5}\) from the channel dimension and get \(C\) feature maps \(F_{5}^{i}\in\mathbb{R}^{1\times H\times W}\): \[F_{5}^{i}=Split(F_{5}),i=1,2,...,C. \tag{1}\] On the one hand, we reshape \(F_{5}^{i}\) to compress its dimension and obtain \(V_{5}^{i}\in\mathbb{R}^{C\times N}\) (\(N=H\times W\)). On the other hand, we reshape and transpose \(F_{5}^{i}\) to get \(Q_{5}^{i}\in\mathbb{R}^{N\times C}\): \[\begin{split} V_{5}^{i}&=Reshape(F_{5}^{i}),i=1,2,...,n,\\ Q_{5}^{i}&=Transpose(Reshape(F_{5}^{i})),i=1,2,...,n.\end{split} \tag{2}\] Subsequently, we need to guide the input feature \(F_{5}\) to learn the spatial information by exploring the prior map \(M_{5}\), so we reshape \(M_{5}\) to \(K_{5}\in\mathbb{R}^{N\times C}\): \[K_{5}=Reshape(M_{5}). \tag{3}\] Then, we perform the cross-attention, which conducts the matrix multiplication between \(Q_{5}^{i}\) and \(K_{5}\) via Softmax function to obtain \(T_{5}^{i}\in\mathbb{R}^{N\times N}\) that represents the relationship between each channel of \(F_{5}\) and the prior map \(P_{5}\): \[T_{5}^{i}=Softmax(Q_{5}^{i}\otimes K_{5}),i=1,2,...,n. \tag{4}\] We take use of the relation map \(T_{5}^{i}\) to multiply \(V_{5}^{i}\) and achieve the enhanced feature map \(O_{5}^{i}\in\mathbb{R}^{1\times H\times W}\). Meanwhile, all the enhanced feature maps from \(O_{5}^{i}\) to \(O_{5}^{i}\) are concatenated to formulate \(C_{5}\). At last, we can obtain the spatial enhanced feature \(R_{5}\in\mathbb{R}^{C\times H\times W}\) with a residual structure: \[\begin{split}& O_{5}^{i}=V_{5}^{i}\otimes T_{5}^{i},i=1,2,...,C, \\ & C_{5}=Concat(O_{5}^{0},O_{5}^{i},...,O_{5}^{C}),\\ & R_{5}=\alpha\times C_{5}+F_{5},\end{split} \tag{5}\] where \(\alpha\) is a learnable parameter. From another perspective, we still attempt to utilize \(M_{5}\) to achieve the enhancement of \(R_{5}\). As shown in Fig. 3, we reshape and transpose \(M_{5}\) to obtain \(Q_{5}^{{}^{\prime}}\in\mathbb{R}^{N\times 1}\): \[Q_{5}^{{}^{\prime}}=Transpose(Reshape(M_{5})). \tag{6}\] At the same time, we reshape the spatial enhanced feature \(R_{5}\) as \(K_{5}^{{}^{\prime}}\in\mathbb{R}^{C\times N}\), and use the matrix multiplication among \(Q_{5}\) and \(K_{5}\) and the Sigmoid function to get \(S_{5}\in\mathbb{R}^{C\times 1}\). 
Afterward, by multiplying \(S_{5}\) and \(R_{5}\) associated with a residual structure, we can get the final feature \(G_{5}\in\mathbb{R}^{C\times H\times W}\): \[\begin{split}& K_{5}^{{}^{\prime}}=Reshape(R_{5}),\\ & S_{5}=Sigmoid(Q_{5}^{{}^{\prime}}\otimes K_{5}^{{}^{\prime}}), \\ & G_{5}=\beta\times S_{5}\times R_{5}+R_{5},\end{split} \tag{7}\] where \(\beta\) is a learnable parameter. ### _Uncertainty-Aware Fusion Module_ In the previous stages, we successively acquire the uncertain extraction map \(M_{5}\) and the enhanced feature \(G_{5}\). However, the uncertainty caused by the intricate backgrounds and various scales still remains. Therefore, we present the uncertainty-aware fusion module (UAFM) to tackle the high uncertainty issue, as illustrated in Fig. 4. As we all know, all the deep learning approaches output the extraction results by using the \(Softmax\) function to allocate the corresponding probability for each pixel, which can be directly used to reflect the uncertainty of the model in its predictions. As mentioned before, in RS images, some buildings are not salient enough and do not appear frequently, which will result in the uncertainty of model. To overcome such a uncertainty problem, we directly use the \(Sigmoid\) function to get the corresponding probabilities of all pixels in the extraction map \(M\) from spatial perspective, then we subtract all values of the probability map with \(0.5\) to measure the uncertainty belonging to foreground (\(U_{f}\)) and meanwhile subtract \(0.5\) with all values of the probability map to measure the uncertainty belonging to background (\(U_{b}\)), \[\begin{split}& U_{f}=Sigmoid(M)-0.5,\\ & U_{b}=0.5-Sigmoid(M).\end{split} \tag{8}\] Subsequently, we rank the uncertainty of foreground and background into five levels using the Uncertainty Rank Algorithm (URA), that is, the range of \([-0.5,0)\) represents not in consideration (rank 0), the range of \([0,0.1)\) indicates the highest uncertainty (rank 5), the range of \([0.1,0.2)\) represents the relatively high uncertainty (rank 4), the range of \([0.2,0.3)\) represents the central uncertainty (rank 3), the range of \([0.3,0.4)\) indicates moderately low uncertainty (rank 2), and the range of \([0.4,0.5]\) denotes the lowest uncertainty (rank 1). We then assign corresponding uncertainty levels as weights to the pixels, with the principle of attaching higher weights to pixels with higher uncertainty, so as to pay more attention on uncertain areas. We denote URA as: \[\begin{split}\mathcal{U}(i,j)=\left\{\begin{array}{ll}\lfloor \frac{0.5-U_{i,j}}{0.5}\rfloor,U_{i,j}>=0,\\ 0,U_{i,j}<0,\end{array}\right.\end{split} \tag{9}\] Fig. 3: The structure of the Prior Information Guide Module (PIGM). where \(U_{i,j}\) means the pixel in \(i_{th}\) row and \(j_{th}\) column of \(U_{f}\) or \(U_{b}\). Therefore, after using URA to allocate the uncertainty level about the uncertainty maps of the foreground and the background, respectively, we can obtain the foreground uncertainty rank map (\(R_{f}\)) and the background uncertainty rank map (\(R_{b}\)). \[\begin{split} R_{f}&=URA(Sigmoid(M)-0.5),\\ R_{b}&=URA(0.5-Sigmoid(M)),\end{split} \tag{10}\] We take the fusion of the highest \(G_{5}\) and \(F_{4}\) for example to illustrate the whole fusion process. Specifically, the inputs of UAFM in this layer are the enhanced feature \(G_{5}\), the \(F_{4}\), and the uncertain extraction map \(M_{5}\). 
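Before walking through the fusion, the uncertainty maps and rank maps of Eqs. (8)-(10) can be summarised in a few lines. The sketch below is a simplified re-implementation, not the authors' code; the banding follows the five verbal ranges listed above (rank 5 for the most uncertain pixels, rank 0 for pixels on the wrong side of the decision boundary) rather than the compact expression in Eq. (9).

```python
import torch

def uncertainty_rank(u):
    # u lies in [-0.5, 0.5]; negative values get rank 0, otherwise bands of
    # width 0.1 map [0, 0.1) -> 5, [0.1, 0.2) -> 4, ..., [0.4, 0.5] -> 1.
    ranks = torch.zeros_like(u)
    mask = u >= 0
    ranks[mask] = torch.clamp(5 - torch.floor(u[mask] / 0.1), min=1, max=5)
    return ranks

def rank_maps(m_logits):
    p = torch.sigmoid(m_logits)
    u_f = p - 0.5                      # Eq. (8): foreground uncertainty U_f
    u_b = 0.5 - p                      # Eq. (8): background uncertainty U_b
    return uncertainty_rank(u_f), uncertainty_rank(u_b)   # Eq. (10): R_f, R_b
```

The two rank maps then act as pixel-wise weights in the fusion described next, so that the most uncertain pixels receive the largest attention.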
Regarding the uncertainty-aware enhancement, on the one hand, we apply URA to \(M_{5}\) so that we can get the corresponding foreground uncertainty rank map (\(R_{f}^{5}\)) and background uncertainty rank map (\(R_{b}^{5}\)). Then we directly use \(G_{5}\) to multiply with them to highlight the uncertain pixels from both the foreground and background perspectives. Subsequently, we concatenate these two enhanced features and recover its original channel to get \(G_{5}^{u}\) by a \(1\times 1\) convolution operation. \[G_{5}^{u}=Conv_{1\times 1}(Concat(R_{f}^{5}*G_{5},R_{b}^{5}*G_{5})), \tag{11}\] On the other hand, we use the nearest neighbor interpolation method to upsample \(R_{f}^{5}\) and \(R_{b}^{5}\) to the same size as \(F_{4}\), and use the same operation to highlight \(F_{4}\) as the enhancement of \(G_{5}\), and we can get \(F_{4}^{u}\). \[\begin{split} F_{4}^{u}=Conv_{1\times 1}(Concat(Up(R_{f}^{5})*F_{4},Up(R_{b}^{5})*F_{4})),\end{split} \tag{12}\] Finally, we upsample \(G_{5}^{u}\) to match the size of \(F_{4}^{u}\), concatenate them together, and use a \(3\times 3\) convolution operation to get the fused feature \(G_{4}\), which can output the less uncertain extraction map \(M_{4}\) by a \(3\times 3\) convolution operation. \[\begin{split} G_{4}&=Conv_{3\times 3}(Concat(F_{4}^{u},G_{5}^{u})),\\ M_{4}&=Conv_{3\times 3}(G_{4}),\end{split} \tag{13}\] As shown in Fig. 4, we employ the uncertainty-aware fusion module (UAFM) to fuse the features \(G_{i}\) and \(F_{i-1}\) layer-by-layer and decode the fused feature to output the corresponding certainty-improved map \(M_{i-1}\). With such a UAFM, we can utilize \(M_{4}\) to fuse \(G_{4}\) and \(F_{3}\) and achieve output \(M_{3}\), utilize \(M_{3}\) to fuse \(G_{3}\) and \(F_{2}\) and achieve output \(M_{2}\), and utilize \(M_{2}\) to fuse \(G_{2}\) and \(F_{1}\) and output \(M_{1}\), where \(M_{1}\) can be viewed as the final refined extraction map with the lowest uncertainty. On the whole, we use the simple binary cross-entropy (\(BCE\)) loss function to supervise all the outputs, and the overall loss is : \[Loss=\sum_{i=1}^{5}BCE(M_{i},GT), \tag{14}\] where GT represents the ground truth. ## III Experiments ### _Dataset_ To verify the superiority of our proposed UANet, we select three public building extraction datasets to conduct extensive experiments, including the WHU building dataset, the Massachusetts building dataset, and the Inria building dataset. The detailed information of the whole three datasets is described as follows: 1. WHU building dataset [38] is composed of two types of images, \(i.e.\), satellite images, and aerial images. In our experimental settings, we only conducted experiments on the aerial image dataset, which has \(8,189\) image tiles (\(4,736\) tiles for training, \(1,036\) tiles for validation, and \(2,416\) tiles for testing). The spatial resolution is just \(0.3m\), and the whole aerial image dataset consists of \(22,000\) buildings and covers a huge area of over \(450km^{2}\). 2. Massachusetts building dataset [40] owns 151 aerial images of the Boston area with spatial resolution \(1m\). Composed of two types of scenes, \(i.e.\), urban, and suburban, the Massachusetts building dataset covers almost \(340km^{2}\) areas, and all the image sizes are of \(1500\times 1500\) pixels. The official dataset contains a training set (137 images), a validation set (4 images), and a testing set (10 images). We adopt some data augmentation ways to expand the original training set to 411 images. 
For the training phase, we randomly crop the images and labels into 1024 x 1024 pixels as input. For both the validation and testing phases, the images and labels are padded to \(1536\times 1536\) pixels to ensure divisibility by 32. It is worth mentioning that we ignore the padded parts when computing evaluation metrics. 3. Inria building dataset [39] contains 360 images collected from 5 cities (Austin, Chicago, Kitsap, Tyrol, and Vienna). Referring to the official suggestion, we select 1 to 5 tiles from each city for validation and the rest for training. We first pad the original 5000 x 5000 images to 5120 x 5120 pixels and then crop them into 512 x 512 pixel image tiles. Second, we remove the images without buildings, with the remaining 9737 and 1942 image tiles used for training and validation, respectively.

Fig. 4: The structure of the Uncertainty-Aware Fusion Module (UAFM).

### _Evaluation Metrics_ To conduct a broad and comprehensive evaluation of our proposed model, we chose four metrics, \(i.e.\), intersection over union (\(IoU\)), F1 score (\(F1\)), Precision, and Recall. First, we use \(TP\), \(FP\), and \(FN\) to represent the true positives, the false positives, and the false negatives, respectively. Then, we give the definitions of the four evaluation metrics as follows: \[IoU=\frac{TP}{TP+FP+FN} \tag{15}\] \[Precision=\frac{TP}{TP+FP} \tag{16}\] \[Recall=\frac{TP}{TP+FN} \tag{17}\] \[F1=\frac{2\times Precision\times Recall}{Precision+Recall} \tag{18}\] ### _Experimental Settings_ To comprehensively evaluate our proposed model, all related experiments are implemented in PyTorch 1.8.1 (CUDA 11.1) on an NVIDIA GeForce RTX 3090 GPU with 24GB of memory. In the training phase, we selected the AdamW [61] optimizer and employed the cosine strategy to adjust the learning rate. Additionally, we utilized random horizontal and vertical flipping to augment the training data.
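Before turning to the results, note that the four metrics of Eqs. (15)-(18) reduce to a few lines of code once the confusion counts are accumulated. The sketch below is only illustrative (it assumes binary NumPy masks, does not exclude padded regions, and ignores division-by-zero corner cases); it is not the authors' evaluation script.

```python
import numpy as np

def building_metrics(pred, gt):
    """IoU, F1, Precision, Recall for binary masks, following Eqs. (15)-(18)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    iou = tp / (tp + fp + fn)                            # Eq. (15)
    precision = tp / (tp + fp)                           # Eq. (16)
    recall = tp / (tp + fn)                              # Eq. (17)
    f1 = 2 * precision * recall / (precision + recall)   # Eq. (18)
    return iou, f1, precision, recall
```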
According to the \begin{table} \begin{tabular}{c|c|c c c|c c c c|c c c c} \hline \multirow{2}{*}{Baseline} & \multirow{2}{*}{Year} & \multicolumn{4}{c|}{WHU (\%)} & \multicolumn{4}{c|}{Massachusetts (\%)} & \multicolumn{4}{c}{Inira (\%)} \\ \cline{3-14} & & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) \\ \hline UNet \# & 2015 & 85.92 & 92.39 & 91.78 & 93.01 & 68.48 & 81.47 & 80.99 & 81.96 & 74.40 & 85.32 & 86.39 & 84.28 \\ HRNet \# & 2019 & 85.64 & 92.27 & 91.69 & 92.85 & 69.39 & 81.93 & 81.49 & 82.38 & 75.03 & 85.73 & 86.56 & 84.92 \\ MA-FCN & 2019 & 90.70 & 95.15 & 95.20 & 95.10 & 73.80 & 84.93 & 87.07 & 82.89 & 79.67 & 88.68 & 89.82 & 87.58 \\ DSNet \# & 2020 & 89.54 & 94.48 & 94.05 & 94.91 & 75.04 & 85.74 & 87.56 & 83.99 & 81.02 & 89.52 & 90.32 & 88.73 \\ CBRNet & 2021 & 91.40 & 95.51 & 95.31 & 95.70 & 74.55 & 85.42 & 86.50 & 84.36 & 81.10 & 89.56 & 89.93 & 89.20 \\ MSNet & 2022 & 89.07 & 93.96 & 94.83 & 93.12 & 70.21 & 79.33 & 78.54 & 80.14 & - & - & - & - \\ BOMSNet & 2022 & 90.15 & 94.80 & 95.14 & 94.50 & 74.71 & 85.13 & 86.64 & 83.68 & 78.18 & 87.75 & 87.93 & 87.58 \\ LCS & 2022 & 90.71 & 95.12 & 95.38 & 94.86 & - & - & - & - & 78.82 & 88.15 & 89.58 & 86.77 \\ BuildFormer \# & 2022 & 90.73 & 95.14 & 95.15 & 95.14 & 75.03 & 85.73 & 86.69 & 84.79 & 81.24 & 89.71 & 90.65 & 88.78 \\ BCTNet & 2023 & 91.15 & 95.37 & 95.47 & 95.27 & 75.04 & 85.74 & 87.57 & 83.99 & - & - & - & - \\ FD-Net & 2023 & 91.14 & 95.36 & 95.27 & 95.46 & 74.54 & 85.42 & 87.95 & 83.02 & - & - & - & - \\ \hline **Ours-UANet** & & **92.15** & **95.91** & **95.96** & **95.86** & **76.41** & **86.63** & **87.94** & **85.35** & **83.08** & **90.76** & **92.04** & **89.52** \\ \hline \end{tabular} \# means that the results were obtained by ourselves. The codes of other compared methods are not released, we directly copy the results from the original papers. \end{table} TABLE I: Performance comparison with baseline models on the test datasets. \(\uparrow\) indicates the higher score the better and vice versa. The best score for each metric is marked in red. The second score for each metric is underlined. Fig. 5: Visual Comparison on WHU building dataset. experimental settings in BuildFormer [28] and our hardware conditions, for the WHU building dataset, we set the initial learning rate to \(10^{-3}\) and the batch size to 12. For the Massachusetts building dataset, we set the initial learning rate to \(5e^{-4}\) and the batch size to 2. And for the Inria building dataset, we set the initial learning rate to \(5e^{-4}\) and the batch size to 12. ### _Compared Methods_ For a fair comparison, we selected two typical CNNs, \(i.e.\), UNet [46] based on VGG-16 [13], and HRNet [47] for the comparison. Meanwhile, we selected nine state-of-the-art deep learning methods designed for building extraction, \(i.e.\), MACFCN [36], DSNet [30], CBRNet [27], MSNet [34], BOMSNet [37], LCS [35], BuildFormer [28], BCTNet [59], and FD-Net [60]. ### _Evaluation on WHU building dataset_ #### Iv-E1 Quantitative Comparison Table I lists the overall quantitative evaluation results of the different methods obtained on the WHU building dataset. Compared with other SOTA methods, our UANet can achieve the best performance on all metrics. 
In detail, our UANet outperforms SOTA method CBRNet ([27]) by \(0.75\) percentage on the \(IoU\) metric, \(0.40\) percentage on the \(F1\) metric, \(0.65\) percentage on the \(Precision\) metric, and \(0.16\) percentage on the \(Recall\) metric. The significant advantages of these metrics reflect the superiority of our method, proving that our proposed architecture with uncertainty consideration can greatly improve the effect of building extraction. #### Iv-E2 Visual Comparison In order to compare our UANet with other SOTA methods more intuitively, we visualize the extraction results of all methods. As shown in Fig. 5, the qualitative results for UANet and the other methods on the WHU buildings dataset are presented. For the first image, UNet, Deeplabv3+, HRNet, and BuildFormer all fail to extract the building in the red circle, while DSNet performs slightly better. By contrast, our UANet can accurately extract the buildings in the pink circle, which is closer to the ground truth. For the second image, all the compared methods wrongly recognize the road in the red circle as the part of buildings, but our UANet avoids this problem perfectly. Finally, for the third image, all the compared methods ignore the small building in the red circle, but our UANet demonstrates its superiority over the compared methods and successfully extracts this small building. It is evident that the ignored buildings in the three examples above are in a complex background, which leads to the uncertainty of the model. Faced with such a situation, our UANet is able to achieve satisfactory results with less uncertainty. ### _Evaluation on Massachusetts building dataset_ #### Iv-F1 Quantitative Comparison Table I lists the overall quantitative evaluation results of the different methods obtained on the Massachusetts building dataset. Compared with other SOTA methods, our UANet can achieve the best performance on all metrics. Specifically, compared with the SOTA method DSNet, our UANet can outperform it by \(1.37\) percentage on the IoU metric, \(0.89\) percentage on the F1 metric, \(0.37\) percentage on the Precision metric, and \(0.56\) percentage on the Recall metric. Since the same backbone is used as other compared methods (except BuildFormer), the huge advantage of our UANet indicates that our decoding strategy is very effective. #### Iv-F2 Visual Comparison As shown in Fig. 6, we present three visual examples of all the compared methods and our UANet on the Massachusetts building dataset. Due to the low image resolution of the dataset and the dense distribution of buildings Fig. 6: Visual Comparison on Massachusetts building dataset. in the image, it is evident that all the methods have a lot of errors in their extraction results. However, it is obvious that our extraction result extracts more details such as texture and edge than the compared methods, which is most noticeable in the red box area. The more complex the environment, the better our UANet performs than other compared methods, as our UANet can highlight the uncertain areas and eliminate them to a large extent. ### _Evaluation on Inria building dataset_ #### Iii-G1 Quantitative Comparison As shown in Table I, we list the overall quantitative evaluation results of the different methods tested on the Inria building dataset. Compared with the SOTA method BuildFormer, it is clear that our UANet can outperform it by \(1.84\) percentage on \(IoU\), \(1.05\) percentage on \(F1\), \(1.39\) percentage on \(Precision\), and \(0.74\) percentage on \(Recall\). 
This significant improvement demonstrates the effectiveness of our approach of introducing uncertainty to optimize decoding strategies. #### Iii-G2 Visual Comparison As presented in Fig. 7, we select three typical examples to compare our UANet with the other SOTA methods. In the first image, we can see that the buildings in the red circle are covered by shadows cast by the buildings next to them, and the compared methods fail to extract the whole bodies of the buildings; even where they partially succeed, their results still show more defects than ours. In the second image, we can easily find that the buildings in the red circle are somewhat different from the other buildings around them, and HRNet, DSNet, and BuildFormer miss the real building regions while mistaking unrelated parts for buildings. By contrast, the result of our proposed UANet is very close to the ground truth. In the third image, the compared methods mistakenly detect the region in the red rectangle as a building, whereas our UANet succeeds. These three examples show that our UANet can make the right judgment in the face of complex environments.

Fig. 7: Visual Comparison on Inria building dataset.

## IV Ablation Study In order to explore the effectiveness of our proposed modules in UANet, we conduct extensive experiments on the three building datasets. We selected the general encoder-decoder network used in our UANet as the baseline, which utilizes VGG-16 as the encoder and uses a conventional decoding method to output an uncertain extraction map. Based on it, we verify the effectiveness of the Prior Information Guide Module (PIGM) and the Uncertainty-Aware Fusion Module (UAFM) in turn. In the following parts, we give a detailed analysis. ### _The effectiveness of PIGM_ Guided by the uncertain extraction map \(M_{5}\), we try to enhance the highest-level features via PIGM. Different from previous attention mechanisms, we introduce a cross-attention method, which helps the high-dimensional features learn the spatial and semantic relationships channel by channel. As shown in Table II, introducing the PIGM significantly improves the extraction accuracy. We conducted several experiments to verify the detailed effect of the two components of the PIGM. As shown in Table III, to verify the effectiveness of PIGM, we conducted four sets of experiments: 1) without learning any correlation, 2) just establishing the spatial correlation (SC), 3) just establishing the channel correlation (CC), and 4) establishing the spatial and channel correlation in series (SC + CC). It is clear that the two enhancement schemes each play their own role in the PIGM module. ### _The effectiveness of UAFM_ The proposed UAFM can reduce the uncertainty of \(G_{i}\) with the help of the foreground uncertainty rank map \(R^{i}_{f}\) and the background uncertainty rank map \(R^{i}_{b}\), and output a feature \(G_{i-1}\) with lower uncertainty. As shown in Table II, we can easily find that the UAFM brings a significant improvement in building extraction performance. We also conducted extensive experiments to explore in detail the accuracy improvement brought about by such an uncertainty-aware strategy.
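For reference in this ablation, the fusion step of Eqs. (11)-(13), which is exactly the component varied in the four cases below, can be sketched as follows. This is a simplified re-implementation under assumed channel sizes, not the released code; the rank maps \(R^{i}_{f}\) and \(R^{i}_{b}\) are computed as in Sec. II-C.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UAFMSketch(nn.Module):
    """Uncertainty-aware fusion of a high-level feature with the next lower one."""
    def __init__(self, hi_ch, lo_ch):
        super().__init__()
        self.squeeze_hi = nn.Conv2d(2 * hi_ch, hi_ch, 1)            # Eq. (11)
        self.squeeze_lo = nn.Conv2d(2 * lo_ch, lo_ch, 1)            # Eq. (12)
        self.fuse = nn.Conv2d(hi_ch + lo_ch, lo_ch, 3, padding=1)   # Eq. (13)
        self.head = nn.Conv2d(lo_ch, 1, 3, padding=1)

    def forward(self, g_hi, f_lo, r_f, r_b):
        # Reweight the high-level feature by the foreground/background ranks.
        g_u = self.squeeze_hi(torch.cat([r_f * g_hi, r_b * g_hi], dim=1))
        # Upsample the rank maps (nearest neighbour) and reweight the low level.
        size = f_lo.shape[-2:]
        r_f_up = F.interpolate(r_f, size=size, mode="nearest")
        r_b_up = F.interpolate(r_b, size=size, mode="nearest")
        f_u = self.squeeze_lo(torch.cat([r_f_up * f_lo, r_b_up * f_lo], dim=1))
        # Upsample, concatenate and fuse; predict the next (less uncertain) map.
        g_u = F.interpolate(g_u, size=size, mode="bilinear", align_corners=False)
        g_next = self.fuse(torch.cat([f_u, g_u], dim=1))
        return g_next, self.head(g_next)   # (G_{i-1}, M_{i-1})
```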
As presented in Table IV, We set up four feature interaction methods: \(Case1\): just concatenate the adjacent layers of features and introduce deep supervision in all levels; \(Case2\): just use the \(Sigmoid\) function to process the extraction map from former level and utilize it achieve the feature interaction; \(Case3\): just use the foreground uncertainty (\(R^{i}_{f}\)) to achieve the feature interaction; \(Case4\): follow our proposed uncertainty-aware strategy which utilizes both the foreground uncertainty (\(R^{i}_{f}\)) and the background uncertainty (\(R^{i}_{b}\)) to achieve the feature interaction. It is evident that the extraction accuracy is significantly improved with the guidance of the uncertainty maps of both the foreground and the background, which can intuitively reflect the huge advantage of our proposed strategy. At the same time, In order to verify that UAFM can output feature \(G_{i-1}\) and related prediction \(M_{i-1}\) with lower uncertainty, we visualize \(G_{i-1}\) and the uncertainty reflected in \(M_{i-1}\) of all levels. As exhibited in Fig.8, we can observe that in each level, the enhanced features \(G_{i-1}\) can achieve cleaner objects and related edges compared to that of \(G_{i}\), and the uncertainty is progressively reduced. Besides, Table. V can also illustrate the gradual enhancement of our high-to-low uncertain-aware strategy from quantitative evaluation. ### _The analysis of URA_ As the key algorithm in our UAFM, URA aims to rank the uncertainty level of all pixels in the extraction map. As mentioned in Section II-C, the principle of URA is to define a non-increasing linear function \(\Omega\) from \(U\) to \(R\). To simplify our design of \(\Omega\), we define the uncertainty of \(0-0.5\) into five levels. To verify the effectiveness of our designed URA, we visualize both \(R^{i}_{f}\) and \(R^{i}_{b}\) (\(\{i=1,2,3,4,5\}\)). As shown in Fig. 9, we find that the level of uncertainty is decreasing overall. We can conclude that assigning different weights to each level of uncertainty can address the uncertainty problem to some extent. We also find that, the weight of pixels with \begin{table} \begin{tabular}{c|c c c c|c c c c c|c c c c} \hline \multirow{2}{*}{Baseline} & \multicolumn{4}{c|}{WHU (\%)} & \multicolumn{4}{c|}{Massachusetts (\%)} & \multicolumn{4}{c}{Inira (\%)} \\ \cline{2-13} & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) \\ \hline \(M_{5}\) & 87.35 & 92.90 & 92.25 & 93.56 & 69.73 & 82.17 & 85.41 & 79.16 & 79.08 & 88.32 & 87.77 & 88.88 \\ \(M_{4}\) & 89.93 & 94.70 & 95.16 & 94.24 & 74.52 & 85.40 & 86.46 & 84.37 & 82.09 & 90.16 & 91.45 & 88.91 \\ \(M_{3}\) & 91.25 & 95.43 & 95.94 & 94.92 & 75.90 & 86.30 & 87.17 & 85.45 & 82.83 & 90.61 & 91.95 & 89.31 \\ \(M_{2}\) & 91.68 & 95.66 & **96.16** & 95.17 & 76.10 & 86.43 & 87.41 & **85.47** & 83.05 & 90.74 & 92.02 & 89.50 \\ \(M_{1}\) & **92.15** & **95.91** & 95.96 & **95.86** & **76.41** & **86.63** & **87.94** & 85.35 & **83.08** & **90.76** & **92.04** & **89.52** \\ \hline \end{tabular} \end{table} TABLE II: Ablation results on the test dataset. 
\begin{table} \begin{tabular}{c|c c c c|c c c c|c c c c c} \hline \multirow{2}{*}{Baseline} & \multicolumn{4}{c|}{WHU (\%)} & \multicolumn{4}{c|}{Massachusetts (\%)} & \multicolumn{4}{c}{Inira (\%)} \\ \cline{2-13} & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) \\ \hline \(Case1\) & 89.08 & 94.23 & 93.92 & 94.53 & 72.13 & 83.81 & 86.56 & 81.24 & 79.73 & 88.73 & 89.17 & 88.28 \\ \(Case2\) & 91.39 & 95.43 & 95.47 & 95.40 & 75.21 & 85.96 & 87.81 & 84.18 & 80.98 & 89.03 & 90.62 & 87.50 \\ \(Case3\) & 91.61 & 95.57 & 95.45 & 95.70 & 75.87 & 86.28 & 87.93 & 84.69 & 82.34 & 90.31 & 91.49 & 89.16 \\ \(Case4\) & **92.15** & **95.91** & **95.96** & **95.86** & **76.41** & **86.63** & **87.94** & **85.35** & **83.08** & **90.76** & **92.04** & **89.52** \\ \hline \end{tabular} \end{table} TABLE IV: The ablation results about UAFM on the test dataset. \begin{table} \begin{tabular}{c c c c|c c c c|c c c c|c c c c} \hline \multirow{2}{*}{Baseline} & \multirow{2}{*}{PIGM} & \multirow{2}{*}{UAFM} & \multicolumn{4}{c|}{WHU (\%)} & \multicolumn{4}{c|}{Massachusetts (\%)} & \multicolumn{4}{c}{Inira (\%)} \\ \cline{3-14} & & & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) \\ \hline ✓ & & & & 87.35 & 92.90 & 92.25 & 93.56 & 69.73 & 82.17 & 85.41 & 79.16 & 79.08 & 88.32 & 87.77 & 88.88 \\ ✓ & & ✓ & & 91.25 & 95.43 & 95.73 & 95.13 & 74.84 & 85.61 & 87.56 & 83.75 & 81.84 & 90.01 & 90.43 & 89.61 \\ ✓ & ✓ & ✓ & & **92.15** & **95.91** & **95.96** & **95.86** & **76.41** & **86.63** & **87.94** & **85.35** & **83.08** & **90.76** & **92.04** & **89.52** \\ \hline \end{tabular} \end{table} TABLE II: Ablation results on the test dataset. \begin{table} \begin{tabular}{c c c c c|c c c c c|c c c c c} \hline \multirow{2}{*}{Baseline} & \multicolumn{4}{c|}{WHU (\%)} & \multicolumn{4}{c|}{Massachusetts (\%)} & \multicolumn{4}{c}{Inira (\%)} \\ \cline{3-14} & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) \\ \hline \(Case1\) & 89.08 & 94.23 & 93.92 & 94.53 & 72.13 & 83.81 & 86.56 & 81.24 & 79.73 & 88.73 & 89.17 & 88.28 \\ \(Case2\) & 91.39 & 95.43 & 95.47 & 95.40 & 75.21 & 85.96 & 87.81 & 84.18 & 80.98 & 89.03 & 90.62 & 87.50 \\ \(Case3\) & 91.61 & 95.57 & 95.45 & 95.70 & 75.87 & 86.28 & 87.93 & 84.69 & 82.34 & 90.31 & 91.49 & 89.16 \\ \(Case4\) & **92.15** & **95.91** & **95.96** & **95.86** & **76 high uncertainty needs to be significantly higher than that of pixels with low uncertainty. ### _The analysis of different encoders_ As mentioned before, our UANet can be also used for other kinds of encoder-decoder building extraction models to improve the certainty prediction. And we select ResNet-50 [14], Res2Net-50 [16], VGG-16 [13] and PVT-V2-B2 [18] as encoder-decoder backbones, to testify the efficacy of our UANet. As illustrated in TableVI, we can easily find that our UANet can achieve excellent results on different encoders, especially in the case of transformer based architecture PVT-V2-B2. 
However, since most previous models utilize the VGG-16 as the backbone, we also choose the same setting for a fair comparison. ### _The comparison with other uncertainty strategies_ In our proposed UANet, we rank the uncertainty-level from both the foreground and the background perspectives to reduce the uncertainty of features level by level. To verify the superiority over other uncertainty strategies, we compared our method with the uncertainty strategies used in other vision tasks. In detail, on the one hand, we adopted the settings in [54] and added a confidence estimation network to our VGG-16 based general encoder-decoder structure to formalise the uncertainty as probability distribution over model output and the input image. On the other hand, we followed the setting in [55] and introduced the Conditional Variational Autoencoder (CAVE) to measure the uncertainty of input data, \begin{table} \begin{tabular}{c|c c c c} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{Inira (\%)} \\ \cline{2-5} & \(IoU\uparrow\) & \(F1\uparrow\) & \(Pre\uparrow\) & \(Recall\uparrow\) \\ \hline ResNet-50 & 82.17 & 90.21 & 91.24 & 89.20 \\ Res2Net-50 & 83.17 & 90.81 & 91.89 & 89.76 \\ PVT-V2-B2 & **83.34** & **90.91** & 91.86 & **89.97** \\ VGG-16 & 83.08 & 90.76 & **92.04** & 89.52 \\ \hline \end{tabular} \end{table} TABLE VI: Ablation results about different encoders on the test dataset. Fig. 8: Visual examples of building extraction. The first row represents the visualizations of \(G_{i}\), and the second row represents the uncertainty visualization of \(M_{i}\). Fig. 9: Visual examples of building extraction. The first row represents the visualizations of \(R^{i}_{f}\), and the second row represents the uncertainty visualization of \(R^{i}_{b}\). which was followed by being input to our VGG-16 based general encoder-decoder structure with the input image. As illustrated in TableVII, we can clearly see the superiority of our uncertainty strategy. We believe that other uncertainty strategies do not take into account the unique characteristics of the distribution of ground objects in RS images (dense, small targets) and appear to be inapplicable. Relatively speaking, we believe that the uncertainty in RS images is usually caused by insufficient understanding of hard-to-segment buildings with less frequency in the process of feature interaction, and our uncertainty-aware strategy can solve such a problem perfectly. ### _Complexity of UANet_ In order to validate the efficiency of the proposed UANet, we compared the amount of the parameters and the IoU on the Inria building dataset with the current SOTA methods. As shown in Fig.10, our UANet achieves the highest accuracy with the total parameter of 15.6 M, which is the lowest. ## V Conclusion In this paper, we argue that the complex distribution of the ground objects, inconsistent building scales, and various building styles bring some uncertainty to the predictions of the general deep learning models, causing the omission and the commission to a large extent. Therefore, we introduce the concept of uncertainty and propose a novel uncertainty-aware network (UANet). Firstly, we utilize a general encoder-decoder network to yield a general uncertain extraction map. Secondly, we propose the PIGM to enhance the highest-level features. Subsequently, the UAFM is proposed with the uncertainty rank algorithm (URA) to eliminate the uncertainty of features from high level to low level. 
Finally, the proposed UANet outputs the final extraction map with lower uncertainty. Extensive experiments validate the effectiveness of our UANet. The high accuracy achieved on three public datasets indicates that introducing the uncertainty concept into building extraction is highly effective. However, although ranking the level of uncertainty in this way helps to obtain a better extraction result, how to adaptively allocate the weights of the different uncertainty levels in the URA remains an open problem, which will be a focus of our future work.
2304.13547
An effective semilocal model for wave turbulence in 2D nonlinear optics
The statistical evolution of ensembles of random, weakly-interacting waves is governed by wave kinetic equations. To simplify the analysis, one frequently works with reduced differential models of the wave kinetics. However, the conditions for deriving such reduced models are seldom justified self-consistently. Here, we derive a reduced model for the wave kinetics of the Schr{\"o}dinger-Helmholtz equations in two spatial dimensions, which constitute a model for the dynamics of light in a spatially-nonlocal, nonlinear optical medium. This model has the property of sharply localising the frequencies of the interacting waves into two pairs, allowing for a rigorous and self-consistent derivation of what we term the semilocal approximation model (SLAM) of the wave kinetic equation. Using the SLAM, we study the stationary spectra of Schr{\"o}dinger-Helmholtz wave turbulence, and characterise the spectra that carry energy downscale, and waveaction upscale, in a forced-dissipated setup. The latter involves a nonlocal transfer of waveaction, in which waves at the forcing scale mediate the interactions of waves at every larger scale. This is in contrast to the energy cascade, which involves local scale-by-scale interactions, familiar from other wave turbulent systems and from classical hydrodynamical turbulence.
Jonathan Skipp, Jason Laurie, Sergey Nazarenko
2023-03-04T01:04:08Z
http://arxiv.org/abs/2304.13547v2
# An effective semilocal model for wave turbulence in 2D nonlinear optics ###### Abstract The statistical evolution of ensembles of random, weakly-interacting waves is governed by wave kinetic equations. To simplify the analysis, one frequently works with reduced differential models of the wave kinetics. However, the conditions for deriving such reduced models are seldom justified self-consistently. Here, we derive a reduced model for the wave kinetics of the Schrodinger-Helmholtz equations in two spatial dimensions, which constitute a model for the dynamics of light in a spatially-nonlocal, nonlinear optical medium. This model has the property of sharply localising the frequencies of the interacting waves into two pairs, allowing for a rigorous and self-consistent derivation of what we term the semilocal approximation model (SLAM) of the wave kinetic equation. Using the SLAM, we study the stationary spectra of Schrodinger-Helmholtz wave turbulence, and characterise the spectra that carry energy downscale, and wavaction upscale, in a forced-dissipated setup. The latter involves a nonlocal transfer of waveaction, in which waves at the forcing scale mediate the interactions of waves at every larger scale. This is in contrast to the energy cascade, which involves local scale-by-scale interactions, familiar from other wave turbulent systems and from classical hydrodynamical turbulence. ## I Introduction Wave turbulence is the statistical theory of large ensembles of random, weakly nonlinear, dispersive waves [1; 2]. Accordingly, when developing the wave turbulence description of a physical system, one is most frequently concerned with the mean square of the wave intensity: the waveaction spectrum. The equation of motion for the spectrum is known as the wave kinetic equation (WKE), and describes the irreversible evolution of the spectrum over nonlinear timescales (which are long compared to the linear wave period), due to resonant \(M\)-wave interactions. For a system in \(d\) spatial dimensions, the WKE involves an integration over \(\mathbb{R}^{(M-1)d}\), constrained to the resonant manifold of interacting waves. The complexity of this so-called collision integral makes solving the WKE a challenging task in general. Nonetheless, certain analytic techniques exist, in particular the Zakharov-Kraichnan transform that allows one to find the Kolmogorov-Zakharov (KZ) cascade spectra [3]. These are stationary solutions of the WKE on which, for many systems, the dynamical invariants cascade with constant flux through spatial scales, via a self-similar, spectrally local (scale-by-scale) transfer, analogous to the Kolmogorov energy spectrum in classical hydrodynamics. The stationary spectrum of thermodynamic equilibrium--the Rayleigh-Jeans (RJ) spectrum--can also be derived trivially as the spectrum on which the collision integral has an integrand that vanishes pointwise. In a seminal paper, Dyachenko et. al. [4] demonstrated that the collision integral can be greatly simplified if one makes the _ad-hoc_ assumption that the wave interaction coefficient is sharply peaked, so that all \(M\) waves taking part in interactions have approximately the same frequency. This assumption, which we will refer to as superlocality, allows one to reduce the collision integral to a differential operator. 
The resulting equation--the differential approximation model (DAM)--preserves a great deal of the structure of the original WKE, namely its conserved quantities, the degree of nonlinearity with respect to the spectrum, its scaling with frequency, and, as a result of the latter, the stationary RJ and KZ solutions. DAMs are the wave turbulence equivalent of the Leith model of classical hydrodynamics [5; 6]. Being differential equations, DAMs are much easier to work with than the collision integrals from which they are derived. They have been used in a wide variety of physical systems to examine topics such as the stationary RJ and KZ wave turbulence spectra [7; 8; 9; 10; 11; 12], thermalisation at the end of a cascade spectrum [13], the crossover from strong to weak turbulence [14], and the nature of transient spectra before the KZ spectra are established, including the anomalous scaling of spectral fronts [15; 16; 17; 18]. The reduction of a WKE to a DAM is predicated on the assumption of superlocality. However, this assumption is rarely justified in the cases where DAMs are applied, indeed Dyachenko et. al. [4] introduced the DAM in the context of the cubic nonlinear Schrodinger equation, whose interaction coefficient is a constant across Fourier space. Furthermore, DAMs are often constructed heuristically, based on the scaling properties of the interaction coefficient, with the desired stationary solutions and degree of nonlinearity built in. To our knowledge, there has been no rigorous derivation of the DAM for any system whose interaction coefficient has the required properties to justify any locality assumption. In this paper we derive such a reduced model for wave kinetics of the Schrodinger-Helmholtz equations (SHE). We introduce the SHE in Sec. II, along with their physical context, the dynamical invariants that they conserve (namely energy and wavaction), their WKE, and the directions in Fourier space that their invariants flow during the wave kinetic evolution. The SHE are of interest to us because they comprise the first system studied in the wave turbulence context in which the spectral locality of interactions arises naturally from the functional form of the interaction coefficient. In fact, the locality manifested by the SHE is one in which distinct pairs of interacting waves are localised in frequency space. We refer to the latter as a semilocal, as opposed to a superlocal, limit. We exploit this property in Sec. III to reduce the kinetic equation of the SHE to a simpler model, in the same spirit as the derivation of the DAM in Ref. [4]. It transpires that the semilocality property allows the collision integral to be reduced to an integro-differential operator, rather than a purely differential one. The resulting reduction of the WKE we term the semilocal approximation model (SLAM). Analysis of the SLAM allows us to extract the stationary spectra of the WKE, including prospective candidates for the KZ cascade spectra, in Sec. IV. However, we demonstrate that the KZ waveaction cascade spectrum is pathological, as it leads to a divergence of the SLAM at high frequency. Furthermore, in Sec. V we show how the KZ spectra lead to flux directions that are inconsistent with the more general argument we present in Sec. II.2.2, requiring us to reconsider the spectra that establish the turbulent transport of dynamical invariants across scales of the system. We proceed to find the true waveaction flux spectrum in Sec. 
VI, and conclude that both stationary solutions that describe the flux of energy on the one hand, and waveaction on the other, are very closely related to the RJ equilibrium spectrum. The wavaction flux spectrum, which carries waveaction to large scales, is dominated by nonlocal interactions, with waves at the forcing scale mediating wave interactions at all larger scales. By contrast, the energy flux spectrum, which carries energy to small scales, has local scale-by-scale interactions. We start by introducing the SHE in the next section. ## II Schrodinger-Helmholtz equation The SHE consist of a nonlinear Schrodinger equation for the dynamical variable \(\psi(\mathbf{x},t)\in\mathbb{C}\), \[i\frac{\partial\psi}{\partial t}+\nabla^{2}\psi-V(\psi)\psi=0,\] (1a) coupled, via the potential \[V(\psi)\in\mathbb{R}\], to the Helmholtz equation, \[\nabla^{2}V-\Lambda V=\gamma|\psi|^{2}. \tag{1b}\] They are thus a spatially nonlocal1 extension of the familiar cubic nonlinear Schrodinger equation (NLS, also known as the Gross-Pitaevskii equation), Footnote 1: The spatial nonlocality originates from the inversion of the Helmholtz operator in Eq. (1b). The term “local” is used here in a different sense to the locality of interactions in frequency space, to which the semilocal approximation refers. \[i\frac{\partial\psi}{\partial t}+\nabla^{2}\psi\pm|\psi|^{2}\psi=0. \tag{2}\] The NLS is obtained from Eqs. (1) by sending \(\gamma,\Lambda\to\infty\) in such a way that \(\gamma/\Lambda\) remains constant, and renormalising \(\psi\). The physical applications of the SHE were discussed in Ref. [12]; in brief, for \(d=3\) they describe so-called Fuzzy Dark Matter [19; 20; 21] in a universe with cosmological constant \(\Lambda\). In \(d=2\), Eqs. (1) describe the perpendicular dynamics of laser light in a thermo-optic or elasto-optic nonlinear medium [22; 23; 24]. In the optics case, \(\Lambda\) is the normalised Kerr coefficient of spatially-local interactions. The dynamical variable \(\psi(\mathbf{x},t)\) represents, respectively, the wavefunction of the putative dark matter boson, or the envelope of the electric field. Here we restrict ourselves to the \(d=2\) case. Also closely associated with the SHE are the Schrodinger-Newton equations (SNE) [25; 26; 27], \[i\frac{\partial\psi}{\partial t}+\nabla^{2}\psi-V(\psi)\psi=0, \tag{3a}\] \[\nabla^{2}V=\gamma|\psi|^{2}, \tag{3b}\] which are obtained by formally setting \(\Lambda=0\) in the SHE. However, as discussed in Ref. [12], the SNE are ill-posed in periodic settings, or when one wants to describe fluctuations over an infinite, static background, because that background does not solve the Poisson equation (3b), c.f. the "Jeans swindle" [28]. Non-trivial dynamics are recovered by introducing a spatially local term to the left-hand side of Eq. (3b), i.e. moving to the SHE [29]. This fact is reflected in the divergence of the SLAM when we set \(\Lambda=0\), see Sec. VII.1. ### Hamiltonian and invariants of the SHE Most commonly in wave turbulence, the equation of motion under study can be derived via Hamilton's equation \(i\partial_{t}\psi=\delta H/\delta\psi^{*}\). Equations (1) are no exception, with the Hamiltonian functional being \[H=\int|\nabla\psi|^{2}\,d\mathbf{x}+\int\frac{\gamma}{2}\left[(\nabla^{2}- \Lambda)^{-1/2}|\psi|^{2}\right]^{2}\,d\mathbf{x}. \tag{4}\] The first term on the right-hand side of Eq. (4) is the kinetic energy of the system. 
The second term is the energy of waves interacting via the spatially-nonlocal nonlinearity \(V(\psi)=(\nabla^{2}-\Lambda)^{-1}|\psi|^{2}\) that solves Eq. (1b). The operator \((\nabla^{2}-\Lambda)^{-q}\), with \(q\) rational and positive, is to be understood as a formal power series, and is made concrete in its Fourier-space representation (the latter is found in Ref. [12]). The Hamiltonian \(H\) is conserved under the evolution via the SHE, and is strictly positive. The other positive invariant is the waveaction (a.k.a. number of particles, in reference to the application to Bosonic systems), \[N=\int|\psi|^{2}\,d\mathbf{x}. \tag{5}\] The momentum \(\mathbf{P}=2i\int(\psi\nabla\psi^{*}-\psi^{*}\nabla\psi)\,d\mathbf{x}\) is yet another conserved quantity. However, not being sign-definite, it plays no role in the argument regarding the invariant cascade directions, see Sec. II.2.2. The momentum will not feature in the work we carry out in this paper. ### Wave kinetic equation We are concerned with the waveaction spectrum \(n_{\mathbf{k}}(t)=(L/2\pi)^{d}\langle|\psi_{\mathbf{k}}(t)|^{2}\rangle\). Here, \(L\) is the size of the physical domain in \(d\) spatial dimensions, and \(\psi_{\mathbf{k}}(t)=(1/L)^{d}\int\psi(\mathbf{x},t)\exp(-i\mathbf{k}\cdot\mathbf{x})d\mathbf{x}\) is the Fourier series coefficient of \(\psi(\mathbf{x},t)\) for the wave mode with wavevector \(\mathbf{k}\). The averaging operator \(\langle\cdot\rangle\) denotes an ensemble average over initial conditions \(\psi_{\mathbf{k}}(0)\) that have independent and uniformly distributed phases, and independent and identically distributed amplitudes. Taking the domain size \(L\to\infty\) and then assuming weak nonlinearity, one can derive the following WKE describing the evolution of the spectrum at intermediate times due to the nonlinear 4-wave interactions of the \(2\leftrightarrow 2\) type [2]: \[\frac{\partial n_{\mathbf{k}}}{\partial t}=4\pi\int|W^{12}_{3\mathbf{k}}|^{2}\delta^{12}_{3\mathbf{k}}\delta(\omega^{12}_{3\mathbf{k}})n_{1}n_{2}n_{3}n_{\mathbf{k}}\left[\frac{1}{n_{\mathbf{k}}}+\frac{1}{n_{3}}-\frac{1}{n_{1}}-\frac{1}{n_{2}}\right]d\mathbf{k}_{1}\,d\mathbf{k}_{2}\,d\mathbf{k}_{3}. \tag{6}\] The right-hand side of Eq. (6) is the collision integral, and is taken across the joint \(\mathbf{k}\)-space \(\mathbb{R}^{2}\times\mathbb{R}^{2}\times\mathbb{R}^{2}\). Here, \(\delta^{12}_{3\mathbf{k}}\coloneqq\delta(\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{k}_{3}-\mathbf{k})\) and \(\delta(\omega^{12}_{3\mathbf{k}})\) are Dirac delta functions that constrain interacting wave quartets to the resonant manifold defined by \[\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{k}_{3}-\mathbf{k} =0, \tag{7a}\] \[\omega^{12}_{3\mathbf{k}}\coloneqq\omega_{1}+\omega_{2}-\omega_{3}-\omega_{\mathbf{k}} =0. \tag{7b}\] Here \(\omega_{\mathbf{k}}=k^{2}\) is the linear dispersion relation, and \(k=|\mathbf{k}|\). We have also used the shorthand notation \(n_{i}=n_{\mathbf{k}_{i}},\omega_{i}=\omega_{\mathbf{k}_{i}}\) etc. for \(i=1,2,3\). The interaction coefficient for the SHE is \[W^{12}_{3\mathbf{k}}=-\frac{\gamma}{2}\left[\frac{1}{|\mathbf{k}_{1}-\mathbf{k}|^{2}+\Lambda}+\frac{1}{|\mathbf{k}_{2}-\mathbf{k}|^{2}+\Lambda}\right], \tag{8}\] where the functional dependence on the wavevectors is indicated in the super- and subscript indices. Using the wavevector resonance condition (7a), we establish the symmetries \(W^{12}_{3\mathbf{k}}=W^{21}_{3\mathbf{k}}=W^{12}_{\mathbf{k}3}=\left(W^{3\mathbf{k}}_{12}\right)^{*}\). 
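To make the structure of \(W^{12}_{3\mathbf{k}}\) concrete, the short numerical sketch below (our own illustration, not part of the original text; the values of \(\gamma\) and \(\Lambda\) are arbitrary) checks two of these symmetries on quartets satisfying the wavevector condition (7a), and probes how sharply the coefficient varies as \(\mathbf{k}_{1}\) approaches \(\mathbf{k}\); this behaviour is discussed in detail in what follows.

```python
# Numerical illustration of the SHE interaction coefficient, Eq. (8).
# The values of gamma and Lam are arbitrary test choices (assumption).
import numpy as np

gamma, Lam = 1.0, 1e-2

def W(k1, k2, k3, k):
    """Interaction coefficient W^{12}_{3k} of Eq. (8)."""
    return -0.5 * gamma * (1.0 / (np.sum((k1 - k)**2) + Lam)
                           + 1.0 / (np.sum((k2 - k)**2) + Lam))

rng = np.random.default_rng(0)
k, k1, k2 = rng.standard_normal((3, 2))
k3 = k1 + k2 - k                     # enforce the wavevector condition (7a)

# Two of the symmetries quoted above: 1 <-> 2 exchange, and 3 <-> k exchange
# (the latter relies on the resonance condition (7a)).
assert np.isclose(W(k1, k2, k3, k), W(k2, k1, k3, k))
assert np.isclose(W(k1, k2, k3, k), W(k1, k2, k, k3))

# Sharpness of the coefficient as k1 -> k (and hence k2 -> k3): the first term
# grows like 1/(|k1 - k|^2 + Lam), saturating near 1/Lam once |k1 - k|^2 << Lam.
for eps in (1.0, 1e-1, 1e-2, 1e-3):
    p1 = eps * np.array([1.0, 0.0])
    print(eps, abs(W(k + p1, k2, k2 + p1, k)))
```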
We see manifestly that the interaction coefficient decays rapidly when all wavevectors are very different. The first term in \(W^{12}_{3\mathbf{k}}\) becomes dominant when \(\mathbf{k}_{1}\to\mathbf{k}\). By Eq. (7a) we then have \(\mathbf{k}_{2}\to\mathbf{k}_{3}\). If we also have \(\Lambda\ll k^{2}\), then the interaction coefficient becomes sharply peaked in the joint \(\mathbf{k}\)-space where \(\mathbf{k}_{1}\approx\mathbf{k}\) and \(\mathbf{k}_{3}\approx\mathbf{k}_{2}\), with the latter following from the above symmetries. Likewise, if the second term in \(W^{12}_{3\mathbf{k}}\) is dominant then it becomes peaked over \(\mathbf{k}_{1}\approx\mathbf{k}_{3}\) and \(\mathbf{k}_{2}\approx\mathbf{k}\). These pairings are equivalent to the first pairings by exchange of dummy variables, as \(W^{12}_{3\mathbf{k}}\) always appears under an integral. Thus, the 4-wave interactions responsible for evolution of the system under the SHE (1), and therefore the corresponding WKE (6), are dominated by interactions in which \(\mathbf{k}_{1}\approx\mathbf{k}\) and \(\mathbf{k}_{3}\approx\mathbf{k}_{2}\). This property of the interaction coefficient, of picking out dominant interactions when pairs of wavevectors become equal, we refer to as semilocality. It is this property that will allow the collision integral to be reduced to a simpler operator. We will retain the possibility that \(\mathbf{k}_{1}\not\approx\mathbf{k}_{2}\), so the reduction will be to an integro-differential, rather than a purely differential, operator. At this point we note that taking the NLS limit \(\gamma,\Lambda\to\infty\) with \(\gamma/\Lambda\to\mathrm{const.}\) sends \(W^{12}_{3\mathbf{k}}\to\pm 1\). In this case there is no natural pairing of \((\mathbf{k}_{1},\mathbf{k})\) and \((\mathbf{k}_{2},\mathbf{k})\), and we lose the semilocality property. We return to this point in Sec. VII.1. #### ii.1.1 Invariants of the kinetic equation In general, WKEs of the \(M/2\leftrightarrow M/2\) type (\(M\) being an even integer denoting the order of the resonant wave interaction), such as Eq. (6), conserve the two quadratic invariants \[E=\int\omega_{\mathbf{k}}n_{\mathbf{k}}\,d\mathbf{k}\quad\mathrm{and}\quad N=\int n_{\mathbf{k}}\,d\mathbf{k}. \tag{9}\] Comparing with Eq. (5) we see that \(N\) is the total waveaction, expressed in Eq. (9) as an integral over Fourier space. Both the original SHE and the WKE derived from them conserve the waveaction exactly. By contrast, \(E\) is the Fourier-space representation of the kinetic energy (first term on the right-hand side of Eq. (4)). Recall that the WKE is derived under the assumption of weak nonlinearity. Under this condition, \(E\) will be the leading contribution to the total Hamiltonian \(H\), i.e. the original equations of motion conserve \(H\) exactly, while their WKE conserves \(H\) asymptotically, and \(E\) exactly. Here, we are interested in the quantities that are conserved by the WKE and its reduced model, the SLAM. Therefore, we will simply refer to \(E\) as the energy hereafter. The interaction coefficient \(W^{12}_{3\mathbf{k}}\) is unchanged under global rotations. We further assume that when the system is forced and dissipated, this is done in a spatially homogeneous and isotropic manner. Therefore, we expect that the spectra \(n_{\mathbf{k}}\) will be isotropic, depending only on \(|\mathbf{k}|\), or equivalently on frequency. 
Accordingly, we can consider the spectrum as a function of either \(\mathbf{k}\), or frequency \(\omega\) at that value of \(\mathbf{k}\), via the dispersion relation \(\omega=k^{2}\). Namely, we adopt the notation \(n_{\omega_{i}}\coloneqq n(\mathbf{k}_{i}(\omega_{i}))=n_{\mathbf{k}_{i}}\eqqcolon n_{i}\). Converting the \(\mathbf{k}\)-space integrals in Eqs. (9) into integrals over \(\omega\), the invariants of the WKE become, for 2D isotropic spectra, \[E=\pi\int\omega n_{\omega}\,d\omega\quad\mathrm{and}\quad N=\pi\int n_{\omega}\,d\omega. \tag{10}\] #### ii.1.2 Flux directions - the Fjortoft argument In a closed system, i.e. when there is no forcing or dissipation, the WKE redistributes \(E\) and \(N\) across \(\mathbf{k}\)-space while keeping their total values constant. The alternative is an open system, in which \(E\) and \(N\) enter the system at some forcing scale, and the WKE redistributes them across \(\mathbf{k}\)-space, until they reach some dissipation scale where they are removed. The way in which the WKE redistributes the invariants is predicted by the argument of Fjortoft [30]. This argument is recapitulated in many places in the wave turbulence literature; see, for example, Refs. [11; 12] for its application to the forced-dissipated SHE, and Ref. [2] for a version of the argument in closed systems. The conclusion of the argument is that the presence of each invariant constrains how the \(\mathbf{k}\)-space distribution of the other invariant can evolve, so that the bulk of each invariant moves to the sector of \(\mathbf{k}\)-space where its spectral density dominates. For the SHE this means that the majority of the energy \(E\), which has a spectral density of \(\omega=k^{2}\), moves towards high \(k\), whereas most of the waveaction \(N\), having a spectral density of 1, moves towards low \(k\). Here we focus on the isotropic case, which allows us to elide from \(\mathbf{k}\)-space to \(\omega\)-space via the dispersion relation, and speak of scales when referring to frequencies. Furthermore, we concentrate on the forced-dissipated setup, in which \(E\) and \(N\) are injected at some intermediate forcing scale \(\omega_{f}\), and dissipated at both high frequency \(\omega_{d+}\) and low frequency \(\omega_{d-}\), with a wide scale separation between these: \(\omega_{d-}\ll\omega_{f}\ll\omega_{d+}\). We take the rate of forcing to match the rate of dissipation, so that the system is in a non-equilibrium, stationary state. In these circumstances, the Fjortoft argument predicts that most of the energy injected at \(\omega_{f}\) will cascade with constant energy flux \(P\) through the direct inertial range (scales \(\omega\) such that \(\omega_{f}\ll\omega\ll\omega_{d+}\)), to be dissipated at small scales around \(\omega_{d+}\). Likewise, most of the waveaction injected at \(\omega_{f}\) will cascade with constant waveaction flux \(Q\) through the inverse inertial range (\(\omega_{d-}\ll\omega\ll\omega_{f}\)), until it is dissipated at large scales near \(\omega_{d-}\). This scenario is known as the dual cascade. The phenomenology of a dual cascade is common to all wave turbulence systems with two quadratic invariants, and also 2D hydrodynamic turbulence [2]. The Fjortoft argument in its open-system form is premised only on having positive-definite integral invariants, which are quadratic in wave amplitude, but which have different spectral densities, and on having widely-separated forcing and dissipation scales. 
Having such parsimonious assumptions, the predictions of the Fjortoft argument are robust, and must be recovered by any subtler manipulation of the WKE. More concretely, once we derive the SLAM, we can look for its stationary solutions that realise the dual cascade, but these solutions must have fluxes \(P>0\) and \(Q<0\) in their respective inertial ranges, to correspond to the predictions of the Fjortoft argument. On the other hand, the argument makes no assumptions about the character of the solutions, particularly the locality of interactions in \(\omega\)-space. Each cascade could either be realised by waves interacting locally, so that the invariant is transferred scale-by-scale through its inertial range, or by nonlocal interactions, involving waves at the forcing scale participating in every quartet of interacting waves. We will see that for the SLAM, the energy cascade is local whereas the waveaction cascade is nonlocal. ## III Derivation of the semilocal approximation model To derive the SLAM, we follow the initial strategy set out in Ref. [4] for deriving the DAM. First, we multiply Eq. (6) by an arbitrary test function \(\varphi_{\mathbf{k}}=\varphi(\mathbf{k})\), integrate with respect to \(\mathbf{k}\), and use the resulting symmetries of the integrand to split it into four pieces: \[\int\varphi_{\mathbf{k}}\frac{\partial n_{\mathbf{k}}}{\partial t }\,d\mathbf{k} =4\pi\int\varphi_{\mathbf{k}}|W^{12}_{3\mathbf{k}}|^{2}\delta^{12 }_{3\mathbf{k}}\delta(\omega^{12}_{3\mathbf{k}})n_{1}n_{2}n_{3}\mathbf{n}_{ \mathbf{k}}\left[\frac{1}{n_{\mathbf{k}}}+\frac{1}{n_{3}}-\frac{1}{n_{1}}- \frac{1}{n_{2}}\right]d\mathbf{k}_{1}\,d\mathbf{k}_{2}\,d\mathbf{k}_{3}\,d \mathbf{k}\] \[=\pi\int\left[\varphi_{\mathbf{k}}+\varphi_{3}-\varphi_{1}- \varphi_{2}\right]|W^{12}_{3\mathbf{k}}|^{2}\delta^{12}_{3\mathbf{k}}\delta( \omega^{12}_{3\mathbf{k}})n_{1}n_{2}n_{3}\mathbf{n}_{\mathbf{k}}\left[\frac{1 }{n_{\mathbf{k}}}+\frac{1}{n_{3}}-\frac{1}{n_{1}}-\frac{1}{n_{2}}\right]d \mathbf{k}_{1}\,d\mathbf{k}_{2}\,d\mathbf{k}_{3}\,d\mathbf{k}, \tag{11}\] Next we assume that the spectra \(n_{\mathbf{k}}\) and test functions \(\varphi_{\mathbf{k}}\) are isotropic, and consider both as functions of frequency (see Sec. II.2.1). At this point, following the discussion after Eq. (8), we make the semilocality assumption that \(\mathbf{k}_{1}\approx\mathbf{k}\), and hence \(\mathbf{k}_{3}\approx\mathbf{k}_{2}\), but retain the possibility of \(\mathbf{k}_{1}\) and \(\mathbf{k}_{2}\) being distinct. This is in contrast to the procedure of Dyachenko et al. [4], who assume that _all_ interactions are superlocal in frequency space. Taylor expanding the terms in square brackets in Eq. (III) up to first order in frequency, and using Eq. (7b), we have \[\left[\frac{1}{n_{\mathbf{k}}}+\frac{1}{n_{3}}-\frac{1}{n_{1}}- \frac{1}{n_{2}}\right] \approx\partial_{\omega}n_{\omega}{}^{-1}(\omega-\omega_{1})- \partial_{\omega_{2}}n_{\omega_{2}}^{-1}(\omega_{2}-\omega_{3})\] \[=\left(\partial_{\omega}n_{\omega}^{-1}-\partial_{\omega_{2}}n_{ \omega_{2}}^{-1}\right)(\omega-\omega_{1})\] \[=\left(\partial_{\omega}n_{\omega}^{-1}-\partial_{\omega_{2}}n_{ \omega_{2}}^{-1}\right)(\mathbf{k}-\mathbf{k}_{1})\cdot(\mathbf{k}+\mathbf{k }_{1})\] \[\approx\left(\partial_{\omega}n_{\omega}^{-1}-\partial_{\omega_{2} }n_{\omega_{2}}^{-1}\right)(-\mathbf{p}_{1})\cdot 2\mathbf{k},\] where \(\mathbf{p}_{i}\coloneqq\mathbf{k}_{i}-\mathbf{k}\). 
Similarly, \[\left[\varphi_{\mathbf{k}}+\varphi_{3}-\varphi_{1}-\varphi_{2}\right] \approx\left(\partial_{\omega}\varphi_{\omega}-\partial_{\omega_{2}}\varphi_{ \omega_{2}}\right)(-\mathbf{p}_{1})\cdot 2\mathbf{k}.\] Therefore Eq. (III) simplifies to \[\int\varphi_{\mathbf{k}}\frac{\partial n_{\mathbf{k}}}{\partial t }\,d\mathbf{k}=\pi\int\frac{\gamma^{2}}{\left(\Lambda+p_{1}^{2}\right)^{2}} \delta(-2\mathbf{p}_{1}\cdot\mathbf{p}_{2})n_{2}^{2}n_{k}^{2}(\mathbf{p}_{1} \cdot 2\mathbf{k})^{2}\left(\partial_{\omega}n_{\omega}^{-1}-\partial_{\omega _{2}}n_{\omega_{2}}^{-1}\right)\left(\partial_{\omega}\varphi_{\omega}-\partial_ {\omega_{2}}\varphi_{\omega_{2}}\right)d\mathbf{k}_{1}\,d\mathbf{k}_{2}\,d \mathbf{k},\] where we have also used Eq. (10) in the argument of the frequency delta function, and exhausted the delta function of wavevectors by integrating out \(\mathbf{k}_{3}\). To constrain the integral to the resonant manifold, we fix \(\mathbf{k}\) and \(\mathbf{k}_{2}\) and change variables from \(\mathbf{k}_{1}\) to \((p_{1},\theta)\) where \(\theta\) is the angle between \(\mathbf{p}_{1}\) and \(\mathbf{p}_{2}\). The volume element transforms as \(d\mathbf{k}_{1}=p_{1}dp_{1}\,d\theta\), and we perform the \(\theta\) integral as follows: \[\int(\ldots)\delta(-2\mathbf{p}_{1}\cdot\mathbf{p}_{2})p_{1}\,d\theta\,dp_{1}= \int(\ldots)\frac{1}{p_{2}}\,dp_{1},\] where we have taken into account the fact that \(\mathbf{p}_{1}\) and \(\mathbf{p}_{2}\) are orthogonal, see Appendix A. Then, using the properties of the scalar triple product, we can write \(\mathbf{k}\cdot\mathbf{p}_{1}=\pm\mathbf{k}\cdot(\mathbf{e}_{z}\times\mathbf{p} _{2})p_{1}/p_{2}=\pm\mathbf{e}_{z}\cdot(\mathbf{p}_{2}\times\mathbf{k})p_{1}/p _{2}\). Thus we have \[\int\varphi_{\mathbf{k}}\frac{\partial n_{\mathbf{k}}}{\partial t}\,d\mathbf{k} =4\pi\int\frac{\gamma^{2}p_{1}^{2}}{\left(\Lambda+p_{1}^{2}\right)^{2}}\frac{| \mathbf{p}_{2}\times\mathbf{k}|^{2}}{p_{2}^{3}}n_{2}^{2}n_{k}^{2}\left( \partial_{\omega}n_{\omega}^{-1}-\partial_{\omega_{2}}n_{\omega_{2}}^{-1} \right)\left(\partial_{\omega}\varphi_{\omega}-\partial_{\omega_{2}}\varphi_{ \omega_{2}}\right)dp_{1}\,d\mathbf{k}_{2}\,d\mathbf{k}. \tag{12}\] The dependence on \(p_{1}\) can be factored into the reduced interaction coefficient, which can be calculated exactly, \[S_{\Lambda}=4\pi\int_{0}^{\infty}\frac{\gamma^{2}p_{1}^{2}}{\left(\Lambda+p_ {1}^{2}\right)^{2}}\,dp_{1}=\frac{\pi^{2}\gamma^{2}}{\sqrt{\Lambda}}. \tag{13}\] This last step highlights the important feature of the SHE in this analysis: it is in this step that we have used the peaked nature of the interaction coefficient to reduce it to the coefficient \(S_{\Lambda}\) analytically. To our knowledge, the SHE are the first system analysed in wave turbulence theory whose interaction coefficient can be reduced in this way. 
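As a quick sanity check on Eq. (13), the \(p_{1}\) integral can be evaluated by quadrature and compared with the closed form \(\pi^{2}\gamma^{2}/\sqrt{\Lambda}\). The snippet below is our own illustration; the values of \(\gamma\) and \(\Lambda\) are arbitrary.

```python
# Quadrature check of the reduced interaction coefficient S_Lambda, Eq. (13).
# gamma and Lam are arbitrary test values (assumption).
import numpy as np
from scipy.integrate import quad

gamma, Lam = 0.7, 2.3

integrand = lambda p1: 4.0 * np.pi * gamma**2 * p1**2 / (Lam + p1**2)**2
S_numeric, _ = quad(integrand, 0.0, np.inf)
S_exact = np.pi**2 * gamma**2 / np.sqrt(Lam)

print(S_numeric, S_exact)      # the two agree to quadrature accuracy
assert np.isclose(S_numeric, S_exact)
```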
To express the \(\mathbf{p}_{2}\) dependence of (12) in terms of \(\mathbf{k}_{2}\) and \(\mathbf{k}\), we denote the angle between vectors \(\mathbf{k}\) and \(\mathbf{k}_{2}\) by \(\phi\), note that \(|\mathbf{p}_{2}\times\mathbf{k}|^{2}=|\mathbf{k}_{2}\times\mathbf{k}|^{2}=k_{ 2}^{2}k^{2}\sin^{2}(\phi)\), and use basic trigonometry to re-express \(p_{2}\), giving \[\int\varphi_{\mathbf{k}}\frac{\partial n_{\mathbf{k}}}{\partial t}\,d\mathbf{ k}=S_{\Lambda}\int\frac{k_{2}^{2}k^{2}\sin^{2}(\phi)}{(k_{2}^{2}-2k_{2}k\cos( \phi)+k^{2})^{3/2}}n_{2}^{2}n_{k}^{2}\left(\partial_{\omega}n_{\omega}^{-1}- \partial_{\omega_{2}}n_{\omega_{2}}^{-1}\right)\left(\partial_{\omega}\varphi _{\omega}-\partial_{\omega_{2}}\varphi_{\omega_{2}}\right)d\mathbf{k}_{2}\,d \mathbf{k}.\] The \(\mathbf{k}\leftrightarrow\mathbf{k}_{2}\) symmetry of the integrand allows us to replace \(\left(\ldots\right)\left(\partial_{\omega}\varphi_{\omega}-\partial_{\omega_ {2}}\varphi_{\omega_{2}}\right)\to 2(\ldots)\left(\partial_{\omega}\varphi_{ \omega}\right)\). The next step is to move to frequency space by writing the integrations over \(\mathbf{k}\) and \(\mathbf{k}_{2}\) in polar form, so that \(d\mathbf{k}=(1/2)d\chi\,d\omega\) and \(d\mathbf{k}_{2}=(1/2)d\phi\,d\omega_{2}\). Here \(\chi\) is the polar angle of wavevector \(\mathbf{k}\), which we integrate out immediately and cancel from both sides. We obtain \[\int\varphi_{\omega}\frac{\partial n_{\omega}}{\partial t}\,d\omega=S_{\Lambda }\int\frac{\omega_{2}\omega\sin^{2}(\phi)}{(\omega_{2}-2\sqrt{\omega_{2} \omega}\cos(\phi)+\omega)^{3/2}}n_{\omega_{2}}^{2}n_{\omega}^{2}\left(\partial _{\omega}n_{\omega}^{-1}-\partial_{\omega_{2}}n_{\omega_{2}}^{-1}\right)\left( \partial_{\omega}\varphi_{\omega}\right)d\phi\,d\omega_{2}\,d\omega. \tag{14}\] Finally we integrate by parts with respect to \(\omega\) to isolate the test function \(\varphi_{\omega}\) on both sides, and use the fact that \(\varphi_{\omega}\) is arbitrary, to obtain the SLAM for the SHE: \[\frac{\partial n_{\omega}}{\partial t}=-\frac{1}{\pi}\frac{\partial Q}{ \partial\omega},\] (15a) where \[Q=\pi S_{\Lambda}\int f\!\left(\!\sqrt{\frac{\omega_{2}}{\omega}}\right)\frac{ \omega_{2}}{\sqrt{\omega}}\,n_{\omega_{2}}^{2}n_{\omega}^{2}\left(\partial_{ \omega}n_{\omega}^{-1}-\partial_{\omega_{2}}n_{\omega_{2}}^{-1}\right)d\omega_{2}\] (15b) is the waveaction flux flowing through \[\omega\] in frequency space (or circle of radius \[k=\sqrt{\omega}\] in wavevector space; the factor of \[\pi\] arises from the transformation between the two spaces). In Eq. ( 15b ) the function \[f(s)\] is defined as \[f(s)=\int_{0}^{2\pi}\frac{\sin^{2}(\phi)}{(1-2s\cos(\phi)+s^{2})^{3/2}}\,d\phi. \tag{15c}\] In Appendix B we note some properties of \(f(s)\). The SLAM, defined by Eqs. (15), is the main result of the present paper. In future studies we will use this model to analyse both stationary and evolving wave turbulence in nonlocal nonlinear optics modelled by the SHE. We envisage that an equivalent model can be derived for the 3D case, which would apply to systems of self-gravitating bosons. ### Conservation of invariants in the SLAM To show that the original invariants of the WKE continue to be conserved in the SLAM, we first note that (15a) is a continuity equation for waveaction, and so \(N\) is manifestly conserved. 
Secondly, we note that the energy density is \(\omega n_{\omega}\), and so the continuity equation for energy is \[\frac{\partial(\omega n_{\omega})}{\partial t}=-\frac{1}{\pi}\frac{\partial P} {\partial\omega} \tag{16}\] where \(P\) is the energy flux. Together with Eq. (15a), this gives \(\partial_{\omega}P=\omega\partial_{\omega}Q\). Integrating from \(0\) to \(\omega\), we obtain \[P(\omega)=\omega Q(\omega)-\int_{0}^{\omega}Q(\tilde{\omega})\,d\tilde{\omega}. \tag{17}\] Integrating Eq. (16) over all \(\omega\) gives \(\partial_{t}E=-[P(\infty)-P(0)]/\pi\). Using Eq. (17), and assuming that the particle flux decays fast enough at large and small \(\omega\), so that \(\omega Q(\omega)|_{\infty}=\omega Q(\omega)|_{0}=0\), we obtain \[\frac{\partial E}{\partial t}=S_{\Lambda}\int_{0}^{\infty}\int_{0}^{\infty}f \bigg{(}\!\sqrt{\frac{\omega_{2}}{\tilde{\omega}}}\bigg{)}\,\frac{\omega_{2}} {\sqrt{\tilde{\omega}}}\,n_{\omega_{2}}^{2}n_{\tilde{\omega}}^{2}\left( \partial_{\tilde{\omega}}n_{\tilde{\omega}}^{-1}-\partial_{\omega_{2}}n_{ \omega_{2}}^{-1}\right)d\omega_{2}d\tilde{\omega}. \tag{18}\] Now we observe that by Eq. (10), the factor \((\omega_{2}/\sqrt{\tilde{\omega}})\,f\!(\!\sqrt{\omega_{2}/\tilde{\omega}})\) is symmetric under \(\tilde{\omega}\leftrightarrow\omega_{2}\). This leaves the integrand on the right-hand side of Eq. (18) antisymmetric under exchange of the integration variables, and so we must have \(\partial_{t}E=0\). Therefore, in a closed system the SLAM preserves the same quadratic invariants as the WKE from which it is derived. The rest of this paper is devoted to obtaining solutions of the SLAM, particularly the solutions that realise the dual cascade of invariants that is predicted by the Fjortoft argument in a forced-dissipated system. ## IV Stationary solutions of the SLAM In this section we show that the usual stationary solutions of the WKE--the equilibrium RJ spectrum, and the KZ cascade spectra--are stationary solutions of the SLAM (15). We are particularly interested in spectra that are self-similar, i.e. of power-law form \(n_{\omega}=C\omega^{-x}\), where \(C\) is a constant that is positive for physical spectra. ### Thermodynamic equilibrium (RJ) spectrum The RJ spectrum describes the state of thermodynamic equilibrium where a linear combination of the integral invariants is partitioned equally over \(\mathbf{k}\)-space: \[n_{\omega}=\frac{T}{\mu+\omega}\quad\text{(equipartition of $\mu N+E$)}, \tag{19}\] where the thermodynamic potentials are the temperature \(T\) and chemical potential \(\mu\) (both constants). This spectrum is a stationary solution of the SLAM because the bracket \(\left(\partial_{\omega}n_{\omega}^{-1}-\partial_{\omega_{2}}n_{\omega_{2}}^{ -1}\right)\) in Eq. (15b) vanishes when Eq. (19) is substituted. The RJ spectrum has the asymptotic limits \[\begin{split} n_{\omega}&\propto\omega^{0}\quad \text{(equipartition of $N$)},\\ n_{\omega}&\propto\omega^{-1}\quad\text{(equipartition of $E$)},\end{split} \tag{20}\] which are self-similar spectra with spectral indices \(x=0\) and \(x=1\), respectively. ### Stationary nonequilibrium cascade (KZ) spectra As mentioned in Sec. I, in many systems one can find stationary solutions of the WKE that are of power-law form, and which describe the constant flux of invariants via a self-similar, scale-by-scale cascade. These are the KZ cascade spectra, and they are the first candidate for the spectra that realise the dual cascade predicted by the Fjortoft argument. 
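Before examining the KZ spectra analytically, it is useful to have a direct numerical handle on Eq. (15b). The sketch below is our own illustration (not part of the original analysis); the values of \(S_{\Lambda}\), \(T\), \(\mu\), the power-law index and the finite integration window standing in for the inertial range are all arbitrary choices. It evaluates \(f(s)\) from Eq. (15c) by quadrature and computes the waveaction flux \(Q(\omega)\) for a given spectrum: on an RJ spectrum the bracket in Eq. (15b) vanishes identically and the flux comes out as zero, while a generic power law gives a finite flux through the chosen window.

```python
# Numerical evaluation of the SLAM waveaction flux, Eqs. (15b)-(15c).
# S_Lam, T, mu, the power-law index and the integration window are
# arbitrary illustrative choices (assumptions).
import numpy as np
from scipy.integrate import quad

S_Lam = 1.0                     # reduced interaction coefficient, Eq. (13)

def f(s):
    """Angular factor f(s) of Eq. (15c), by direct quadrature."""
    g = lambda phi: np.sin(phi)**2 / (1.0 - 2.0*s*np.cos(phi) + s**2)**1.5
    val, _ = quad(g, 0.0, 2.0*np.pi, limit=200)
    return val

def Q_flux(n, dn_dw, omega, w_min=0.1, w_max=10.0):
    """Waveaction flux Q(omega) of Eq. (15b), over a finite frequency window."""
    def integrand(w2):
        # bracket = d(1/n)/domega - d(1/n_2)/domega_2
        bracket = (-dn_dw(omega)/n(omega)**2) - (-dn_dw(w2)/n(w2)**2)
        return f(np.sqrt(w2/omega)) * w2/np.sqrt(omega) * n(w2)**2 * n(omega)**2 * bracket
    val, _ = quad(integrand, w_min, w_max, points=[omega])
    return np.pi * S_Lam * val

# RJ spectrum (Eq. 19): stationary, so the flux vanishes.
T, mu = 1.0, 0.5
n_rj  = lambda w: T/(mu + w)
dn_rj = lambda w: -T/(mu + w)**2
print(Q_flux(n_rj, dn_rj, omega=1.0))      # ~ 0

# A generic power law n = w^{-x} gives a nonzero flux through omega.
x = 0.8
n_pl  = lambda w: w**(-x)
dn_pl = lambda w: -x * w**(-x - 1.0)
print(Q_flux(n_pl, dn_pl, omega=1.0))
```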
When the KZ spectra are physically relevant, the flux of each dynamical invariant will be described by its own KZ spectrum, and on that spectrum the flux of all other dynamical invariants will be zero. To find the KZ spectra, we first substitute \(n_{\omega}=C\omega^{-x}\) into Eq. (15b), giving for the waveaction flux \[Q=\pi S_{\Lambda}C^{3}x\int f\bigg{(}\!\sqrt{\frac{\omega_{2}}{\omega}}\bigg{)}\,\frac{\omega_{2}}{\sqrt{\omega}}\,\omega_{2}^{-2x}\omega^{-2x}(\omega^{x-1}-\omega_{2}^{x-1})\,d\omega_{2}. \tag{21}\] #### iii.1.1 Energy cascade spectrum In the wave turbulence literature, KZ spectra are frequently found by making non-identity transformations of the collision integral that allow one to read off spectral indices \(x\) that make the integrand of the transformed collision integral vanish. This technique is known as the Zakharov-Kraichnan transform [3]. We now adapt this method to the SLAM in order to find the KZ energy cascade spectrum. We split the right-hand side of Eq. (21) into two halves. In the second half, we substitute \(\omega_{2}=\omega^{2}/\tilde{\omega}_{2}\). Using the reciprocal relation for \(f\) derived in Appendix B, \(f(1/s)=s^{3}f(s)\), and dropping tildes immediately, we obtain \[Q=\frac{\pi S_{\Lambda}C^{3}}{2}x\int\Big{[}1-\Big{(}\frac{\omega_{2}}{\omega}\Big{)}^{y}\Big{]}\,f\!\left(\!\sqrt{\frac{\omega_{2}}{\omega}}\,\right)\frac{\omega_{2}}{\sqrt{\omega}}\,\omega_{2}^{-2x}\omega^{-2x}(\omega^{x-1}-\omega_{2}^{x-1})\,d\omega_{2},\] with \(y=3x-3/2\). Choosing the spectral index \(x=1/2\) leads to a vanishing waveaction flux \(Q\), suggesting that this represents the KZ energy cascade spectrum. To see that this is indeed the case, we extract the overall \(\omega\) dependence in Eq. (21), leaving a reduced, dimensionless collision integral \(I(x)\), as follows: \[Q(\omega)=2\pi S_{\Lambda}C^{3}\omega^{(1-6x)/2}I(x),\qquad\text{where}\qquad I(x)=x\int_{0}^{\infty}f(s)s^{3-4x}(1-s^{2x-2})\,ds, \tag{22}\] and \(s=\sqrt{\omega_{2}/\omega}\). Substituting Eq. (22) into Eq. (17) gives for the energy flux \[P(\omega)=2\pi S_{\Lambda}C^{3}\frac{1-6x}{3-6x}\omega^{(3-6x)/2}I(x). \tag{23}\] Setting \(x=1/2\) in Eq. (22) reproduces the result that \(Q(\omega)=0\), because \(I(1/2)\propto\int_{0}^{\infty}sf(s)\,ds-\int_{0}^{\infty}\!f(s)\,ds\), which vanishes by the \(s\to 1/s\) symmetry of \(f\) given in Appendix B (note that the transformation \(s\to 1/s\) is exactly equivalent to making the Zakharov-Kraichnan transform). When \(x=1/2\), Eq. (23) gives \(P(\omega)=0/0\). To resolve this indeterminacy we use L'Hopital's rule, obtaining \[P(\omega)=\frac{2\pi S_{\Lambda}C^{3}}{3}I^{\prime}(1/2)=\frac{2\pi S_{\Lambda}C^{3}}{3}\left(3\int_{0}^{\infty}f(s)\log(s)\,ds\right)=-4.85432\left(\pi S_{\Lambda}C^{3}\right),\] where the prime denotes differentiation with respect to \(x\), and we have again used the reciprocal relation from Appendix B. Thus, when \(x=1/2\) the energy flux \(P\) is a constant, independent of \(\omega\), while the waveaction flux \(Q\) vanishes, indicating that this is indeed the KZ energy cascade spectrum. However, we note that on this spectrum the sign of \(P\) is negative, which is opposite to the sign predicted by the Fjortoft argument. We elaborate on this in Sec. V. #### iii.1.2 Waveaction cascade spectrum To determine the KZ spectrum for the cascade of waveaction, we put \(x=1/6\) in Eq. (22), obtaining \(Q(\omega)=2\pi S_{\Lambda}C^{3}I(1/6)\), which is independent of \(\omega\). Likewise Eq. (23) gives \(P(\omega)=0\times I(1/6)\). This would satisfy the requirements to be the KZ waveaction cascade spectrum if \(I(1/6)\) converged. 
However, from the second equation in (22) we see that \[I(1/6)=\frac{1}{6}\int_{0}^{\infty}f(s)(s^{7/3}-s^{2/3})\,ds.\] Noting the large-\(s\) asymptotic behaviour \(f(s)\simeq\pi/s^{3}\) derived in Appendix B, we see that \(I(1/6)\) diverges as \(s\to\infty\). Therefore, even though the power-law spectrum with \(x=1/6\) superficially gives the correct properties for the KZ waveaction cascade spectrum, we must rule it out because the collision integral is divergent on that spectrum. #### iii.1.3 Summary of KZ spectra To summarise, the formal KZ cascade spectra that are our first candidates for realising the dual cascade are \[\begin{split} n_{\omega}\propto\omega^{-1/2}&\text{ (KZ spectrum: cascade of $E$)},\\ n_{\omega}\propto\omega^{-1/6}&\text{ (KZ spectrum: cascade of $N$)}.\end{split} \tag{24}\] However, both spectra suffer pathologies: on the first spectrum the flux of energy is in the wrong direction, and the second spectrum causes the collision integral to diverge. We must therefore rule them out, and seek other spectra on which the dual cascade can be supported. These pathologies notwithstanding, it is still worth noting that the original interaction coefficient \(W_{3{\bf k}}^{12}\) is not a homogeneous function of the four wavevectors, i.e. it possesses no obvious properties that would lead to a self-similar scaling behaviour. Nevertheless, the semilocality property of \(W_{3{\bf k}}^{12}\) allows us to integrate out its non-homogeneous part, giving the constant coefficient \(S_{\Lambda}\). The resulting equation, the SLAM, is self-similar. However, unlike KZ spectra, the relevant solutions of the SLAM that manifest the dual cascade do not turn out to be self-similar themselves, as we demonstrate in the following sections. ### Interpretation of divergent spectra As mentioned above, the divergence of the collision integral at a certain scale causes us to rule out a prospective KZ spectrum. However, we expect the true solution to retain some characteristics indicated by this divergence. Namely, waves of a scale that approaches the divergent scale will be increasingly dominant in every quartet of interacting waves in which they participate. In other words, wave interactions at every scale will be mediated by waves whose scale approaches the divergent scale. In this situation, the true solution is termed a nonlocal flux spectrum, as opposed to the spectrally-local cascades that are described by physically-realisable KZ solutions. In the specific case here, the divergence of the KZ waveaction cascade spectrum as \(s\to\infty\) implies that the true waveaction flux spectrum is nonlocal, dominated by interactions at \(\omega_{2}\gg\omega\). By contrast, the fact that the KZ energy cascade spectrum gives convergence of the collision integral signals that the true cascade solution has local interactions. We need only resolve the matter of the cascade direction, which we do in Sec. V. In Appendix C we present a full convergence study of the collision integral on general power-law spectra, allowing us to see the KZ spectra in their full context. The results of this convergence study are shown in Fig. 1(b). ## V Flux directions on power-law spectra For the sake of completeness, we present in this section a general diagrammatic argument [2] that determines the directions of both the energy flux \(P\) and the waveaction flux \(Q\), on all power-law spectra \(n_{\omega}=C\omega^{-x}\) (with \(C>0\)). 
In order to present the argument, we neglect for a moment the divergence of the collision integral on the KZ waveaction cascade spectrum. Recall that if the sign of a flux is positive (negative), the invariant flows towards large (small) \(\omega\). It is natural to assume that for very sharply peaked spectra, the resulting fluxes will flatten the spectra out. Thus, for \(x\to\infty\) (spectrum sharply peaked around \(\omega\approx 0\)), we expect the associated fluxes to be strongly positive. Likewise for \(x\to-\infty\) (spectrum sharply rising), the fluxes will be strongly negative. In between these two, the fluxes will both be zero on each of the thermodynamic spectra \(x=0,1\). As for the KZ spectra, by construction the KZ energy cascade spectrum is for the _pure_ flux of energy, with no waveaction flux. Likewise the energy flux is zero on the KZ spectrum for a pure waveaction cascade. In our case we respectively have \(Q=0\) for \(x=1/2\), and \(P=0\) for \(x=1/6\) (were the latter to give a convergent collision integral). Assuming that the fluxes vary continuously with spectral index \(x\) forces them to behave qualitatively as shown in Fig. 1(a). We see that the ordering of the zero crossings forces \(P\) to be negative on the KZ energy cascade spectrum, as found in Sec. IV.2.1, and also forces \(Q\) to be positive on the KZ waveaction cascade spectrum. These are both in direct contradiction to the conclusion of the Fjortoft argument, which is that \(P\) must be positive and \(Q\) negative on the stationary spectra that realise the cascades, see Sec. II.2.2. If the respective cascades are to be realised by the KZ spectra, the only way to reconcile the two arguments is for the KZ spectra to have non-positive (i.e. negative or even complex) prefactor constants \(C\), which is clearly unphysical. Therefore, we conclude once more that the KZ spectra found in Sec. IV cannot realise the dual cascade in any physically-relevant scenario. As the KZ spectrum must be ruled out, the true solution to realise a steady-state cascade must be related to the other stationary solution: the RJ spectrum [4]. Indeed, experience with other wave turbulence systems suggests that the true cascade solution is an RJ spectrum with small deviations that are nonetheless responsible for carrying the entire flux, see Eq. (25). Such solutions are termed warm cascade spectra [4; 10; 12]. We therefore hypothesise that the flux-carrying spectra that realise the dual cascade are warm spectra in both the direct and inverse inertial ranges. However, anticipating the results of Sec. VI, we will conclude that the inverse cascade of \(N\) is not only nonlocal in character, but is also realised by a warm spectrum with negative thermodynamic potentials \(T\) and \(\mu\). By contrast, the convergence of the KZ energy cascade spectrum found in Sec. IV.2.1, and discussion of Sec. IV.3, indicates that the true direct cascade of \(E\) is local. We therefore expect the direct cascade to be warm with positive \(T\) and \(\mu\), and spectrum \[n_{\omega}^{\rm dir}=\frac{T}{\mu+\omega+\Delta(\omega)}. \tag{25}\] Here, \(\Delta(\omega)\) is the deviation from the RJ spectrum, which remains small in the inertial range, far from the forcing and dissipation scales. At the end of the inertial range \(\Delta(\omega)\) becomes large, until the spectrum terminates at the dissipation scale \(\omega_{d+}\). We sketch the warm direct cascade \(n_{\omega}^{\rm dir}\) qualitatively in red in Fig. 2. 
It is a prediction from the superlocal DAM that the warm spectrum terminates in a logarithmic compact front, and that the temperature of the cascade spectrum \(T\) is determined by the energy flux \(P\), and small-scale dissipation range \(\omega_{d+}\)[12]. We leave it to future work, reinforced by numerical simulations, to examine these relations for the direct warm cascade realised by the SLAM. ## VI Nonlocal inverse cascade solution In this section, we seek the stationary solution of the SLAM that realises a constant inverse flux of waveaction, and that is nonlocal in the sense suggested by the divergence of the corresponding KZ spectrum, see Sec. IV.3. We also seek to parameterise the solution in terms of quantities that we can control externally, for example in simulations. These will turn out to be the flux \(Q\), the forcing and dissipation scales \(\omega_{f}\) and \(\omega_{d-}\), and the temperature of the inverse warm cascade \(T\). We set \(Q\) to be negative in Eq. (15b) to specify an inverse flux, and substitute \(f(s)=\pi/s^{3}\), its \(\omega_{2}\gg\omega\) limit. Equation (15b) becomes \[\frac{\partial n_{\omega}}{\partial\omega}=\frac{\hat{Q}}{A\omega}+\frac{B}{A }n_{\omega}^{2} \tag{26}\] where \(\hat{Q}=-Q/\pi^{2}S_{\Lambda}>0\) and the integrals over \(\omega_{2}\) are absorbed into the constants \[A=\int\frac{n_{\omega_{2}}^{2}}{\sqrt{\omega_{2}}}\,d\omega_{2} \quad\text{and}\quad B=\int\frac{1}{\sqrt{\omega_{2}}}\frac{\partial n_{ \omega_{2}}}{\partial\omega_{2}}\,d\omega_{2}. \tag{27}\] Figure 1: (a) Sketch of the energy flux \(P\) and waveaction flux \(Q\) dependence on spectral index \(x\), where \(n_{\omega}=C\omega^{-x}\). The signs of the fluxes are determined by the relative ordering of the RJ and KZ spectra (Eqs. (20) and (24) respectively), where one or both of the fluxes is zero, and the behaviour at large and small \(x\). The qualitative behaviour of the fluxes in between the zeros follows by continuity, see Sec. V. (b) Convergence (green) or divergence (red) of Eq. (15b) with respect to spectral index \(x\). Above the \(x\) axis refers to \(\omega_{2}\) in the ultraviolet range, and below the \(x\) axis refers to \(\omega_{2}\) in the infrared range, see Appendix C. (Convergence is unconditional exactly on the thermodynamic spectra \(x=0,1\). This is indicated by the narrow green strips around these two spectra.) Manifestly \(A>0\). Self-consistency of the asymptotic solutions of Eq. (26) demands that \(B>0\) also (see discussion after Eq. (29) below). ### Nonlocal inverse cascade: asymptotics First, we examine the asymptotics of Eq. (26) to extract key characteristics of the full solution. We denote the frequency at which the two terms on the right-hand side are equal as \(\omega_{s}\). For \(\omega\ll\omega_{s}\) the first term on the right-hand side of Eq. (26) dominates. The solution to the resulting asymptotic equation is \[n_{\omega}^{\ll}=\frac{\dot{Q}}{A}\log\left(\frac{\omega}{\omega_{d-}}\right). \tag{28}\] Here we have written the constant of integration as the frequency at which the solution \(n_{\omega}^{\ll}\) vanishes, and interpreted it as the dissipation scale \(\omega_{d-}\). Note that the solution finds a vanishing point naturally, without specifying a dissipation mechanism. This is in common with warm solutions of superlocal DAMs that contain compact fronts at which the solution vanishes logarithmically, see e.g. [10; 12]. 
(We expect that if dissipation is not provided, so that the flux is drained from the system by the time it reaches \(\omega_{d-}\), the spectrum would grow in this vicinity so that the situation would not be time-independent. Eventually the nonlinearity would become strong here, so that the wave turbulence assumptions would become violated.) For \(\omega\gg\omega_{s}\) the second term on the right-hand side of Eq. (26) is dominant, and we have the asymptotic solution \[n_{\omega}^{\gg}=\frac{A/B}{\omega_{*}-\omega}. \tag{29}\] Here the constant of integration appears as \(\omega_{*}\), the frequency at which the solution becomes singular. By hypothesis, \(\omega_{*}\) is greater than any \(\omega\) in the inverse cascade range. The integral of \(n_{\omega}^{\gg}\) is weakly (logarithmically) divergent as \(\omega\to\omega_{*}\), which is consistent with the assumption of a nonlocal solution that is dominated by interactions with \(\omega_{2}\gg\omega\). Obviously, in any realistic scenario the solution cannot continue up to \(\omega_{*}\). We therefore cut the solution off at \(\omega_{f}\) where \(\omega_{d-}\ll\omega_{f}<\omega_{*}\). This cutoff represents the end of the inverse cascade inertial range; in a forced-dissipated setup this is none other than the forcing scale. By choosing \(\omega_{f}\) in the vicinity of \(\omega_{*}\), so that \(\omega_{2}\) can approach the singularity frequency \(\omega_{*}\), we keep consistency with the nonlocality assumption \(\omega_{2}\gg\omega\). If we define the temperature \(T\coloneqq-A/B\) and chemical potential \(\mu\coloneqq-\omega_{*}\), we also see that \(n_{\omega}^{\gg}\) is actually a thermodynamic spectrum (19) with negative \(T\) and \(\mu\). The interpretation of RJ equilibria with negative thermodynamic potentials was given in Ref. [31] for the case of three sign-definite invariants. For the present case with two invariants, these are exactly equilibria with spectra diverging at some nonzero \(\mu\) (see appendix of Ref. [31]). Figure 2: Qualitative sketch of the steady-state dual cascade predicted by the SLAM, which realises the prediction of the Fjortoft argument. Forcing at \(\omega_{f}\) injects waveaction and energy into the system. The negative waveaction flux \(Q\) is realised by the inverse cascade spectrum \(n_{\omega}^{\rm inv}\) (blue), which terminates at the scale \(\omega_{d-}\) where the majority of the waveaction is dissipated. The positive energy flux \(P\) is realised by the direct cascade spectrum \(n_{\omega}^{\rm dir}\) (red), which terminates at \(\omega_{d+}\) where most of the energy is dissipated. The asymptotic solutions \(n_{\omega}^{\ll}\) and \(n_{\omega}^{\gg}\) are overlaid in white dashes. (Note that the inverse and direct cascade spectra meet at \(\omega_{f}\), which is strictly less than, but of the same order as, \(\omega_{*}\). In a realistic system, the break in gradient at \(\omega_{f}\) will be regularised by the specific forcing protocol, which we do not attempt to show here.) Note that, had we chosen \(B<0\), we would have obtained the asymptotic solution \(n_{\omega}^{B<0}=A/|B|(\omega+\omega_{*})\). For \(\omega_{*}<0\) this is negative in \(0<\omega<|\omega_{*}|\), which is unphysical. For \(\omega_{*}\geq 0\), if we substitute \(n_{\omega}^{B<0}\) back into Eq. (26), the first term on the right-hand side dominates for all \(\omega\geq 0\), which is inconsistent with the assumptions for deriving \(n_{\omega}^{B<0}\). We therefore rule out the \(B<0\) case. 
Had we chosen \(B=0\), the solution \(n_{\omega}^{\ll}\) would be the full solution across the whole range, but then the second of Eqs. (27) would give \(B\neq 0\), a contradiction. Hence we rule out \(B=0\) as well, and therefore we must have \(B>0\). Thus, the full solution of Eq. (26) resembles an RJ spectrum for \(\omega\gg\omega_{d-}\), Eq. (29), but has a deviation that grows towards the infrared, and that terminates at \(\omega=\omega_{d-}\) with a logarithmic compact front, Eq. (28). This is exactly to say that it is a warm cascade spectrum, but with negative thermodynamic potentials. #### iv.1.1 Determination of constants \(A\) and \(B\) The integrals in Eqs. (27) must be taken over the whole inverse cascade range, from \(\omega_{d-}\) up to \(\omega_{f}\). Since the inverse cascade spectrum is nonlocal, the dominant contributions to the integrals occur at large \(\omega\). Using the asymptotic spectrum \(n_{\omega}^{\gg}\), and evaluating Eqs. (27) at the upper limit \(\omega_{f}\), we obtain, to leading order, \(B^{2}/A=1/[\sqrt{\omega_{f}}(\omega_{*}-\omega_{f})]\). In terms of the temperature \(T\) this gives \[A=\frac{T^{2}}{\sqrt{\omega_{f}}(\omega_{*}-\omega_{f})}\quad\text{and}\quad B=-\frac{T}{\sqrt{\omega_{f}}(\omega_{*}-\omega_{f})}, \tag{30}\] i.e. we have expressed \(A\) and \(B\) in terms of \(T,\omega_{f}\) and \(\omega_{*}\). ### Nonlocal inverse cascade: full solution Equation (26) can be solved analytically by noting that it is a Riccati equation. Using standard techniques [32], its solution is found to be \[n_{\omega}^{\text{inv}}=-\sqrt{\frac{\hat{Q}}{B\omega}}\left(\frac{Y_{0}(K)+J_{0}(K)c}{Y_{1}(K)+J_{1}(K)c}\right),\quad\text{with}\quad K=\frac{2\sqrt{B\hat{Q}\omega}}{A}, \tag{31}\] where \(J_{n}(K),Y_{n}(K)\) are \(n\)-th order Bessel functions of the first and second kinds respectively, and \(c\) is the constant of integration. We can relate \(c\) to the integration constants of the asymptotic solutions by noting that \(\omega_{d-}\) corresponds to the first zero of the right-hand side of Eq. (31). This will be at the first root of the numerator \(Y_{0}(K)+J_{0}(K)c\). Using the asymptotics of the Bessel functions for \(K\ll 1\) gives, to leading order, \[\omega_{d-}=\frac{A^{2}}{B\hat{Q}}e^{-2\gamma-\pi c}, \tag{32}\] where \(\gamma\approx 0.5772\) is the Euler-Mascheroni constant. Likewise, \(\omega_{*}\) corresponds to the first root of \(Y_{1}(K)+J_{1}(K)c\), the denominator of Eq. (31). To leading order this gives \[\frac{A^{2}}{B\hat{Q}\omega_{*}}+\log\left(\frac{A^{2}}{B\hat{Q}\omega_{*}}\right)=\pi c+2\gamma-1. \tag{33}\] This has solution \(\omega_{*}=A^{2}/[B\hat{Q}\,W(e^{\pi c+2\gamma-1})]\), where \(W(x)\) is the Lambert-W function. Firstly, we note that the first term on the right-hand side of Eq. (33) is dominant as we send \(c\to\infty\), while the left-hand side is greater than \(A^{2}/(B\hat{Q}\omega_{*})\). This gives \(\omega_{*}^{-1}(c)=\mathcal{O}(c)\), whereas from Eq. (32), \(\omega_{d-}\to 0\) exponentially as \(c\to\infty\). Thus the ratio \(\omega_{d-}/\omega_{*}\to 0\) as \(c\to\infty\), and so by adjusting \(A,B\), and \(\hat{Q}\) to set the overall scaling, we can make the inverse inertial range arbitrarily wide. Furthermore, we can eliminate \(c\) between Eqs. (32) and (33), and eliminate \(A\) and \(B\) using Eq. (30), obtaining \[\log\left(\frac{\omega_{*}}{\omega_{d-}}\right)=1+\frac{T^{3}}{\hat{Q}\sqrt{\omega_{f}}(\omega_{*}-\omega_{f})\omega_{*}}. 
\tag{34}\] Equation (34) implicitly expresses \(\omega_{*}\) in terms of the control parameters \((\hat{Q},T,\omega_{f},\omega_{d-})\). Solving for \(\omega_{*}\) (for example numerically, or to any desired accuracy by iteration), and substituting into Eq. (30) allows \(A\) and \(B\) to be written in terms of the same set of parameters. We can likewise express \(c\) via Eq. (32), and finally obtain the solution \(n_{\omega}\) via Eq. (31), in terms of the control parameters \((\hat{Q},T,\omega_{f},\omega_{d-})\). Note that, unlike the case for superlocal DAMs, we cannot close the set of control parameters by writing \(T\) as a function of the flux and the forcing and dissipation scales. This is reminiscent of two-free-parameter stationary solutions of the Leith model [6]. (Closures could be provided by specific assumptions about the forcing or dissipation, for example that the forcing starts with a given flux and temperature, but these assumptions would not be universal.) In Fig. 2 we sketch the qualitative behaviour of the nonlocal inverse waveaction cascade spectrum \(n_{\omega}^{\rm inv}\) in blue. We also show the asymptotic solutions \(n_{\omega}^{\ll}\) and \(n_{\omega}^{\gg}\) in white dashes, and the frequency \(\omega_{*}\) where the inverse cascade spectrum becomes singular. ## VII Discussion and Conclusion ### Comparison with the NLS and SNE limits Before concluding, we make some remarks about the two limits of the SHE that were mentioned in Sec. II. The first is the NLS limit, where we send \(\gamma,\Lambda\to\infty\), while \(\gamma/\Lambda\to\rm const.\) After rescaling \(\psi\) we obtain Eq. (2). The second is the SNE limit, where we set \(\Lambda=0\) to obtain Eqs. (3). In both these limits, the interaction coefficient becomes a homogeneous function, in the sense that \(W_{\mu{\bf k}_{3}\,\mu{\bf k}}^{\mu{\bf k}_{1}\,\mu{\bf k}_{2}}=\mu^{\beta}W_{{\bf k}_{3}{\bf k}}^{{\bf k}_{1}{\bf k}_{2}}\), with \(\beta=0\) in the NLS limit and \(\beta=-2\) in the SNE limit. This observation allowed us to heuristically construct superlocal DAMs of the NLS and SNE in Ref. [12]. There, we used the DAMs to examine the respective KZ spectra, and found that in both cases the flux directions contradicted the Fjortoft argument. We therefore proposed that the flux-carrying spectra were warm spectra in both the NLS and SNE. We are now in a position to revisit this work, in light of the rigorously-derived SLAM. The first thing to note is that in the NLS limit, the interaction coefficient becomes a constant across all wavevectors. In particular, this no longer respects the semilocality property: pairs of wavevectors are no longer picked out by the sharp decay of the interaction coefficient. We therefore cannot approximate the full WKE by the SLAM--to do so would neglect the majority of wave interactions, all of which are important in evolving the spectrum. However, we can still use the DAM for qualitative understanding, e.g. the argument about flux directions and the prediction of warm cascades [4; 12]. By contrast, the SNE are a singular limit of the SHE. We noted in Sec. II that the SNE are ill-posed, and that their regularisation requires restoring \(\Lambda\neq 0\), i.e. moving to the SHE. To elaborate: in order to develop the wave turbulence theory and derive the WKE, one starts with a periodic system [2], but in a periodic system, Eq. (3b) has no non-trivial solutions. Once we derive the SLAM, this ill-posedness is revealed in Eq. 
(13): setting \(\Lambda=0\) sends \(S_{\Lambda}\to\infty\), i.e. the SLAM diverges for every spectrum \(n_{\omega}\). This indicates that one can formally write down the kinetic equation of the SNE, but the collision integral becomes infinite when any two wavevectors become equal. Likewise, one can obtain KZ spectra for the SNE based on dimensional arguments, but these spectra will be invalid because the collision integral will be divergent on these spectra. Moreover, the KZ spectra will change discontinuously when we regularise the kinetic equation by setting \(\Lambda\neq 0\). This is indeed what we find when we compare the KZ spectra found in Ref. [12] (namely \(\omega^{0}\) for the KZ waveaction cascade spectrum, and \(\omega^{-1/3}\) for the KZ energy cascade spectrum, for the 2D case) to Eqs. (24). Thus, we see that retaining \(\Lambda\neq 0\) in the SHE is necessary in order to regularise the singular SNE limit. This is a salutary lesson as it highlights the hidden pitfalls of such heuristic derivations of DAMs: their predictions are misleading if, as in our case, they do not respect essential properties of the original interaction coefficient. We speculate that a similar derivation of a semilocal model might be applied to other examples in the literature, e.g. in the theory of gravitational waves in Einstein's vacuum field model [33]. ### Conclusion Starting from the wave kinetic equation of the SHE, we have rigorously derived a reduced kinetic equation, the SLAM, by exploiting the natural locality properties of the interaction coefficient. We believe this to be the first such derivation of a reduced kinetic equation in which the locality assumption can be justified self-consistently. Having derived the SLAM, we use it to obtain the stationary spectra that are responsible for realising the dual cascade of energy and waveaction that is predicted by the Fjortoft argument. After deriving the formal KZ cascade spectra, and examining their flux directions and locality, we conclude that neither the direct cascade of energy nor inverse cascade of waveaction are realised by the respective KZ spectra. Instead, we predict that the dual cascade is carried by warm spectra. This concurs with our examination of the limits of the SHE in Ref. [12], even though some of that was carried out in the SNE limit, which is, in fact, singular. Here, though, the SLAM allows us to refine our prediction about the characters of the warm spectra. We predict that the direct energy cascade spectrum will have positive thermodynamic parameters, and the interactions will be between waves that are local in frequency. By contrast, the inverse cascade of waveaction will be carried by a nonlocal spectrum, with the interactions at every frequency \(\omega\) being dominated by the spectrum near the forcing scale \(\omega_{f}\). Accordingly, we derive a nonlocal, warm, inverse cascade spectrum, that is parameterised by a negative temperature and chemical potential. Our results on the dual cascade were derived for the forced-dissipated 2D SHE, which is the setup that leads to the clearest manifestation of the cascades. Results on the inverse cascade may also apply to the case of turbulence that evolves freely from an initial condition, due to the inverse cascade having finite capacity (the integral of the inverse cascade spectrum converges when we send \(\omega_{d-}\to 0\)). 
Experience with finite capacity KZ spectra shows that an initial condition fills out its respective inertial range in finite time, with the KZ spectrum establishing after an initial transient [34; 35; 8]. It remains to be tested whether this phenomenology carries over to the inverse cascade spectrum of the SHE. By contrast, the direct cascade has infinite capacity for energy (the integral defining energy diverges as \(\omega_{d+}\to\infty\)), and so it can absorb an arbitrary amount of energy that is sent into it, unless there is some small-scale cutoff that arrests the direct cascade, e.g. the finite numerical resolution of a computation. For such systems, the cascade spectrum typically does not form behind the front that propagates from an initial condition, unless continuously forced. The finite-capacity inverse cascade cannot absorb an arbitrary amount of waveaction. If no large-scale dissipation is provided, waveaction will arrive at the end of the inertial range and start to accumulate into coherent large-scale structures: condensates and solitons. The structure and dynamics of these structures are attracting much interest, particularly in astrophysics, where they could represent galactic dark matter halos [36; 37; 21; 38], or their 1D and 2D analogues in tabletop optical experiments [39; 40; 41; 24]. The dual cascade process is a universal mechanism whereby such large-scale structures emerge due to the interaction of weakly turbulent small-scale waves, at least in the initial transient phase where weak waves exist without any coherent structures present. Our results are therefore directly applicable to the 2D case, which may be accessible in optical experiments (in practice, optical turbulence evolving from an initial condition will be easier to realise in experiments, although forcing via periodic amplification of a recirculating beam may also be implemented [42]). They may also hold some relevance in the long-time regime where coherent, well-separated solitons exist and interact over long distances via the mutual exchange of weak waves [11]. In future work we hope to derive a similar SLAM for the 3D SHE, which will be applicable to the turbulent formation of galactic dark matter halos, and the 1D SHE, relevant to optical experiments such as those carried out in [43]. The same methodology should carry over to those cases, with the technical subtlety that in the 1D case the leading-order wave process is 6-wave, rather than 4-wave as in 2D and 3D. ## VIII Acknowledgements This work was supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 823937 for the RISE project HALT, and by the Simons Foundation Collaboration grant Wave Turbulence (Award ID 651471). J.L. and J.S. are supported by the Leverhulme Trust Project Grant RPG-2021-014. ## Appendix A Wavevectors relative to k form a right-angle triangle Waves on the resonant manifold are constrained to have a particular geometric relation. The frequency and wavevector resonance conditions (7) give \[\omega_{3\mathbf{k}}^{12} =(\mathbf{k}_{1}-\mathbf{k})\cdot(\mathbf{k}_{1}+\mathbf{k})+(\mathbf{k}_{2}-\mathbf{k}_{3})\cdot(\mathbf{k}_{2}+\mathbf{k}_{3})\] \[=2(\mathbf{k}_{1}-\mathbf{k})\cdot(\mathbf{k}-\mathbf{k}_{2})=0. \tag{10}\] Recalling the definition \(\mathbf{p}_{i}:=\mathbf{k}_{i}-\mathbf{k}\), we thus have that \(\mathbf{p}_{1}\) is orthogonal to \(\mathbf{p}_{2}\). A similar calculation gives the Pythagorean relation \(p_{1}^{2}+p_{2}^{2}=p_{3}^{2}\). 
We conclude that on the resonant manifold, \(\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3}\) form a right-angle triangle. ## Appendix B Properties of \(f(s)\) In this appendix we examine the function \[f(s)=\int_{0}^{2\pi}\frac{\sin^{2}(\phi)}{(1-2s\cos(\phi)+s^{2})^{3/2}}\,d\phi, \tag{10}\] where in our application to the SLAM (15), \(s=\sqrt{\omega_{2}/\omega}\) and so \(0\leq s<\infty\). ### Writing \(f(s)\) in terms of complete elliptic integrals Writing Eq. (10) as a derivative, integrating by parts, using symmetry under \(\phi\to 2\pi-\phi\), and double-angle formulae, we obtain \[f(s) =-\int_{0}^{2\pi}\frac{\sin(\phi)}{s}\frac{\partial}{\partial\phi}\left(\frac{1}{\sqrt{1-2s\cos(\phi)+s^{2}}}\right)d\phi\] \[=\frac{2(1+s)}{s^{2}}\left[\frac{1+s^{2}}{(1+s)^{2}}\,K\!\left(\frac{4s}{(1+s)^{2}}\right)-E\!\left(\frac{4s}{(1+s)^{2}}\right)\right], \tag{11}\] where \[K(z)=\int_{0}^{\pi/2}\frac{1}{\sqrt{1-z\sin^{2}(\sigma)}}\,d\sigma\quad\text{and}\quad E(z)=\int_{0}^{\pi/2}\sqrt{1-z\sin^{2}(\sigma)}\,d\sigma\] are the complete elliptic integrals of the first and second kind, respectively, and \(\sigma=\phi/2\). ### Asymptotics of \(f(s)\) From Eq. (11), and using the asymptotics of \(K(z)\) and \(E(z)\) around \(s=0,1,\infty\), we obtain \[f(s)=\pi+\mathcal{O}(s^{2}) \quad\text{as} \quad s\to 0, \tag{12a}\] \[f(s)=-2\log|s-1|+\mathcal{O}(1) \quad\text{as} \quad s\to 1, \tag{12b}\] \[f(s)=\frac{\pi}{s^{3}}+\mathcal{O}\left(\frac{1}{s^{5}}\right) \quad\text{as} \quad s\to\infty. \tag{12c}\] The integrand in Eq. (10) is undefined at \(s=1\) when \(\phi=0\). This behaviour is resolved by Eq. (12b): we see that \(f(s)\) has a logarithmic singularity as \(s\to 1\). This singularity is integrable in Eq. (15b) as long as the rest of the integrand is regular as \(\omega_{2}\to\omega\). This regularity holds for all cases presented in this paper. ### \(f(s)\) with reciprocal argument Note that from Eqs. (10) and (11), transforming \(s\to 1/s\) gives \[f(1/s)=s^{3}f(s). \tag{13}\] ## Appendix C Locality of power-law spectra In this appendix we carry out an analysis of the convergence of the collision integral for general power-law spectra \(n_{\omega}=C\omega^{-x}\). We do this for completeness, and as a demonstration of the ease of analysis that the SLAM permits. The integral in Eq. (21) could diverge as \(\omega_{2}\to 0\) or \(\omega_{2}\to\infty\). The behaviour of \(f(s)\) in these ranges is noted in Appendix B.2. For \(\omega_{2}\to\infty\), i.e. \(s\to\infty\), we have that \(f(s)\to\pi/s^{3}\), therefore \(Q\propto\int^{\infty}\omega_{2}^{1-3/2-2x}(1-\omega_{2}^{x-1})\ d\omega_{2}\), which is convergent for \(x>\max\{1/4,-1/2\}=1/4\). For \(\omega_{2}\to 0\), i.e. \(s\to 0\), we have \(f(s)\to\pi\), and so \(Q\propto\int_{0}\omega_{2}^{1-2x}(1-\omega_{2}^{x-1})\ d\omega_{2}\), which is convergent for \(x\leq 1\). For either choice of \(\omega_{2}\), the waveaction and energy equipartition spectra lead to convergence of the collision integral, since the factor \((\partial_{\omega}n_{\omega}^{-1}-\partial_{\omega_{2}}n_{\omega_{2}}^{-1})\) in Eq. (15b) vanishes exactly for any RJ spectrum (19). Thus for \(\omega_{2}\to\infty\), the SLAM converges for power-law spectra with spectral index \(x\in(1/4,\infty)\cup\{0\}\), and for \(\omega_{2}\to 0\) it converges for \(x\in(-\infty,1]\). Otherwise, the SLAM is divergent. These convergence (green) and divergence (red) zones are indicated in Fig. 1(b), for the two choices of \(\omega_{2}\to\infty\) or \(0\), above and below the \(x\) axis respectively. 
The thin green (convergence) strips around the thermodynamic spectra are indicative only, and in reality shrink to the single points \(x=0,1\). We see that the KZ energy cascade spectrum, with \(x=1/2\), gives convergence of the SLAM, whereas the KZ waveaction cascade spectrum, with \(x=1/6\), gives divergence as \(\omega_{2}\to\infty\) (These results were found in Sec. IV.2.2, but now we see them in their full context of convergence or divergence on general power-law spectra). Note that analysing the locality of general power-law spectra is made possible in the SLAM precisely because of the _semilocality_ manifested by the interaction coefficient of the SHE. This analysis is not possible when working with a DAM, because in order to construct a DAM one assumes (without proof) from the outset that interacting waves are superlocal.
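As an aside, the closed form and the limits collected in Appendix B are easy to verify numerically. The following sketch is not part of the analysis above; it assumes SciPy, whose `ellipk`/`ellipe` use the same parameter convention as the definitions of \(K(z)\) and \(E(z)\) in Appendix B.1, and it compares direct quadrature of the definition of \(f(s)\) with the elliptic-integral form and the reciprocity relation.

```python
# Numerical cross-check of the elliptic-integral form of f(s), its
# reciprocity relation, and its small/large-s limits. Illustrative only.
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk, ellipe  # complete elliptic integrals K(m), E(m)

def f_quad(s):
    """Direct quadrature of the defining integral of f(s)."""
    integrand = lambda phi: np.sin(phi)**2 / (1 - 2*s*np.cos(phi) + s**2)**1.5
    val, _ = quad(integrand, 0.0, 2.0*np.pi, limit=200)
    return val

def f_elliptic(s):
    """Closed form of f(s) in terms of complete elliptic integrals."""
    m = 4*s / (1 + s)**2
    return 2*(1 + s)/s**2 * ((1 + s**2)/(1 + s)**2 * ellipk(m) - ellipe(m))

for s in [0.05, 0.3, 0.7, 1.3, 3.0, 10.0]:
    fq, fe = f_quad(s), f_elliptic(s)
    recip = abs(f_quad(1.0/s) - s**3 * fq)
    print(f"s={s:5.2f}  quad={fq:.6f}  elliptic={fe:.6f}  |f(1/s)-s^3 f(s)|={recip:.2e}")

# Limits: f(s)/pi -> 1 as s -> 0, and s^3 f(s)/pi -> 1 as s -> infinity
print(f_quad(1e-3) / np.pi, f_quad(1e3) * 1e9 / np.pi)
```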
2304.05325
Mining the Characteristics of Jupyter Notebooks in Data Science Projects
Nowadays, numerous industries have exceptional demand for skills in data science, such as data analysis, data mining, and machine learning. The computational notebook (e.g., Jupyter Notebook) is a well-known data science tool adopted in practice. Kaggle and GitHub are two platforms where data science communities are used for knowledge-sharing, skill-practicing, and collaboration. While tutorials and guidelines for novice data science are available on both platforms, there is a low number of Jupyter Notebooks that received high numbers of votes from the community. The high-voted notebook is considered well-documented, easy to understand, and applies the best data science and software engineering practices. In this research, we aim to understand the characteristics of high-voted Jupyter Notebooks on Kaggle and the popular Jupyter Notebooks for data science projects on GitHub. We plan to mine and analyse the Jupyter Notebooks on both platforms. We will perform exploratory analytics, data visualization, and feature importances to understand the overall structure of these notebooks and to identify common patterns and best-practice features separating the low-voted and high-voted notebooks. Upon the completion of this research, the discovered insights can be applied as training guidelines for aspiring data scientists and machine learning practitioners looking to improve their performance from novice ranking Jupyter Notebook on Kaggle to a deployable project on GitHub.
Morakot Choetkiertikul, Apirak Hoonlor, Chaiyong Ragkhitwetsagul, Siripen Pongpaichet, Thanwadee Sunetnanta, Tasha Settewong, Vacharavich Jiravatvanich, Urisayar Kaewpichai
2023-04-11T16:30:53Z
http://arxiv.org/abs/2304.05325v1
# Mining the Characteristics of Jupyter Notebooks in Data Science Projects ###### Abstract. Nowadays, numerous industries have exceptional demand for skills in data science, such as data analysis, data mining, and machine learning. The computational notebook (e.g., Jupyter Notebook) is a well-known data science tool adopted in practice. Kaggle and GitHub are two platforms where data science communities are used for knowledge-sharing, skill-practicing, and collaboration. While tutorials and guidelines for novice data science are available on both platforms, there is a low number of Jupyter Notebooks that received high numbers of votes from the community. The high-voted notebook is considered well-documented, easy to understand, and applies the best data science and software engineering practices. In this research, we aim to understand the characteristics of high-voted Jupyter Notebooks on Kaggle and the popular Jupyter Notebooks for data science projects on GitHub. We plan to mine and analyse the Jupyter Notebooks on both platforms. We will perform exploratory analytics, data visualization, and feature importances to understand the overall structure of these notebooks and to identify common patterns and best-practice features separating the low-voted and high-voted notebooks. Upon the completion of this research, the discovered insights can be applied as training guidelines for aspiring data scientists and machine learning practitioners looking to improve their performance from novice ranking Jupyter Notebook on Kaggle to a deployable project on GitHub. Keywords: empirical study, Jupyter Notebooks, data science open-source projects ## 1. Introduction According to (Kula et al., 2017), a company with an investment in big data or data science has shown an increase in productivity of 3% to 7%. Over the past five years, we have observed an increase in various data-driven application deployments to drive business in small- to medium-sized organizations. The integration of software engineering, data analysis, data mining, and machine learning techniques has brought about a high demand in various industries (Bahdan et al., 2017; Kula et al., 2018; Kula et al., 2018). Such integration increases collaboration between data scientists and software engineers in development teams (Kula et al., 2018). Because data science projects differ in nature from software projects, development teams must adjust their practices to the requirements of data science projects. In addition, data analysis, data mining, and machine learning techniques have been applied to improve software development and software maintenance processes (Kalal et al., 2018). To build data science skills, one must practice on various data science projects (Kula et al., 2018; Kula et al., 2018). GitHub and Kaggle1 are two platforms hosting prominent data science communities, which offer data science training resources as well as data science projects for practicing data science skills. Footnote 1: [https://www.kaggle.com/](https://www.kaggle.com/) On GitHub, such resources are offered via multiple projects ranging from training computational notebooks to fully deployed data analytics projects.
On Kaggle, a cloud-based collaborative platform, practitioners conduct and contribute computational notebooks for machine learning competitions on various topics. The common resource for training is a computational notebook. A computational notebook, such as Jupyter Notebook, is a web-based interactive computing environment with executable codes. It allows a user to implement codes, see the results, and provide discussions on the computational notebook results in easier-to-understand data science work. This popularizes the computational notebook as a tool for not only reproducing, but also for tutoring and training purposes. In addition to hosting the collections of computational notebooks, Kaggle also hosts both rewarded and non-rewarded competitions, providing opportunities for individuals or teams to compete or contribute notebooks. Hence, one can expect the computational notebook shared via a project on GitHub to vary from those on Kaggle's competitions. Since both platforms offer resources ranging from tutorial notebooks to top-ranking competition notebooks, some of the notebooks are challenging for a newcomer to understand and learn how to master the art of data science. Kaggle recognizes this problem and helps the beginner by providing courses and guides (a curated list of high-quality resources). Some of the projects on GitHub also provide learning and training resources for beginners. However, only a small portion of community members are considered grandmasters on Kaggle. In addition, the shared data science-related computational notebooks are not always implemented with software engineering practices such as maintainability and readability. To this end, we want to aid and identify key features to help practitioners on Kaggle to increase their data science skills, as shown through the quality of their computational notebooks to the level of public data science projects on GitHub. We begin our work on this project by investigating data science projects on both platforms. For Kaggle, it guides the development of data science practitioners from beginner to expert by having a contribution level of the notebook to identify the high-quality contribution. Kaggle measures the contribution level of the notebook based on the number of votes from experienced practitioners. In its contribution ranking, Kaggle rewards contributors with medals and higher tiers, which reflect consistent and high-quality contributions. Becoming a grandmaster on Kaggle can be a challenging task, especially for newcomers who may face various obstacles in their practice. One such obstacle is the time required to gauge the impact of a notebook, as it takes time to receive upvotes from other users and determine its medal in a competition. Additionally, the process of becoming a notebook grandmaster, who is awarded the highest tier through consistent contributions of high-quality notebooks, requires not only expertise in skills but also a deep understanding and perspective in the field of data science. To the best of our knowledge, there is currently no research on identifying the characteristics of grandmaster-tier notebooks in Kaggle. To help newcomers overcome these challenges, we aim to study the factors that influence the progression of notebooks, with a focus on the characteristics of notebooks created by users who have achieved the grandmaster tier. 
By identifying these characteristics, we can classify notebooks based on their level of expertise (e.g., novice or grandmaster), which will aid newcomers in determining the quality of notebooks. For GitHub, we will identify and study its public data science projects. We will analyse those projects' code qualities and the characteristics similar to those found in the grandmaster notebooks on Kaggle. These will give us an insight into the difference between the data science projects on Kaggle and those on GitHub. To this end, we hope to provide suggestions to improve notebook quality and guidelines to further develop a project from a computational notebook to a deployable project. In summary, given the challenges faced by newcomers in the field of data science and machine learning, this research aims to: * Identify and extract key characteristics of the computational notebooks from Kaggle and GitHub. * Investigate the factors that influence the quality and success of these notebooks. The rest of this registered report is organized as follows. In the next section, we provide the background and related work. We define and explain our research questions in Section 3. Our plan of execution is detailed in Section 4. We discuss the implication of our work in Section 5 and conclude our current work in Section 6. ## 2. Background and Related Work In this section, we explain the background of our study, including the description of a computational notebook, Kaggle, GitHub, and related work. ### Computational notebook A computational notebook, such as Jupyter Notebook and R Markdown, is a coding platform that enables the combination of written text and executable code, with the results of the code being incorporated into the document (Brocker et al., 2017). The notebook document is visible content in the web application associated with the inputs and outputs of the computations, explanatory text, mathematics, images, and rich media representations of objects. There are typically three types of contents, referred to as "cells", which include code cells, markdown cells, and raw cells. Code cells contain executable code (e.g., Python), markdown cells contain texts and formatting, and raw cells contain unformatted texts. These cells can be run independently or together to perform computations, analyse data and write results in a structured and readable manner. Such a coding platform eases the process of reproducibility of analytical tasks and sharing analytical codes. For instance, Jupyter Notebook provides a web-based interactive computing environment that enables users to create a notebook for conducting computations, making it a valuable tool for data analysis and manipulation. According to (Kaggle, 2018), Jupyter is considered the tool choice for data science tasks, with over 2.5 million public Jupyter Notebooks found on GitHub alone. ### Kaggle Kaggle is a cloud-based computational notebook platform that serves as a collaborative space for individuals interested in data science. The platform offers a wide range of tools and resources for data scientists, including access to datasets, competitions, and a community of users to share knowledge and collaborate with. Especially, Kaggle also offers a platform for individuals to build their profile in the field of data science. This can be achieved through participation in various activities such as analyzing datasets, participating in competitions, and contributing to the Kaggle community. 
Through these activities, users can gain visibility and demonstrate their skills and knowledge in data science practices. For example, Kaggle hosts a competition that challenges participants to develop a model for predicting loan defaults using a dataset provided by American Express, with a prize of 100,000 USD awarded to the winner.2 Footnote 2: [https://www.kaggle.com/competitions/amex-default-prediction](https://www.kaggle.com/competitions/amex-default-prediction) As mentioned in the Introduction, apart from rewards from competitions, Kaggle has a ranking system to reflect the contributions of its users. This ranking system consists of different levels, which are achieved through participation in competitions, contributing to the community, and other activities on the platform. The progression through the ranks serves as a way for users to showcase their skills and experience. In this ranking system, a notebook's rank is determined by the number of upvotes received from other users. Thus, users who consistently produce high-quality work have the potential to achieve higher ranks and progress through the rankings within the platform. This incentivises users to produce high-quality work and encourages them to share their knowledge and collaborate with others in the community. There are five ranks in Kaggle: Novice, Contributor, Expert, Master, and Grandmaster. A Novice is a user who is new to Kaggle and has just started to learn and explore the platform. In contrast, Grandmaster is the highest tier in Kaggle's ranking system and the most respected rank in the community. This rank is awarded to users who have consistently produced exceptional work. To reach the Grandmaster tier, a user typically needs to have a certain number of medals, which are awarded for various contributions such as winning competitive notebooks, writing high-quality notebooks, or participating in challenging notebooks. For example, a notebook with five upvotes is awarded a Bronze medal and a notebook with fifty upvotes is awarded a Gold medal. The Grandmaster tier can be achieved by obtaining 15 Gold medals. These incentives thus serve as a way to recognize and acknowledge the high-quality work of the users and their contributions to the community. ### GitHub GitHub provides a version control system for software development and a hosting service for software projects. For data science projects, GitHub can be used to store and share code, data, and documentation in the form of computational notebook files such as Jupyter Notebook files. The ability to store and share computational notebooks on GitHub enables team members to collaborate and track the progress of data science projects. GitHub also contains multiple types of training materials in data science and related areas ranging from a curated list of free books (Boward et al., 2017) to tutorial projects (Kandra et al., 2017). ### Related work Wang et al. (Wang et al., 2018) provide a contribution to the understanding of data science documentation practices on Kaggle based on the textual description in markdown cells and code cells. By considering highly-voted notebooks as a proxy for well-documented notebooks, the authors are able to gain insights into what makes a notebook well-received by the Kaggle community.
According to the preliminary findings of the study, there appears to be a difference between the top-voted notebooks by the Kaggle community and the top-ranked ones on the competition leaderboard, which is determined by the performance metrics solution and the competition's objective, specifically the accuracy of the model's predictions. This suggests that the Kaggle community places a significant value on factors beyond performance, such as clear documentation, reproducibility, and ease of use. Based on their findings, the authors of the study formulated the hypothesis that the high-voted notebooks receive a high number of upvotes due to their high levels of readability and comprehensive documentation, which may have contributed to their popularity among the Kaggle community. The studying of 80 high-voted notebooks selected from the top 1% of all notebooks submitted in the two most popular Kaggle competitions shows that the textual description in those notebooks provided a comprehensive description of the code cells. However, our work focuses on a broader range of notebook characteristics (e.g., code quality) and incorporates data from GitHub, which provides a more comprehensive understanding of the factors that contribute to the success of a notebook in data science. Furthermore, our work also relates to previous empirical studies that have investigated platforms that provide rewards or incentives to participants, such as monetary compensation or a reputation score, since the user ranking advancement system in Kaggle can be compared to bug bounty programs, as both offer incentives for contributors. Walshe et al. (Walshe et al., 2017) conducted a study on the bug bounty or vulnerability reward program (VRP), which involves offering rewards to white hat hackers to locate and report vulnerabilities in software. The VRP approach has become increasingly popular as a way to identify security flaws in software systems. The study reports that the amount of monetary reward offered did not influence the number of vulnerability reports submitted by the hackers. Instead, they found that the hackers were motivated by their reputation posted on the websites where they participated. In addition, the study by Kanda et al. (Kanda et al., 2017) found that projects with bounties on the Bountysource platform are more likely to be solved than those without bounties. Further research by Zhou et al. (Zhou et al., 2018, 2019) showed that the bounty value is not the most significant factor that attracts contributors to work on the issues, as some contributors may be motivated by their own interests or desires rather than solely by rewards or monetisation. ## 3. Research questions As discussed in the introduction, our goals are to aid and identify key features to help practitioners on Kaggle increase their data science skills. Our first step toward this goal is investigating data science projects on both platforms. Specifically, the quality of data science projects on Kaggle and GitHub can vary widely due to the diverse range of projects and contributors with varying levels of skill and experience, both platforms offer valuable resources for data science practitioners to learn from and improve their skills. By comparing the characteristics of computational notebooks between these two collaborative platforms, practitioners can gain insight into the notebook's characteristics reflecting the practices for organizing and presenting data science work. 
To this end, we define three following exploratory research questions. * **RQ1**: _What are the important features determining the ranking of notebook contributors?_ * **RQ2**: _Do the changing of notebook characteristics correspond to the notebook contributor ranking progression?_ * **RQ3**: _Do the characteristics of grand master's notebooks correlate with the highly popular notebooks in GitHub projects?_ To address the first two research questions, we will identify and extract features from Jupyter Notebook on the Kaggle platform. Then, we will track the changes in these features as the contributors improved from novices to grand masters. For the third research question, we plan to investigate whether findings can be applied to open-source data science projects using Jupyter Notebooks outside of Kaggle's context. We will sample data science projects from GitHub containing Jupyter Notebooks. Then, we will repeat the process in RQ1 and study the correlation by using a correlation test that matches the data. We provide the full details of our plans, including the list of targeting features and GitHub project selection criteria, in the next section. ## 4. Execution Plan In this section, we outline our approach to executing our research. We have chosen an exploratory case study design that adheres to ACM guidelines for the experiment (Koggle, 2017). Figure 1 presents an overview of our research study to address and answer our research questions. In this research execution plan, the first phase focuses on the collection, preprocessing, and labelling of the data. We test our data quality two ways. For the preprocessed data, we create the SQL assertion tests. For the labelling, we performed manual validation. In the subsequent phase, we will extract the features that characterize computational notebooks. This extraction process involves four distinct groups of features, including general notebook attributes, features related to textual descriptions, features related to visualizations, and features related to code quality. The next step is to construct models and conduct the analysis. The models will be designed to answer the research questions and test the hypothesis. The analysis will be performed using various statistical techniques and machine learning techniques, such as regression analysis and random forests. ### Data collection Our research plan involves the examination of computational notebooks data from two sources: Kaggle and GitHub. For Kaggle, we are using the KGtorrent dataset (Koggle, 2017), while the data from GitHub will be collected by ourselves. KGtorrent (Koggle, 2017) is a comprehensive dataset that consists of computational notebooks (i.e., code kernels) with accompanying metadata collected from Kaggle. The metadata contained in KGTorrent includes information about the notebooks, user profile achievements, and details about the competitions hosted on Kaggle. In addition, to gather computational notebook data from GitHub, we will collect open-source projects containing computational notebook files (e.g., Jupiter Notebook files) that meet our criteria through GitHub's API. ### Data preprocessing and labelling #### 4.2.1. 
KGTorrent dataset In order to use the data from KGTorrent (Koggle, 2017), we have investigated the dataset and performed a filtering process as the following criteria: 1) the notebooks must be from closed competitions, 2) the notebook's metadata and its corresponding notebook files must both exist in the dataset, and 3) the information of a contributor who created a notebook must be available in the dataset. Table 1 shows the number of notebooks that passed our filter in each tier based on the data recorded in the Kaggle meta data (i.e., user profile). In total, we can retrieve 11,939 notebooks. In this study, the notebook creator tiers are used as a proxy to indicate the quality of the computational notebooks, as the progression and achievement of the creator tier are representatives of community recognition and appreciation. These notebooks will be labelled as tier levels from 0 to 4 (i.e., target variable) based on the classification used in the Meta Kaggle database: 0 represents the rank of a novice, 1 represents a contributor, 2 represents an expert, 3 represents a master, and 4 represents a grandmaster. However, it is important to note that using the creator tier recorded in the metadata may not be entirely accurate, as the rank of the creator may have changed at the time the notebook was created. We will then perform notebook labelling by considering the progression of the user over time. In this method, we chronologically categorize the notebooks produced by each user and label them based on their tier criteria by processing the user's profile changelog recorded in the KGTorrent dataset. For example, a notebook is labelled as "grandmaster" tier if the creator had accumulated at least 15 gold medals at the time of creating the notebook. We then incorporate a correlation analysis between contributor rank and notebook up-votes and a manual validation on labelling in our study. This analysis can provide valuable insights into the relationship between these two variables and help to validate the appropriateness of using the extracted contributor rank. We acknowledge that the data is imbalanced. To address this concern, we aim to apply various techniques, including resampling methods and stratified sampling during model creation. Additionally, we will employ machine learning algorithms that allow for adjusting class weights, which can help account for the data imbalance. Furthermore, we will focus on evaluation metrics that are robust to class imbalance, ensuring a more reliable assessment of our model's performance. #### 4.2.2. GitHub data For our study, we will collect open-source data science projects hosted on GitHub using their API. The projects must satisfy the following criteria: 1) the project must have at least ten stars to filter out trivial or toy projects, and 2) the projects must contain Jupyter Notebooks as the majority of all files. For example, the ratio of Jupyter Notebook files must be greater than 60%. This cutoff threshold will be determined based on the distribution of the Jupyter Notebook file ratios in the selected projects. After obtaining the initial list of projects, we will use the number of stars and forks as proxies (i.e., target variable), as opposed to using contributor ranks in the Kaggle notebooks. The numbers of stars and the number of forks reflect popularity as perceived by the GitHub community (Koggle, 2017), which is similar to the ranking mechanisms in Kaggle. 
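As an illustration of this selection step, the sketch below queries GitHub's public repository-search API with the two criteria above (at least ten stars, Jupyter Notebook as the dominant language); the endpoint and query qualifiers are GitHub's, whereas the helper name and the follow-up filtering by notebook-file ratio are our illustrative assumptions rather than the final mining pipeline.

```python
# Illustrative sketch: query GitHub's repository-search API for candidate
# data science projects (>= 10 stars, Jupyter Notebook as the main language).
# The notebook-file-ratio criterion described above would still be applied
# afterwards by inspecting each repository's file tree.
import requests

GITHUB_API = "https://api.github.com/search/repositories"
QUERY = "language:jupyter-notebook stars:>=10"  # thresholds stated in Section 4.2.2

def search_candidate_repos(page=1, token=None):
    headers = {"Accept": "application/vnd.github+json"}
    if token:  # unauthenticated requests are heavily rate-limited
        headers["Authorization"] = f"Bearer {token}"
    params = {"q": QUERY, "sort": "stars", "order": "desc",
              "per_page": 100, "page": page}
    resp = requests.get(GITHUB_API, headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    items = resp.json()["items"]
    return [(r["full_name"], r["stargazers_count"], r["forks_count"]) for r in items]

for name, stars, forks in search_candidate_repos()[:5]:
    print(f"{name}: {stars} stars, {forks} forks")
```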
For the number of stars, we followed the same criteria suggested in (Koggle, 2017), where the authors suggested ten stars as a good compromise between the quality of data and the time required to mine the GitHub projects. \begin{table} \begin{tabular}{l c c} \hline \hline **Total number of** & **Numbers** & **Percentage** \\ \hline Notebooks in KGTorrent & 248,761 & \\ Retrieved notebooks & 11,939 & \\ Novice & 6,591 & 55.21\% \\ Contributor & 3,388 & 28.38\% \\ Expert & 1,674 & 14.02\% \\ Master & 215 & 1.80\% \\ Grandmaster & 71 & 0.59\% \\ \hline \hline \end{tabular} \end{table} Table 1. Notebooks retrieved from KGTorrent We will control the quality of our GitHub project sampling as follows. First, to mitigate the threat to validity that stars are given to a GitHub project as a whole and may not fully relate to the Jupyter Notebook files, we have set our selection criteria to only projects that contain Jupyter Notebook files as the majority compared to all other files. Second, we will apply the technique presented in Munia et al.'s study (Munia et al., 2018) to remove toy projects or tutorials and keep only engineered software projects. ### Features The summary of relevant features for all research questions is shown in Table 2. We will extract four groups of features: 1) the notebook features indicate the basic information of a notebook, e.g., the notebook's tags, 2) the code quality metrics describe the features of the code cells contained in the notebook, e.g., code complexity, 3) the textual description-related features capture the characteristics of the markdown cells, e.g., readability score, and 4) the visualization-related features reflect the use of data visualization techniques appearing in the notebook, e.g., the number of visualizations. ### Analysis method For each research question, we explain our plan to analyse the data using statistical testing, feature importance, and correlation below. **RQ1: What are the important features determining the ranking of notebook contributors?** For this research question, we first apply data exploration techniques on the whole dataset for all the features listed in Table 2. The methods include the frequency count, mean, variance, outlier tests, and correlation (such as Pearson Correlation) for all pairs. This will give us an overview of the contributed notebooks. Then, we partition the data according to the ranking of the notebook contributors. For each group, we perform the same exploration techniques to identify features that can help distinguish the notebooks in each group. We will perform feature-importance studies based on the classification task using logistic regression, random forests, or XGBoost. **RQ2: Do the changing of notebook characteristics correspond to the notebook contributor ranking progression?** To answer this research question, we leverage the fact that notebooks are labelled based on the chronological rank progression of contributors. This enables us to observe whether the changes in feature values across contributor ranks are significant. We can utilize the features listed in Table 2 for our analysis. Specifically, we first identify contributors who created notebooks while they were at various rank levels. Then, for each individual contributor's ranking change, we can apply statistical techniques to investigate whether the characteristics of notebooks differ when they advance in rank. In addition, we can analyse the trend of each feature throughout the ranking change using regression.
We can validate this by creating a classification task based on the key features. After we train a prediction model, we can ask: if a contributor changes its values on the identified features following our results, can we correctly predict whether that contributor will be promoted to a higher rank? **RQ3: Do the characteristics of grandmaster's notebooks correlate with the highly-popular notebooks in GitHub projects?** We answer this RQ by applying the study performed in RQ1 to open-source data science projects in GitHub. We will collect the GitHub projects as explained in Section 4.2.2. After retrieving the set of both highly-popular and unpopular data science projects based on the statistical distribution of their popularity (i.e., stars and forks), we will extract the features (Table 2), except Kaggle's specific features, including Tier, Dataset, and Tag. While we use the contributor's tier in Kaggle notebooks, we will use the number of stars and forks as proxies for the quality of data science GitHub projects. Then, we will perform the statistical test among each feature of notebooks from Kaggle and GitHub. For the highly-popular Kaggle grandmaster notebooks and data science GitHub projects, we define the null hypothesis as \(H_{0}\): there is no difference between the feature of the grandmaster's notebooks and highly-popular GitHub data science project notebooks. For the low-ranking Kaggle notebooks and unpopular data science GitHub projects, we define the null hypothesis as \(H_{0}\): there is no difference between the feature of the low-ranking notebooks and unpopular GitHub data science project notebooks. Then, we test the two hypotheses by performing a statistical test of the data from each feature between the Kaggle grandmaster notebooks and GitHub data science projects by following the guidelines by du Prel et al. (2018). First, in the case of the features that are continuous, we will choose between the t-test and the Mann-Whitney U test depending on the normality of the data. We will use the level of significance (\(\alpha\)) at 0.05. Second, in the case of the features that are categorical, we will choose the chi-square test. We will use the same \(\alpha\) at 0.05. We also plan to compare the set of deterministic features to determine the notebook's popularity between Kaggle and GitHub. Figure 1. Overview of our exploratory study ## 5. Implications and Impact This study has the following implications on both the researchers and practitioners. **For researchers.** The findings from this study aid the understanding of researchers on the important characteristics to determine good computational notebooks. For instance, answering RQ1 can provide concrete features that can improve the quality of a notebook. This can lead to automation, e.g., an automated tool or collection of best practices. Moreover, the findings of important characteristics can also be used to select criteria for Jupyter Notebooks in future studies. Additionally, RQ3 studies the grandmaster by comparing notebook quality against traditional metrics. Due to the documentation flexibility of notebooks, the results do provide guidelines on the relationship between documentation and executable code, cross-cutting the field of software documentation and code analysis.
\begin{table} \begin{tabular}{l l l l} \hline \hline **Group** & **Feature Name** & **Description** & **Rationale** \\ \hline Notebook attributes & Dataset & Whether the dataset used in a notebook is the same one as the dataset provided in the competition & The alignment between the used dataset, the notebook's tags, and the competition's tags may describe the objective of a notebook. Good-quality notebooks may have a strong alignment among these variables. \\ & Notebook tag & List of tags of a notebook & \\ & Competition tag & List of tags of a competition associated with a notebook & \\ & Cosine sim tags & The cosine similarity score between the notebook's tags and the competition's tags & \\ \hline Code quality & Number of code cells\({}^{*}\) & The number of code cells in a notebook & For data science, code quality affects the reproducibility and verifiability of the results in a notebook. For software engineering, code quality affects the understanding and maintainability of the software and encourages collaboration, which is a significant concern in software engineering. Overall, higher code quality can lead to better sources for training. Note that some features are at a lower level (e.g., cyclomatic complexity). \\ & Number of code lines\({}^{*}\) & The total number of code lines in a notebook & \\ & Number of comment lines & The total number of code comment lines in a notebook & \\ & Avg. code lines per cell & The mean number of code lines per code cell & \\ & Number of functions & The number of functions declared in a notebook & \\ & Cyclomatic complexity & The score reflecting the cyclomatic complexity of the code & \\ & Cognitive complexity & The score reflecting the understandability of the code based on its control flow, considering factors such as nesting, branching, and the use of logical operators, which contribute to the difficulty of comprehending the code & \\ & Duplication blocks & The number of duplicated code blocks in a notebook & \\ & Duplication lines & The number of duplicated code lines in a notebook & \\ & Code smell & The number of identified code smell issues & \\ & Technical debt & The amount of effort required to fix all code smells & \\ & Reliability & The score reflecting the reliability of the code & \\ & Vulnerability & The number of vulnerability issues & \\ & Security & The score reflecting the security rating of the code & \\ \hline Textual descriptions & Number of markdown cells\({}^{*}\) & The number of markdown cells in a notebook & Readability metrics and other textual features can help better understand notebooks. Initial findings suggest that in high-voted notebooks, the quantity and quality of the textual description are much higher. \\ & Number of markdown lines & The number of lines in markdown cells in a notebook & \\ & Flesch score & The readability metric computed from sentence length and syllables/words & \\ & Number of sections & The number of headers in markdown cells in a notebook & \\ & Avg. sentences per cell & The mean number of sentences per markdown cell & \\ \hline Visualizations & Number of visualizations & The number of visualizations used for visualizing data & According to Settewong et al. (2018), there are different categories of visualizations used in notebooks. \\ & Number of imported images & The number of imported images and figures in a notebook & \\ & Visualization libraries & List of the imported libraries used in visualizations & \\ & Visualization functions & List of functions used for creating visualizations, e.g., bar, line & \\ \hline \hline \end{tabular} \end{table} Table 2. Features identified and extracted from Jupyter notebooks (\({}^{*}\) indicates that this feature was also used in (Kang et al., 2018)) **For practitioners and developers.** The findings (RQ2) from this study can be used as guidelines for practitioners and developers when developing their Jupyter Notebooks. In terms of learning, our findings can lead novice practitioners to improve their skills by focusing on the identified important characteristics. ## 6. Conclusion and Future Work We present our exploratory study of mining Jupyter Notebooks on Kaggle and GitHub. We plan to apply statistical and data analysis techniques to the four aspects of extracted features to identify characteristics influencing the quality of the notebooks. For future work, based on our findings, we will create a training guideline for data science practitioners and developers. This guideline will help them to improve the quality of the notebooks and advance through the contribution ranking on Kaggle. Then, we will conduct an evaluation of the effectiveness of this guideline using a user study. ## Acknowledgment This research project is supported by Mahidol University.
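As a concrete illustration of the feature-extraction step planned above (Section 4.3, Table 2), the following sketch reads a notebook file with the nbformat library and computes a small subset of the planned features; the helper name and the chosen subset are ours and only indicative of the planned tooling.

```python
# Minimal sketch: extract a few Table 2 features from a single .ipynb file.
# Only basic cell- and line-count features are shown; names are illustrative.
import nbformat

def extract_basic_features(path):
    nb = nbformat.read(path, as_version=4)
    features = {"code_cells": 0, "markdown_cells": 0,
                "code_lines": 0, "comment_lines": 0, "markdown_lines": 0}
    for cell in nb.cells:
        lines = cell.source.splitlines()
        if cell.cell_type == "code":
            features["code_cells"] += 1
            features["code_lines"] += len(lines)
            features["comment_lines"] += sum(1 for l in lines if l.lstrip().startswith("#"))
        elif cell.cell_type == "markdown":
            features["markdown_cells"] += 1
            features["markdown_lines"] += len(lines)
    if features["code_cells"]:
        features["avg_code_lines_per_cell"] = features["code_lines"] / features["code_cells"]
    return features

# Example usage (the file name is hypothetical):
# print(extract_basic_features("titanic-eda.ipynb"))
```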
2305.12570
Generalizable synthetic MRI with physics-informed convolutional networks
In this study, we develop a physics-informed deep learning-based method to synthesize multiple brain magnetic resonance imaging (MRI) contrasts from a single five-minute acquisition and investigate its ability to generalize to arbitrary contrasts to accelerate neuroimaging protocols. A dataset of fifty-five subjects acquired with a standard MRI protocol and a five-minute transient-state sequence was used to develop a physics-informed deep learning-based method. The model, based on a generative adversarial network, maps data acquired from the five-minute scan to "effective" quantitative parameter maps, here named q*-maps, by using its generated PD, T1, and T2 values in a signal model to synthesize four standard contrasts (proton density-weighted, T1-weighted, T2-weighted, and T2-weighted fluid-attenuated inversion recovery), from which losses are computed. The q*-maps are compared to literature values and the synthetic contrasts are compared to an end-to-end deep learning-based method proposed by literature. The generalizability of the proposed method is investigated for five volunteers by synthesizing three non-standard contrasts unseen during training and comparing these to respective ground truth acquisitions via contrast-to-noise ratio and quantitative assessment. The physics-informed method was able to match the high-quality synthMRI of the end-to-end method for the four standard contrasts, with mean \pm standard deviation structural similarity metrics above 0.75 \pm 0.08 and peak signal-to-noise ratios above 22.4 \pm 1.9 and 22.6 \pm 2.1. Additionally, the physics-informed method provided retrospective contrast adjustment, with visually similar signal contrast and comparable contrast-to-noise ratios to the ground truth acquisitions for three sequences unused for model training, demonstrating its generalizability and potential application to accelerate neuroimaging protocols.
Luuk Jacobs, Stefano Mandija, Hongyan Liu, Cornelis A. T. van den Berg, Alessandro Sbrizzi, Matteo Maspero
2023-05-21T21:16:20Z
http://arxiv.org/abs/2305.12570v1
# Generalizable synthetic MRI with physics-informed convolutional networks ###### Abstract In this study, we develop a physics-informed deep learning-based method to synthesize multiple brain magnetic resonance imaging (MRI) contrasts from a single five-minute acquisition and investigate its ability to generalize to arbitrary contrasts to accelerate neuroimaging protocols. A dataset of fifty-five subjects acquired with a standard MRI protocol and a five-minute transient-state sequence was used to develop a physics-informed deep learning-based method. The model, based on a generative adversarial network, maps data acquired from the five-minute scan to "effective" quantitative parameter maps, here named q*-maps, by using its generated PD, T\({}_{1}\), and T\({}_{2}\) values in a signal model to synthesize four standard contrasts (proton density-weighted, T\({}_{1}\)-weighted, T\({}_{2}\)-weighted, and T\({}_{2}\)-weighted fluid-attenuated inversion recovery), from which losses are computed. The q*-maps are compared to literature values and the synthetic contrasts are compared to an end-to-end deep learning-based method proposed by literature. The generalizability of the proposed method is investigated for five volunteers by synthesizing three non-standard contrasts unseen during training and comparing these to respective ground truth acquisitions via contrast-to-noise ratio and quantitative assessment. The physics-informed method was able to match the high-quality synthMRI of the end-to-end method for the four standard contrasts, with mean \(\pm\) standard deviation structural similarity metrics above \(0.75\pm 0.08\) and peak signal-to-noise ratios above \(22.4\pm 1.9\) and \(22.6\pm 2.1\). Additionally, the physics-informed method provided retrospective contrast adjustment, with visually similar signal contrast and comparable contrast-to-noise ratios to the ground truth acquisitions for three sequences unused for model training, demonstrating its generalizability and potential application to accelerate neuroimaging protocols. **Key words: Synthetic MRI, quantitative MRI, deep learning, generative adversarial network** ## 1 Introduction Conventionally, multiple complementary contrast-weighted magnetic resonance imaging (MRI) images are separately acquired for disease diagnosis. For example, standard brain imaging examinations include PD-weighted (PDw), T\({}_{1}\)-weighted (T\({}_{1}\)w), T\({}_{2}\)-weighted (T\({}_{2}\)w), and T\({}_{2}\)-weighted fluid-attenuated inversion recovery (T\({}_{2}\)-FLAIR) contrasts among many other acquisitions [1]. The qualitative nature of conventional MRI impedes the comparison of signal intensities between examinations from different time points, patients, and vendors. Although different scanners output qualitatively similar images, they exhibit considerable variation when viewed quantitatively [2]. Over the last decades, attempts to make MRI quantitative have been pursued. Quantitative MRI (qMRI) aims to facilitate standardized MRI-based measurements, reduce bias, and increase reproducibility [3]. In the last decades, efforts have been dedicated to developing qMRI techniques mapping biophysical tissue parameters such as T\({}_{1}\), T\({}_{2}\), and PD (q-maps) within a clinically viable acquisition and reconstruction time, i.e., 3-5 minutes or less. Examples of other tissue parameters are T\({}_{2}\)*, apparent diffusion coefficients, flow, perfusion, and stiffness, but these are not considered in this work.
Three examples of such fast qMRI techniques are MR fingerprinting (MRF) [4], Magnetic Resonance Spin TomogrAphy in Time-domain (MR-STAT) [5], and fitting of a signal model to multi-dynamic multi-echo (MDME) MRI [6]. To still provide radiologists with the standard contrasts routinely used for neurological diagnoses, the reconstructed q-maps can be used to generate so-called "synthetic MRI" (synthMRI), making use of signal models [7]. This way, synthMRI provides intrinsically registered, multi-contrast images from a single scan, reducing examination time compared to conventional MRI. MRF [8, 9], MR-STAT [10, 11], and model fitting to MDME data [6] have all been explored to facilitate synthMRI. The latter commercial solution has even been implemented in clinics. Several studies showed that this commercial solution may facilitate patients' diagnosis on par with conventional contrasts [12], with applications for gliomas [13], brain metastases [14], Sturge-Weber syndrome [15], multiple sclerosis [16], and stroke [17]. However, synthetic T\({}_{2}\)-FLAIR contrasts remain challenging for synthMRI [18, 19]. The quality of synthetic T\({}_{2}\)-FLAIR contrasts can be hindered by oversimplified signal models that do not model effects like partial volume, magnetization transfer, and flow [16, 20, 21]. The subsequent artifacts can result in coarse hyperintensities and a lack of contrast between the lesion and surrounding tissues, making it necessary to acquire an additional conventional T\({}_{2}\)-FLAIR to confirm the diagnosis and lose the promised decrease in scan time from synthMRI [22]. Deep learning (DL) is a subfield of machine learning that focuses on developing models that learn abstract data representations using data-driven training strategies [23]. DL has achieved great success for various medical imaging tasks such as segmentation [24], MRI reconstruction [25], super-resolution [26], and image modality conversion [27] and has recently been proposed to facilitate synthMRI. Generative adversarial networks (GANs) [28] are a class of DL that has shown promising results for a variety of medical applications, particularly for image synthesis and image-to-image translation [29]. Specifically, due to its fast inference times and representation capability, it has the potential to overcome imperfections in qMRI reconstructions and signal models in a data-driven manner [30, 31]. So far, only DL-based synthMRI methods that exclusively synthesize specific contrasts provided during training have been extensively investigated. These methods cannot generalize to unseen contrasts during training, meaning additional acquisitions are still needed if contrast tweaking or non-standard contrasts are necessary to answer clinical demands. This work1 investigates the use of DL to obtain synthMRI from a single full-brain, five-minute scan for four routinely acquired contrasts and generalizability to unseen contrasts during training. We propose incorporating a physics-based signal model into the framework to achieve this generalizability. We will compare the physics-informed DL-based synthMRI constrasts to separately acquired ground truths and investigate whether generalizability to contrasts unseen during training for a volunteer is feasible. Footnote 1: Code will be made publicly available on [https://gitlab.com/computational-imaging-lab/qstarMRI](https://gitlab.com/computational-imaging-lab/qstarMRI) ## 2 Theory The standard synthMRI approach (Fig. 
1, solid black arrows) consists of a qMRI reconstruction that results in q-maps and a voxel-wise signal model to synthesize contrasts from the q-maps. Various qMRI techniques have been proposed, and generally, an analytical solution to the Bloch equations is used as the signal model [6, 8, 10], as described by: \[\mathrm{S}=\mathrm{PD}\cdot e^{-\mathrm{TE/T_{2}}}\cdot\frac{1-\left[1-\cos\left(\theta\right)\right]e^{-\mathrm{TI/T_{1}}}-\cos\left(\theta\right)e^{-\mathrm{TD/T_{1}}}}{1-\cos\left(\mathrm{FA}\right)\cos\left(\theta\right)e^{-\mathrm{TD/T_{1}}}} \tag{1}\] with echo time \(\mathrm{TE}\), saturation pulse angle \(\theta\) (\(\theta=180^{\circ}\) for inversion recovery (IR) sequences, otherwise \(\theta=0^{\circ}\)), inversion time \(\mathrm{TI}\), delay time \(\mathrm{TD}\) (\(\mathrm{TD}=\mathrm{TR}-\mathrm{ETL}\cdot\mathrm{ESP}\) for turbo-spin echo (TSE) sequences, \(\mathrm{TD}=\mathrm{TR}\) for spin-echo (SE) sequences, with ETL = echo train length and ESP = echo spacing), and flip angle FA [6, 32]. Figure 1: **Schematic representation of possible synthMRI approaches.** The standard synthMRI approach (solid black arrows) starts with qMRI reconstruction, from which synthMRI is obtained via a signal model. End-to-end DL-based methods (dotted red arrow) have been proposed to skip the qMRI reconstruction and signal model. Our proposed physics-informed method (striped blue arrows) aims to address the lack of generalizability of the end-to-end approach by outputting effective q-maps (q*-maps) and feeding these to the signal model to obtain synthMRI. Recently, supervised end-to-end DL-based methods (Fig. 1, dotted red arrow) have been proposed [30, 31], which train a neural network (NN) to directly synthesize contrasts from the acquired data by optimizing the following objective function: \[\boldsymbol{w}^{\star}=\arg\min_{\boldsymbol{w}}\mathcal{L}\big{(}\boldsymbol{y},\mathrm{NN}(\boldsymbol{x},\boldsymbol{w})\big{)} \tag{2}\] with weights \(\boldsymbol{w}\), loss function \(\mathcal{L}\), contrasts \(\boldsymbol{y}\), and acquired data \(\boldsymbol{x}\). For example, K. Wang et al. [30] proposed synthesizing T\({}_{1}\)w, T\({}_{2}\)w, and T\({}_{2}\)-FLAIR contrasts from MRF acquisition data using DL, allowing the model to learn physiological effects in a data-driven manner. However, this restricts the synthesis during inference to only the contrasts \(\boldsymbol{y}\), which are part of the paired dataset. Synthesizing different contrasts would require additional MRI scans to expand the dataset, which can be very expensive [33]. Furthermore, the NN would need to be retrained and re-evaluated with the new dataset.
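For reference, the signal model of Eq. (1) is computationally trivial to evaluate. The sketch below (NumPy; the sequence settings and PD/T\({}_{1}\)/T\({}_{2}\) values are illustrative placeholders, not data or parameters from this study) shows a voxel-wise synthesis of a FLAIR-like inversion-recovery TSE contrast.

```python
# Minimal voxel-wise implementation of the signal model in Eq. (1).
# Sequence parameters below sketch a FLAIR-like IR-TSE; the PD/T1/T2 values
# are rough placeholders (white matter, gray matter, CSF), not q*-maps.
import numpy as np

def synth_signal(pd, t1, t2, tr, te, fa_deg, ti=None, etl=1, esp=0.0, tse=True):
    """Evaluate Eq. (1). Times in ms, flip angle in degrees.
    ti=None means no inversion pulse (theta = 0)."""
    theta = np.pi if ti is not None else 0.0       # 180 degrees for IR sequences
    ti = 0.0 if ti is None else ti
    td = tr - etl * esp if tse else tr             # delay time TD
    fa = np.deg2rad(fa_deg)
    e_td = np.exp(-td / t1)
    num = 1.0 - (1.0 - np.cos(theta)) * np.exp(-ti / t1) - np.cos(theta) * e_td
    den = 1.0 - np.cos(fa) * np.cos(theta) * e_td
    return pd * np.exp(-te / t2) * num / den

pd_map = np.array([0.70, 0.85, 1.00])       # [WM, GM, CSF], arbitrary units
t1_map = np.array([800.0, 1300.0, 4000.0])  # ms
t2_map = np.array([80.0, 100.0, 2000.0])    # ms

flair = synth_signal(pd_map, t1_map, t2_map,
                     tr=10000, te=120, fa_deg=90, ti=2800, etl=24, esp=9.6)
print(flair)  # the long-T1 CSF entry is suppressed relative to WM/GM
```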
However, our physics-informed DL-based network is trained to incorporate the signal model imperfections; effects that the signal model does not incorporate can still be captured in a data-driven manner via the q*-maps. Because of the effective nature of the q*-maps, their quantitative PD, T\({}_{1}\), and T\({}_{2}\) values may differ from q-maps reconstructed using standard qMRI methods, which could harm synthMRI quality. We hypothesize that with this physics-informed training setup, we can use the q*-maps to provide generalizability to unseen contrasts during inference by varying the desired sequence parameters in the signal model. Additionally, contrast synthesis via q*-maps is more interpretable compared to the "black box" end-to-end methods, which improves trust and conduces clinical adaption. ## 3 Methods ### Data acquisition and processing In this study, we collected data from forty patients (brain tumor (11), stroke (10), epilepsy (10), and multiple sclerosis (9)) and ten volunteers (10) included in an institutional review board-approved study to assess the clinical usability of MR-STAT-based synthMRI [11]. Subjects were imaged on a 3 T scanner (Ingenia MR-RT, Philips, Best, The Netherlands) and 15-channel head coil (HeadSpine, Philips, Best, The Netherlands). The acquired sequences included MR-STAT [5, 34] and four standard neuro contrast acquisitions (PDw, T\({}_{1}\)w, T\({}_{2}\)w, and T\({}_{2}\)-FLAIR), whose imaging protocols are summarized in Tab. 1. The MR-STAT sequence is a five-minute transient-state multi-2D spoiled gradient-echo with a slowly varying flip angle preceded by a non-selective 180\({}^{\circ}\) inversion pulse. Flip angle variations were subdivided into five acquisition subsets, obtaining five separate k-spaces. Two stroke patients were excluded due to severe motion artifacts in the standard contrast acquisitions. To assess the feasibility of the physics-informed approach to synthesize unseen contrasts, five additional volunteers were imaged with the MR-STAT, four standard contrasts, and three different sequences with increasing deviation from the standard contrasts to assess the framework flexibility (also summarized in Tab. 1): (i) T\({}_{12}\)w as an intermediate contrast between the T\({}_{1}\)w and T\({}_{2}\)w contrasts; (ii) TI400 as a T\({}_{2}\)w IR sequence like the T\({}_{2}\)-FLAIR contrast, but with a different TI; (iii) a double IR (DIR) [35] as a completely new sequence. The 15-channel MR-STAT data (each capturing five k-spaces) were compressed into a single virtual coil using singular value decomposition [36]. For pre-processing purposes, q-maps were reconstructed using the MR-STAT framework [5], from which "MR-STAT contrasts" (PDw, T\({}_{1}\)w, T\({}_{2}\)w, T\({}_{2}\)-FLAIR) were synthesized using the signal model [10]. White matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) were segmented from the MR-STAT contrasts using the predefined tissue probability maps in SPM12 (v7771) [37]. All other pre-processing steps were performed using Python (v3.7.4) [38]. The conventional contrasts were normalized by scaling the WM intensity mode to the average mode among all subjects and intensities were clipped based on estimated values that clipped most of the highest skull intensities. To simplify the contrast synthesis starting from the image domain, the five complex k-spaces of the virtual coil were fast Fourier transformed, creating five complex "fast Fourier transform (FFT) images". 
The magnitudes of the FFT images were min-max scaled to [0, 1] per volume, and the real and imaginary components were used as separate real-valued input channels. \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline & _Acquisition name_ & _TR_ & _TE_ & _TI_ & _FA_ & _ETL_ & _ESP_ & _TA_ \\ & & [ms] & [ms] & [ms] & [degrees] & [-] & [ms] & [min:sec] \\ \hline **MR-STAT acquisition** & Multi2D spoiled Cartesian GRE & 8.9 & 4.7 & N/A & 0-90 & N/A & N/A & 4:50 \\ \hline **Standard contrasts** & T\({}_{1}\)w MS SE & 864 & 14 & N/A & 70 & 1 & N/A & 3:16 \\ & T\({}_{2}\)w MS TSE & 3254 & 80 & N/A & 90 & 15 & 10 & 1:44 \\ & PDw MS TSE & 2800 & 20 & N/A & 90 & 14 & 9 & 1:41 \\ & T\({}_{2}\)-FLAIR MS TSE & 10000 & 120 & 2800 & 90 & 24 & 9.6 & 3:40 \\ \hline **Unseen contrasts** & T\({}_{1}\)/T\({}_{2}\)w MS TSE & 2000 & 50 & N/A & 70 & 14 & 6.7 & 3:00 \\ & TI400 MS TSE & 10000 & 120 & 400 & 90 & 24 & 9.6 & 3:40 \\ & DIR MS TSE & 10608 & 25 & 325/3400* & 90 & 17 & 6.8 & 5:39 \\ \hline \hline \end{tabular} **Abbreviations: GRE = gradient-echo; MS = multi-slice; DIR = double inversion recovery; TSE = turbo spin-echo; TR = repetition time; TE = echo time; TI = inversion time; FA = flip angle; ETL = echo train length; ESP = echo spacing; TA = acquisition time. *Two inversion pulses are applied with different TIs** \end{table} Table 1: **Sequence parameters of the MR-STAT acquisition, standard contrasts, and unseen contrasts.** For all sequences, geometric parameters were kept constant: field of view of \(224\)x\(224\)x\(134\) mm\({}^{3}\), acquisition and reconstruction resolution of \(1\)x\(1\)x\(3\) mm\({}^{3}\), and \(30\) slices with \(1.5\) mm gaps for all scans. We compared using raw FFT as input against using MR-STAT q-maps in a preliminary analysis reported in the supplementary materials, demonstrating that starting from raw FFT is beneficial for synthesis. Based on this study, only investigations using raw FFT images as input are therefore reported. Finally, the conventional contrasts were rigidly registered per slice to their respective MR-STAT contrast that is intrinsically registered to the FFT images since these are generated from the same k-spaces. The registration was performed with SimpleElastix (v2.0.0rc2) [39], adopting an adaptive stochastic gradient descent optimizer [40], the Mattes mutual information similarity metric [41], and a recursive pyramid strategy with isotropic smoothing and down-sampling of 8, 4, 2, and 1 for the four resolutions. ### Deep learning synthesis Two models were adopted: 1) end-to-end and 2) physics-informed (proposed). GANs were employed for both methods due to their success in various medical image translation tasks [29]. Specifically, conditional GANs (cGANs) [42] were used, where either a synthesized or conventional contrast was concatenated to the FFT images for conditioning [43]. The architectures of the generator and discriminators of the end-to-end and physics-informed approaches were kept similar (Fig. 2), consisting of a ResUNet generator [44] with three down-sampling operations (33M and 13M parameters, respectively) and four contrast-specific conditional PatchGAN discriminators [43] (2.8M parameters each). The end-to-end approach mapped the FFT images to the four conventional contrasts using a separate decoder (one output channel) for each contrast, similar to the work by K. Wang et al. [30].
In contrast, the physics-informed method mapped the FFT images to the q*-maps using a single decoder (three output channels), from which contrasts were synthesized in a voxel-wise manner using the signal model described in Eq. (1). The resulting signal values were min-max scaled to [0, 1] per slice and used as the synthetic signal intensities. Synthetic T\({}_{1}\) and T\({}_{2}\) q*-maps were clipped to 5 s and 1.5 s, respectively, after observing that these values were never exceeded in the training population. All networks used the Kaiming weight initialization [45] and rectified linear unit (ReLU) activations [46], where the leaky variant [47] with slope 0.2 was adopted for the discriminators.

Figure 2: **Schematic representation of the synthMRI methods.** The five complex FFT images (**x**) were fed into the two architectures to output four contrasts (\(\hat{\textbf{y}}\)) directly (end-to-end) or via q*-maps followed by the physics model M (physics-informed).

As described in Eq. (4), the content loss compares the synthesized contrasts \(\hat{y}\) to the conventional ones \(y\), for each contrast \(i\): \[\mathcal{L}_{\textit{cont, }i}=\lambda_{1}\|\hat{y}_{i}-y_{i}\|_{1}+\lambda_{2}\|\hat{y}_{i}-y_{i}\|_{2}-\lambda_{3}\cdot\mathrm{SSIM}(\hat{y}_{i},y_{i})+\lambda_{4}\left\|\phi(\hat{y}_{i})-\phi(y_{i})\right\|_{2} \tag{4}\] The voxel-wise terms comprise a weighted linear combination of L1 and L2 losses. A structural similarity metric (SSIM) [48] loss was added to incorporate local structures and optimize perceptual quality. To further optimize the perceptual similarity between the generated and conventional contrasts, we also minimized the L2-norm of the difference between features extracted from the 14th convolutional layer of a pre-trained VGG-19 network (\(\phi\)) [49]. Adding the least-squares adversarial losses gives the full objective functions for the generator \(\mathrm{G}\) and the discriminators \(\mathrm{D}_{i}\): \[\min_{G}\sum_{i=1}^{4}\left(\gamma_{1}\mathbb{E}\big[(1-D_{i}(x,\hat{y}_{i}))^{2}\big]+\mathcal{L}_{\textit{cont, }i}\right) \tag{5}\] and \[\min_{D_{i}}\gamma_{2}\mathbb{E}\big[D_{i}(x,\hat{y}_{i})^{2}\big]+\gamma_{3}\mathbb{E}\big[(1-D_{i}(x,y_{i}))^{2}\big] \tag{6}\] respectively [50]. Five-fold cross-validation was performed, randomly splitting the 48 subjects (30 slices each) [11] into 34, 5, and 9 subjects for training, validation, and testing, respectively. All folds had an equal distribution of pathologies in the training and test sets. Hyperparameter optimization was performed on a single fold containing only healthy volunteers in the validation set. The training was repeated for the four remaining folds (with 35 and 8 subjects in the train and test set, respectively, for two of the folds) using these same hyperparameters. Each subject appears in the test set once, except for the five held-out validation subjects, which were only used for hyperparameter optimization, giving a total of 43 subjects in the test set. The loss weights, types of augmentations, and batch size were manually optimized for the physics-informed approach and were adopted for the end-to-end approach. For this optimization, the SSIM and a sharpness estimate (the variance of the image convolved with a 3x3 Laplacian kernel) [51] were used as metrics, where SSIM was the primary metric considered when SSIM and sharpness disagreed.
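A minimal PyTorch sketch of these losses is given below. The SSIM implementation (`ssim_fn`), the VGG-19 truncation index approximating the 14th convolutional layer, and the use of mean-reduced norms are assumptions; the default weights follow the values reported in the training details below.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Perceptual feature extractor phi; the truncation index is our guess for the
# "14th convolutional layer" of VGG-19 and should be treated as an assumption.
_phi = vgg19(pretrained=True).features[:30].eval()
for p in _phi.parameters():
    p.requires_grad_(False)

def content_loss(y_hat, y, ssim_fn, weights=(1.0, 1.0, 4.0, 0.005)):
    """Eq. (4) for one contrast: weighted L1 + L2 - SSIM + perceptual terms.

    `ssim_fn` is a placeholder for any differentiable SSIM implementation.
    """
    l1, l2, l3, l4 = weights
    loss = l1 * F.l1_loss(y_hat, y) + l2 * F.mse_loss(y_hat, y)
    loss = loss - l3 * ssim_fn(y_hat, y)
    # single-channel contrasts are repeated to the three channels VGG expects
    loss = loss + l4 * F.mse_loss(_phi(y_hat.repeat(1, 3, 1, 1)),
                                  _phi(y.repeat(1, 3, 1, 1)))
    return loss

def lsgan_terms(d_fake, d_real, gammas=(0.05, 0.025, 0.025)):
    """Least-squares adversarial terms of Eqs. (5) and (6)."""
    g1, g2, g3 = gammas
    generator_adv = g1 * torch.mean((1.0 - d_fake) ** 2)
    discriminator = (g2 * torch.mean(d_fake.detach() ** 2)
                     + g3 * torch.mean((1.0 - d_real) ** 2))
    return generator_adv, discriminator
```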
The discriminators were updated twice per generator update, and the weights [\(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\)] and [\(\gamma_{1},\gamma_{2},\gamma_{3}\)] were set to [1, 1, 4, 0.005] and [0.05, 0.025, 0.025], respectively. During training, the images were augmented by applying flipping and translation (\(\pm 20\) voxels), both horizontally and vertically, and rotation (\(\pm 15^{\circ}\)). In preliminary experiments, we observed that unhealthy tissues were underrepresented compared to healthy tissue during training. To counteract this imbalance, we doubled the sampling probability of slices containing pathology. All methods were implemented in PyTorch v1.9.0 [52], and training with a batch size of 1 took about 38 and 26 hours for the end-to-end and physics-informed approach, respectively, on an Nvidia Tesla P100 GPU and Intel E5-2690v4 CPUs. A cosine learning rate decay over 300 epochs, with a restart after 100 epochs to prevent overfitting, was adopted for the generator and discriminators, where the final epoch was always used for evaluation. The initial learning rate values were determined using separate automatic Bayesian searches for the physics-informed and end-to-end approaches. These optimizations, performed in Weights & Biases [53], minimized the sum of the content losses as defined in Eq. (4) for learning rates between 5e-5 and 1e-2 (logarithmic uniform distribution) using ten searches. The optimal learning rates for the physics-informed and end-to-end approaches were 1e-3 and 2.5e-3, respectively.

### Experiments

#### 3.3.1 Synthetic q*-maps

The q*-map values of the test volunteers were compared to literature values for WM and GM [54, 55]. Due to the signal equation's linear dependency on PD (see Eq. (1)) and the applied min-max scaling for contrast synthesis (see Section 3.2), the PD q*-maps were not considered quantitative and were compared to the literature using the GM/WM ratio instead, to eliminate the effect of different scaling factors.

#### 3.3.2 Synthetic contrasts

To assess whether the physics-informed method is able to match the performance of the end-to-end approach for the conventional contrasts, quantitative and qualitative comparisons were performed. Firstly, the physics-informed method was quantitatively compared to the end-to-end method in terms of the test set's SSIM and peak signal-to-noise ratio (PSNR) values. The lower and upper five slices were left out for this and all upcoming experiments due to increased noise in the conventional contrasts. A Shapiro-Wilk test [56] showed that the image similarity metrics were not normally distributed. Therefore, a non-parametric paired Wilcoxon signed-rank test [57] was used to compare the physics-informed method to the end-to-end method (two-tailed with a significance level of \(p\leq 0.05\)), where the mean SSIM and PSNR of each subject were used to satisfy the independent sample assumption. All statistical analyses were performed using SciPy (v1.3.1) [58] and statsmodels (v0.13.2) [59]. Secondly, a visual comparison is shown for a multiple sclerosis patient to assess the methods' robustness in synthesizing lesions of various shapes and sizes.
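As an illustration of this evaluation, the snippet below computes per-subject mean SSIM/PSNR and the paired two-tailed Wilcoxon test; the data loading, slice selection, and intensity range are assumptions.

```python
import numpy as np
from scipy.stats import wilcoxon
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def per_subject_scores(synth_slices, ref_slices):
    """Mean SSIM and PSNR over the retained slices of one subject.

    Both inputs are lists of 2-D arrays scaled to [0, 1]; the five lowest and
    highest slices are assumed to have been excluded upstream.
    """
    ssims, psnrs = [], []
    for synth, ref in zip(synth_slices, ref_slices):
        ssims.append(structural_similarity(synth, ref, data_range=1.0))
        psnrs.append(peak_signal_noise_ratio(ref, synth, data_range=1.0))
    return np.mean(ssims), np.mean(psnrs)

def compare_methods(scores_physics, scores_end2end):
    """Paired, two-tailed Wilcoxon signed-rank test on per-subject means."""
    statistic, p_value = wilcoxon(scores_physics, scores_end2end,
                                  alternative="two-sided")
    return statistic, p_value
```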
A more detailed qualitative comparison was performed focusing on the T\({}_{2}\)-FLAIR contrasts, as these have been reported to be the most challenging contrasts [22]: the occurrences of motion artifacts, CSF flow artifacts, checkerboard artifacts, and unrealistic lesion suppression in the T\({}_{2}\)-FLAIR contrasts were reported, as first classified by a non-clinical intern and revised by an experienced MRI physicist, together with example images of the observed artifacts. Finally, the mean \(\pm\) standard deviation inference times to obtain synthMRI for a single subject, averaged over all test subjects, were reported for both methods. To assess the generalizability of the physics-informed approach to unseen contrasts, contrasts from three additional sequences were synthesized for five volunteers (Tab. 1): (i) T\({}_{12}\)w, (ii) TI400, and (iii) DIR. The DIR contrast was synthesized using the following signal model: \[\mathrm{S}=\mathrm{PD}\cdot e^{-\mathrm{TE/T_{2}}}\cdot\left(1-2e^{-\mathrm{TI_{1}/T_{1}}}+2e^{-\mathrm{TI_{2}/T_{1}}}-e^{-\mathrm{TD/T_{1}}}\right) \tag{7}\] with \(\mathrm{TI}_{1}\) the time between the first \(180^{\circ}\) pulse and the excitation pulse (suppressing CSF) and \(\mathrm{TI}_{2}\) the time between the second \(180^{\circ}\) pulse and the excitation pulse (suppressing WM). The synthetic contrasts were skull-stripped and visually compared to separately acquired ground truth acquisitions. For quantitative image comparison, the mean \(\pm\) standard deviation contrast-to-noise ratios (CNRs), defined as the mean signal intensity difference between two regions of interest divided by the image noise (estimated as the standard deviation of intensities in a manually selected homogeneous square patch of WM), were compared for the five test volunteers. The regions of interest were chosen to be GM-WM for T\({}_{12}\)w, CSF-WM for TI400, and GM-WM for DIR (a small illustrative sketch of Eq. (7) and this CNR definition is given below, after the quantitative comparison).

## 4 Results

### Synthetic q*-maps

The PD GM/WM ratios of the q*-maps are close to the ratio reported in the literature [55] and present lower values where there are inflow effects, such as in the superior sagittal sinus (Fig. 3). The T\({}_{1}\) values in WM are close to the reported values, whereas the values in GM differ substantially [54]. The T\({}_{2}\) values of WM and GM are underestimated compared to the literature values [54].

Figure 3: **Q*-maps for a volunteer.** A transverse slice example of the q*-maps for a volunteer. The GM/WM ratio is compared to Ref. [55] and the T\({}_{1}\) and T\({}_{2}\) values to the median value in Ref. [54]. The standard deviations of the PD ratio and T\({}_{1}\) and T\({}_{2}\) values describe inter-subject and intra-subject variability, respectively. The PD q*-maps and MR-STAT maps are normalized to the 75th percentile of the CSF volume for visualization purposes. The superior sagittal sinus is highlighted in the PD maps.

### Synthetic contrasts

A quantitative comparison demonstrated that the physics-informed method performed comparably to the end-to-end method for the PDw, T\({}_{1}\)w, T\({}_{2}\)w, and T\({}_{2}\)-FLAIR contrasts in terms of SSIM and PSNR (Fig. 4). The biggest difference in performance is in the T\({}_{1}\)w contrasts, having SSIM values of \(0.88\pm 0.03\) and \(0.85\pm 0.03\) and PSNR values of \(25.5\pm 2.0\) and \(24.0\pm 1.6\) for the end-to-end and physics-informed approaches, respectively.
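As referenced in the Methods above, the following is a minimal NumPy sketch of the DIR signal model of Eq. (7) and the CNR definition used for the unseen contrasts; the q*-map arrays, timing parameters, and ROI masks are assumptions.

```python
import numpy as np

def dir_signal(pd, t1, t2, te, ti1, ti2, td):
    """Voxel-wise DIR signal of Eq. (7); all times in the same unit as T1/T2."""
    return pd * np.exp(-te / t2) * (
        1.0 - 2.0 * np.exp(-ti1 / t1) + 2.0 * np.exp(-ti2 / t1) - np.exp(-td / t1)
    )

def cnr(image, roi_a, roi_b, wm_patch):
    """CNR: mean intensity difference between two regions of interest divided
    by the standard deviation of a homogeneous WM patch (the noise estimate)."""
    noise = np.std(image[wm_patch])
    return np.abs(image[roi_a].mean() - image[roi_b].mean()) / noise
```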
The inference times for all four contrasts averaged over all test subjects were \(2.38\pm 0.03\) s and \(1.34\pm 0.02\) s for the end-to-end and physics-informed approaches, respectively. The end-to-end and physics-informed approaches do not display large visual differences, as demonstrated, for example, for a multiple sclerosis patient in Fig. 5.

Figure 4: **Quantitative comparison of the end-to-end and physics-informed approach.** The SSIM and PSNR of the contrasts synthesized using the end-to-end (blue) and physics-informed (red) approach are visualized using boxplots and statistically compared. *(\(0.01<p\leq 0.05\)) **(\(0.001<p\leq 0.01\)) ***(\(p\leq 0.001\)) n.s. = not significant (\(p>0.05\)).

Figure 5: **SynthMRI on multiple sclerosis.** Contrasts are shown for the conventional acquisition (top row), end-to-end (middle row), and physics-informed (bottom row) approaches, where the right columns zoom in on patches of the T\({}_{2}\)-FLAIR contrast, with red and blue arrows highlighting missed and hypointense/blurred lesions, respectively.

The conventional and synthetic PDw and T\({}_{1}\)w images lack contrast between the lesions and the surrounding healthy tissue. The CSF of the physics-informed T\({}_{1}\)w contrast was consistently hyperintense compared to the conventional contrast. Both synthetic T\({}_{2}\)w contrasts accurately capture almost all lesions present in the conventional contrast, but for the T\({}_{2}\)-FLAIR contrasts, some smaller lesions were missed (Fig. 5, red arrows) and others appeared hypointense or blurred (Fig. 5, blue arrows). Here, a more detailed analysis of the T\({}_{2}\)-FLAIR contrasts is provided, where the artifact counts were determined by manual inspection of the images. Motion artifacts were detected in the conventional contrasts of 17 out of 43 test subjects (40%), but were not observed in the synthMRI of any subject (Fig. 6a). CSF flow artifacts were present for all 43 subjects (100%) in the synthMRI (Fig. 6b). Grid-like (checkerboard) artifacts were introduced in the brainstem for 23 (53%) and 4 (9%) subjects for the physics-informed and end-to-end approach, respectively (Fig. 6c). Similar checkerboard patterns were observed in the parenchyma of 4 (9%) subjects for the physics-informed method. Finally, synthMRI (especially of the end-to-end approach) can appear blurrier than the conventional contrasts, for example, in the basal ganglia structures (Fig. 6d). This finding has been quantitatively confirmed in a study investigating image sharpness reported in the supplementary materials.

Figure 6: **Qualitative comparison of the conventional contrasts and synthMRI methods focusing on T\({}_{2}\)-FLAIR only.** In the left three columns, zoomed-in structures of volunteers with a) motion artifacts in the conventional contrast, b) flow artifacts in all contrasts, c) checkerboard artifacts in the brainstem of the synthMRI contrasts, and d) blurry basal ganglia structures in the synthMRI contrasts. In the right three columns, zoomed-in lesions are shown of patients with e) epilepsy, f) multiple sclerosis, g) stroke, and h) tumor.

Regarding pathologies, both synthMRI methods sometimes miss lesions or produce hypointense lesions with a smaller volume, for example, in epilepsy or multiple sclerosis patients (Fig. 6e-f). Larger, more complicated pathologies such as stroke can also result in inaccuracies: although the gross shape is captured, hypointensities and blurriness were observed (Fig. 6g).
Finally, both synthMRI methods were found to result in an unrealistic suppression of tumor contrast for 4 out of 10 patients (40%) (Fig. 6h). The standard end-to-end approach can only synthesize contrasts seen during training. In contrast, the experimental results suggest that the physics-informed approach can synthesize any desired contrast from the q*-maps (Fig. 7). Similar to the T\({}_{1}\)w contrast, the physics-informed synthetic T\({}_{12}\)w shows hyperintense CSF compared to the conventional T\({}_{12}\)w contrast. For the physics-informed synthetic TI400 and DIR contrasts, the signal suppression is almost identical to the ground truth contrasts, although slight hyperintensities can be observed in the WM of the synthetic DIR contrasts. The CNR values of the synthetic T\({}_{12}\)w and TI400 contrasts were close to the values of their respective ground truths: 2.65 \(\pm\) 0.12 versus 2.24 \(\pm\) 0.21 and 56.9 \(\pm\) 11.4 versus 54.8 \(\pm\) 7.96, respectively. The CNR values of the synthetic DIR contrast were considerably lower than for its ground truth: 12.8 \(\pm\) 3.31 versus 17.6 \(\pm\) 1.79.

Figure 7: **SynthMRI on a volunteer with unseen contrasts.** Standard contrasts seen during training (left) and unseen, skull-stripped contrasts (right) are shown for the conventional contrasts and the end-to-end and physics-informed approaches. Window leveling is kept identical among the same contrasts.

## 5 Discussion

This work investigated a physics-informed GAN-based framework that synthesizes MRI from a single five-minute acquisition, obtaining the contrasts of four standard neurological sequences in about 2 seconds. For the PDw, T\({}_{1}\)w, T\({}_{2}\)w, and T\({}_{2}\)-FLAIR contrasts, we showed that the contrasts of the physics-informed method closely resemble those of the end-to-end approach, visually and in terms of SSIM and PSNR. For the T\({}_{2}\)-FLAIR contrasts, both methods could result in hypointensities for smaller lesions, and more checkerboard artifacts appeared for the physics-informed approach. Future work may focus on minimizing these artifacts and finding their causes. However, we demonstrated that the physics-informed method resulted in sharper T\({}_{2}\)-FLAIR contrasts and facilitated synthesis of unseen contrasts, with only minor visual deviations and comparable CNR values compared to the conventional, separately acquired ground truth contrasts. The inferior CNR of the synthetic DIR contrasts compared to their ground truth can be explained by suboptimal signal suppression, which leads to higher WM tissue inhomogeneity and thus higher noise, lowering the CNR. Further work is required to evaluate the clinical relevance of these trade-offs offered by the proposed physics-informed approach. This work is a continuation of our previous work, where the physics-informed framework was originally proposed [60, 61]. In this work, the framework was further developed and the evaluation was greatly expanded via cross-validation, correlation of q*-maps with baseline q-maps, quantitative artifact analysis, and a more detailed analysis of the generalizability. DL-based end-to-end methods have been proposed to improve synthMRI quality by using MRF or MDME acquisition data as input to a GAN and bypassing the oversimplified signal model [30, 31]. K. Wang et al. [30] and G. Wang et al. [31] adopted a single generator to synthesize all contrasts to exploit complementary information from different contrasts, forming our end-to-end method.
In terms of network design, the single-branch and multi-branch U-net [24] architectures from K. Wang et al. [30] were adapted for the physics-informed and end-to-end approaches, respectively, where we implemented residual units to simplify training [44]. The real and imaginary components of the FFT images were used as separate real-valued input channels. Alternatively, complex-valued networks could be explored to accurately represent the magnitude and phase to improve the network's performance [62]. The translations performed by the generators were kept within the image domain, similar to K. Wang et al. [30] and G. Wang et al. [31], preventing the need for fully-connected layers in the generator to estimate the Fourier transform, which is memory intensive and require more data to avoid overfitting [63]. Similar to our proposed framework, Pal et al. [64] proposed to use a deep image prior to denoise three SE contrasts and fit a signal model to extract q-maps from the denoised contrasts, facilitating personalized synthMRI by omitting the need for an external training dataset. However, the high inference time is undesirable in a clinical setting and the methods were trained and evaluated using exclusively synthetic data. Denck et al. [65] proposed the first GAN to allow retrospective contrast adjustments to synthesize missing or corrupted knee MRI contrasts from existing contrasts, where MR physics was incorporated implicitly via style transfer. Our proposed method obtains this feature by explicitly modeling MR physics, similar to the physical priors used by Moya-Saez et al. [66, 67]. Ref. [66] was trained using exclusively synthetic data and Ref. [67] was finetuned using a relatively small and homogeneous in-vivo dataset, whereas we used a large cohort of in-vivo data containing heterogeneous pathologies. Furthermore, we pushed the generalizability of our method to contrasts which have not been observed by the network. This is the first GAN-based synthMRI framework allowing retrospective contrast adjustments from a single acquisition. The previously proposed MDME-based and MRF-based synthMRI frameworks suffer from sub-optimal T\({}_{2}\)-FLAIR contrasts compared to conventional acquisitions, generally attributed to an oversimplified signal model [16, 20, 21]. Signal model extensions incorporating partial volume and flow effects have shown promising results to improve the T\({}_{2}\)-FLAIR contrasts for MRF-based synthMRI [68, 69]. Partial volume and magnetization transfer modeling could be explored for our physics-informed method. We showed that our data-driven approach already resolved flow effects without requiring explicit modeling, for example, in the superior sagittal sinus. Grid-like structures in the parenchyma were reported by G. Wang et al. [31] when adding adversarial and perceptual losses. Future work should investigate the influence of these loss terms on our observed checkerboard artifacts. Although GANs may introduce hallucinations [70], these were not observed in our methods. Considering that the adopted similarity metrics have been suggested not to be ideal surrogate measures of MRI quality as determined by radiologist evaluation [71], a large-scale clinical study like the one performed by Kleinoog et al. [11] should be initiated to validate the proposed method further. 
Although the dataset was relatively small, a heterogeneous pathological cohort was considered, exposing the networks to a diverse distribution of lesions. This makes the dataset representative of the real-world data that the network would encounter in the clinic and improves generalization performance [33]. Although the MR-STAT acquisition is used as input for our synthMRI framework, substituting a different acquisition that encodes sufficient information to characterize the tissue parameters (such as MRF or MDME data) would be conceptually straightforward, making the proposed framework independent of MR-STAT and generally applicable to any such acquisition. Furthermore, the data were acquired on a single scanner, so inter-scanner variability should be investigated before clinical use. Currently, the q*-maps are used to facilitate synthesis of unseen contrasts. In future work, we plan to investigate the q*-maps' diagnostic application as a fast, surrogate qMRI method to standardize MRI-based measurements, reduce bias, and increase reproducibility [3]. Performing a single five-minute acquisition and synthesizing the contrasts reduces scan time compared to the conventional, separate contrast acquisitions. This way, synthMRI reduces examination costs, increases patient throughput, and makes MRI a more accessible imaging modality. The proposed incorporation of the signal model provides added interpretability, which is important for clinical adoption of deep learning-based methods [33]. Furthermore, the provided retrospective contrast adjustment gives imaging specialists the flexibility to select the desired post-scan contrasts, which may decrease the number of patient recalls [72], possibly resulting in improved diagnostics. For example, it has been shown that synthesizing DIR contrasts may improve the detection of multiple sclerosis plaques [16]. Additionally, a shorter scan time is advantageous for pediatric brain imaging [73], potentially decreasing the use of general anesthesia and preventing potential long-term complications in the developing brain of children [72, 74].

## 6 Conclusion

We demonstrated that the proposed physics-informed method synthesizes high-quality standard contrasts from a single full-brain five-minute acquisition. We also demonstrated the feasibility of generating synthMRI for sequences unseen during training. The proposed method matches the quality of the PDw, T\({}_{1}\)w, T\({}_{2}\)w, and T\({}_{2}\)-FLAIR contrasts of an end-to-end framework while providing the additional flexibility to synthesize new contrasts.

## Acknowledgments

We want to thank Sarah Jacobs for the clinical feedback. This work would not have been possible without the involvement of Anja van der Kolk, Beyza Koktas, and Sarah Jacobs, who contributed to patient selection and inclusion. We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the Quadro RTX 5000 GPU used for the prototyping part of this research.

## Author contributions

L.J. sketched the idea, implemented the methods, and performed the analyses. M.M. participated in developing the idea. S.M., H.L., and M.M. revised the methods and analyses. C.B. and A.S. supervised and facilitated the work. All authors discussed the results and contributed to the writing of the manuscript.

## Financial disclosure

None reported.

## Conflict of interest

The authors declare no potential conflicts of interest.

## References

* [1] Langen, K.J., Galldiks, N., Hattingen, E., and Shah, N.J.
(2017), Advances in neuro-oncology imaging, doi:10.1038/nmeurol.2017.44. URL [https://www.nature.com/articles/nrineurol.2017.44](https://www.nature.com/articles/nrineurol.2017.44). * [2] Cashmore, M.T., McCann, A.J., Wastling, S.J., McGrath, C., Thornton, J., and Hall, M.G. (2021), Clinical quantitative MRI and the need for metrology, doi:10.1259/bjr.20201215. URL [https://www.birpublications.org/doi/10.1259/bjr.20201215](https://www.birpublications.org/doi/10.1259/bjr.20201215). * [3] Margaret Cheng, H.L., Stikov, N., Ghugre, N.R., and Wright, G.A. (2012), Practical medical applications of quantitative MR relaxometry, doi:10.1002/jmri.23718. URL www.wileyblackwellcme.com. * [4] Ma, D., Gulani, V., Seiberlich, N., Liu, K., Sunshine, J.L., Duerk, J.L., and Griswold, M.A. (2013) Magnetic resonance fingerprinting. _Nature_, **495** (7440), 187-192, doi:10.1038/nature11971. URL [https://www.nature.com/articles/nature11971](https://www.nature.com/articles/nature11971). * [5] Sbrizzi, A., Heide, O.v.d., Cloos, M., Toorn, A.v.d., Hoogduin, H., Luijten, P.R., and van den Berg, C.A. (2018) Fast quantitative MRI as a nonlinear tomography problem. _Magnetic Resonance Imaging_, **46**, 56-63, doi:10.1016/j.mri.2017.10.015. * [6] Wartjes, J., Leinhard, O.D., West, J., and Lundberg, P. (2008) Rapid magnetic resonance quantification on the brain: Optimization for clinical usage. _Magnetic Resonance in Medicine_, **60** (2), 320-329, doi:10.1002/mrm.21635. URL [http://doi.wiley.com/10.1002/mrm.21635](http://doi.wiley.com/10.1002/mrm.21635). * [7] Ji, S., Yang, D., Lee, J., Choi, S.H., Kim, H., and Kang, K.M. (2020), Synthetic MRI: Technologies and Applications in Neuroradiology, doi:10.1002/jmri.27440. URL [https://onlinelibrary.wiley.com/doi/10.1002/jmri.27440](https://onlinelibrary.wiley.com/doi/10.1002/jmri.27440). * [8] Jiang, Y., Ma, D., Seiberlich, N., Gulani, V., and Griswold, M.A. (2015) MR fingerprinting using fast imaging with steady state precession (FISP) with spiral readout. _Magnetic Resonance in Medicine_, **74** (6), 1621-1631, doi:10.1002/mrm.25559. * [9] Mehta, B.B., Ma, D., Pierre, E.Y., Jiang, Y., Coppo, S., and Griswold, M.A. (2018) Image reconstruction algorithm for motion insensitive MR Fingerprinting (MRF): MORF. _Magnetic Resonance in Medicine_, **80** (6), 2485-2500, doi:10.1002/mrm.27227. URL [http://doi.wiley.com/10.1002/mrm.27227](http://doi.wiley.com/10.1002/mrm.27227). * [10] Mandija, S., D'Agata, F., Liu, H., van der Heide, O., Koktas, B., van den Berg, C.A., Hendrikee, J., van der Kolk, A., and Sbrizzi, A. (2020) A five-minute multi-parametric high-resolution whole-brain MR-STAT exam: first results from a clinical trial. _proceedings of ISMRM_. * [11] Kleinloog, J.P.D., Mandija, S., D'Agata, F., Liu, H., Heide, O.v.d., Koktas, B., Jacobs, S.M., Berg, C.A.T.v.d., Hendrikee, J., Kolk, A.G.v.d., and Sbrizzi, A. (2022) Synthetic MRI with Magnetic Resonance Spin TomogrAphy in Time-Domain (MR-STAT): Results from a Prospective Cross-Sectional Clinical Trial. _Journal of Magnetic Resonance Imaging_, doi:10.1002/JMRI.28425. 
URL [https://onlinelibrary.wiley.com/doi/full/10.1002/jmri.28425](https://onlinelibrary.wiley.com/doi/full/10.1002/jmri.28425).
Open_, **5** (2), 205846011562675, doi:10.1177/2058460115626757. URL [https://journals.sagepub.com/doi/full/10.1177/2058460115626757](https://journals.sagepub.com/doi/full/10.1177/2058460115626757). * [15] Andica, C., Hagiwara, A., Nakazawa, M., Tsuruta, K., Takano, N., Hori, M., Suzuki, H., Sugano, H., Arai, H., and Aoki, S. (2016) The advantage of synthetic MRI for the visualization of early white matter change in an infant with Sturge-Weber syndrome. _Magnetic Resonance in Medical Sciences_, pp. ci-2015. * [16] Hagiwara, A., Hori, M., Yokoyama, K., Takemura, M.Y., Andica, C., Tabata, T., Kamagata, K., Suzuki, M., Kumamaru, K.K., Nakazawa, M., Takano, N., Kawasaki, H., Hamasaki, N., Kunimatsu, A., and Aoki, S. (2017) Synthetic MRI in the detection of multiple sclerosis plaques. _American Journal of Neuroradiology_, **38** (2), 257-263, doi:10.3174/ajnr.A5012. URL [http://dx.doi.org/10.3174/ajnr.A5012](http://dx.doi.org/10.3174/ajnr.A5012). * [17] Andre, J., Barrit, S., and Jissendi, P. (2022) Synthetic MRI for stroke: a qualitative and quantitative pilot study. _Scientific reports_, **12** (1), 11552, doi:10.1038/s41598-022-15204-8. URL [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9262877/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9262877/). * [18] Gulani, V., Schmitt, P., Griswold, M.A., Webb, A.G., and Jakob, P.M. (2004) Towards a single-sequence neurologic magnetic resonance imaging examination: multiple-contrast images from an IR TrueFISP experiment. _Investigative radiology_, **39** (12), 767-774. * [19] Redpath, T.W., Smith, F.W., and Hutchison, J.M. (2014) Magnetic resonance image synthesis from an interleaved saturation recovery/inversion recovery sequence.
_[http://dx.doi.org/10.1259/0007-1285-61-727-619_](http://dx.doi.org/10.1259/0007-1285-61-727-619_), **61** (727), 619-624, doi:10.1259/0007-1285-61-727-619. URL [https://www.birpublications.org/doi/abs/10.1259/0007-1285-61-727-619](https://www.birpublications.org/doi/abs/10.1259/0007-1285-61-727-619). * [20] Granberg, T., Uppman, M., Hashim, F., Cananau, C., Nordin, L.E., Shams, S., Berglund, J., Forslin, Y., Aspelin, P., Fredrikson, S., and Kristoffersen-Wiberg, M. (2016) Clinical feasibility of synthetic MRI in multiple sclerosis: A diagnostic and volumetric validation study. _American Journal of Neuroradiology_, **37** (6), 1023-1029, doi:10.3174/ajnr.A4665. URL [http://dx.doi.org/10.3174/ajnr.A4665](http://dx.doi.org/10.3174/ajnr.A4665). * [21] Hagiwara, A., Warntjes, M., Hori, M., Andica, C., Nakazawa, M., Kumamaru, K.K., Abe, O., and Aoki, S. (2017), SyMRI of the Brain: Rapid Quantification of Relaxation Rates and Proton Density, with Synthetic MRI, Automatic Brain Segmentation, and Myelin Measurement, doi:10.1097/RLI.000000000000365. URL /pmc/articles/PMC5596834/pmc/articles/PMC5596834/?report=abstract[https://www.ncbi.nlm.nih](https://www.ncbi.nlm.nih) * [22] Tanenbaum, L.N., Tsiouris, A.J., Johnson, A.N., Naidich, T.P., DeLano, M.C., Melhem, E.R., Quarterman, P., Parameswaran, S.X., Shankaranarayanan, A., Goyen, M., and Field, A.S. (2017) Synthetic MRI for clinical neuroimaging: Results of the magnetic resonance image compilation (MAGiC) prospective, multicenter, multireader trial. _American Journal of Neuroradiology_, **38** (6), 1103-1110, doi:10.3174/ajnr.A5227. URL [http://dx.doi.org/10.3174/ajnr.A5227](http://dx.doi.org/10.3174/ajnr.A5227). * [23] Lecun, Y., Bengio, Y., and Hinton, G. (2015), Deep learning, doi:10.1038/nature14539. URL [https://www.nature.com/articles/nature14539](https://www.nature.com/articles/nature14539). * [24] Ronneberger, O., Fischer, P., and Brox, T. (2015) U-net: Convolutional networks for biomedical image segmentation. _Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)_, **9351**, 234-241, doi:10.1007/978-3-319-24574-4{_}28. URL [http://lmb.informatik.uni-freiburg.de/http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net](http://lmb.informatik.uni-freiburg.de/http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net). * International Symposium on Biomedical Imaging_, **2016-June**, 514-517, doi:10.1109/ISBI.2016.7493320. * [26] Chaudhari, A.S., Fang, Z., Kogan, F., Wood, J., Stevens, K.J., Gibbons, E.K., Lee, J.H., Gold, G.E., and Hargreaves, B.A. (2018) Super-resolution musculoskeletal MRI using deep learning. _Magnetic Resonance in Medicine_, **80** (5), 2139-2154, doi:10.1002/mrm.27178. 
URL [https://onlinelibrary.wiley.com/doi/full/10.1002/mrm.27178](https://onlinelibrary.wiley.com/doi/full/10.1002/mrm.27178).
inversion-recovery and double inversion-recovery sequences. _Magnetic Resonance in Medicine_, **54** (1), 241-245, doi:10.1002/mrm.20541. * [33] Lundervold, A.S. and Lundervold, A. (2019), An overview of deep learning in medical imaging focusing on MRI, doi:10.1016/j.zemedi.2018.11.002. * [34] van der Heide, O., Sbrizzi, A., Luijten, P.R., and van den Berg, C.A. (2020) High-resolution in vivo MR-STAT using a matrix-free and parallelized reconstruction algorithm. _NMR in Biomedicine_, **33** (4), doi:10.1002/nbm.4251. * [35] Turetschek, K., Wunderbaldinger, P., Bankier, A.A., Zontsich, T., Graf, O., Mallek, R., and Hittmair, K. (1998) Double inversion recovery imaging of the brain: initial experience and comparison with fluid attenuated inversion recovery imaging. _Magnetic resonance imaging_, **16** (2), 127-135. * [36] Golub, G.H. and Van Loan, C.F. (2013) _Matrix computations_, JHU press. * [37] Penny, W.D., Friston, K.J., Ashburner, J.T., Kiebel, S.J., and Nichols, T.E. (2011) _Statistical parametric mapping: the analysis of functional brain images_, Elsevier. * [38] Van Rossum, G. and Drake, F.L. (2009) _Python 3 Reference Manual_, CreateSpace, Scotts Valley, CA. * [39] Marstal, K., Berendsen, F., Staring, M., and Klein, S. (2016) SimpleElastix: A User-Friendly, Multi-Lingual Library for Medical Image Registration. _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_. * [40] Klein, S., Pluim, J.P., Staring, M., and Viergever, M.A. (2009) Adaptive stochastic gradient descent optimisation for image registration. _International Journal of Computer Vision_, **81** (3), 227-239, doi:10.1007/s11263-008-0168-y. URL [https://link.springer.com/article/10.1007/s11263-008-0168-y](https://link.springer.com/article/10.1007/s11263-008-0168-y). * [41] Mattes, D., Haynor, D.R., Vesselle, H., Lewellyn, T.K., and Eubank, W. (2001) Nonrigid multimodality image registration. _Medical Imaging 2001: Image Processing_, **4322**, 1609-1620, doi:10.1117/12.431046. URL [https://www.spiedigitallibrary.org/conference-proceedings-of-spie/4322/0000/Nonrigid-multimodal-image-registration](https://www.spiedigitallibrary.org/conference-proceedings-of-spie/4322/0000/Nonrigid-multimodal-image-registration). * [42] Mirza, M. and Osindero, S. (2014) Conditional generative adversarial nets. _arXiv preprint arXiv:1411.1784_. * 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017_, **2017-Janua**, 5967-5976, doi:10.1109/CVPR.2017.632. URL [https://github.com/phillipi/pix2pix](https://github.com/phillipi/pix2pix). * [44] Zhang, Z., Liu, Q., and Wang, Y. (2018) Road Extraction by Deep Residual U-Net. _IEEE Geoscience and Remote Sensing Letters_, **15** (5), 749-753, doi:10.1109/LGRS.2018.2802944. URL [https://www.cs.toronto.edu/](https://www.cs.toronto.edu/). * [45] He, K., Zhang, X., Ren, S., and Sun, J. (2015) Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. _Proceedings of the IEEE International Conference on Computer Vision_, **2015 Inter**, 1026-1034, doi: 10.1109/ICCV.2015.123. * Proceedings, 27th International Conference on Machine Learning_, pp. 807-814. * [47] Maas, A.L., Hannun, A.Y., Ng, A.Y., and others (2013) Rectifier nonlinearities improve neural network acoustic models. _Proc. icml_, **30** (1), 3. * [48] Wang, Z., Bovik, A.C., Sheikh, H.R., and Simoncelli, E.P. (2004) Image quality assessment: From error visibility to structural similarity. 
_IEEE Transactions on Image Processing_, **13** (4), 600-612, doi:10.1109/TIP.2003.819861. * Conference Track Proceedings_. * [50] Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., and Smolley, S.P. (2017) Least Squares Generative Adversarial Networks. _Proceedings of the IEEE International Conference on Computer Vision_, **2017-Octob**, 2813-2821, doi:10.1109/ICCV.2017.304. * International Conference on Pattern Recognition_, **15** (3), 314-317, doi: 10.1109/ICPR.2000.903548. * [52] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. (2019) PyTorch: An Imperative Style, High-Performance Deep Learning Library. _Advances in Neural Information Processing Systems_, **32**. * [53] Biewald, L. (2020), Experiment Tracking with Weights and Biases. URL [https://www.wandb.com/](https://www.wandb.com/). * [54] Bojorquez, J.Z., Bricq, S., Acquitter, C., Brunotte, F., Walker, P.M., and Lalande, A. (2017), What are normal relaxation times of tissues at 3 T?, doi:10.1016/j.mri.2016.08.021. URL [http://dx.doi.org/10.1016/j.mri.2016.08.021](http://dx.doi.org/10.1016/j.mri.2016.08.021). * [55] Hagiwara, A., Hori, M., Cohen-Adad, J., Nakazawa, M., Suzuki, Y., Kasahara, A., Horita, M., Haruyama, T., Andica, C., Maekawa, T., Kamagata, K., Kumamaru, K.K., Abe, O., and Aoki, S. (2019) Linearity, Bias, Intracanner Repeatability, and Interscanner Reproducibility of Quantitative Multi-dynamic Multiecho Sequence for Rapid Simultaneous Relaxometry at 3 T: A Validation Study with a Standardized Phantom and Healthy Controls. _Investigative Radiology_, **54** (1), 39-47, doi:10.1097/RLI.0000000000000510. URL [https://journals.lww.com/investigativeradiology/Fulltext/2019/01000/Linearity_](https://journals.lww.com/investigativeradiology/Fulltext/2019/01000/Linearity_) Bias_Intrascann * [56] Shapiro, S.S. and Wilk, M.B. (1965) An Analysis of Variance Test for Normality (Complete Samples). _Biometrika_, **52** (3/4), 591, doi:10.2307/2333709. URL [https://www.jstor.org/stable/2333709?origin=crossref](https://www.jstor.org/stable/2333709?origin=crossref). * [57] Wilcoxon, F. (1945) Individual Comparisons by Ranking Methods. _Biometrics Bulletin_, **1** (6), 80, doi:10.2307/3001968. * [58] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., and others (2020) SciPy 1.0: fundamental algorithms for scientific computing in Python. _Nature methods_, **17** (3), 261-272. * [59] Seabold, S. and Perktold, J. (2010) Statsmodels: Econometric and statistical modeling with python. _Proceedings of the 9th Python in Science Conference_, **57**, 61. * [60] Jacobs, L., Mandija, S., Liu, H., Sbrizzi, A., van den berg, C.A., and Maspero, M. (2022) Generalizable synthetic multi-contrast MRI generation using physics-informed convolutional networks. _proceedings of ISMRM_. * [61] Jacobs, L. (2022) Generalizable synthetic multi-contrast MRI using physics-informed convolutional networks. _[Unpublished master's thesis, Technical University Eindhoven]_. * [62] Cole, E., Cheng, J., Pauly, J., and Vasanawala, S. (2021) Analysis of deep complex-valued convolutional neural networks for MRI reconstruction and phase-focused applications. _Magnetic Resonance in Medicine_, **86** (2), 1093-1109, doi:10.1002/mrm.28733. 
URL [https://onlinelibrary.wiley.com/doi/full/10.1002/mrm.28733](https://onlinelibrary.wiley.com/doi/full/10.1002/mrm.28733).
training with synthetic data. _Computer Methods and Programs in Biomedicine_, **210**, 106371, doi:10.1016/j.cmpb.2021.106371. * [67] Moya-Saez, E., Luis-Garcia, R.d., and Alberola-Lopez, C. (2021) A self-supervised deep learning approach to synthesize weighted images and T1, T2, and PD parametric maps based on MR physics priors. _proceedings of ISMRM_. * [68] Cencini, M., Buonincontri, G., Biagi, L., Gomez, P.A., Schulte, R.F., and Tosetti, M. (2019) Chasing True FLAIR: a three-component Magnetic Resonance Fingerprinting approach to synthetic MRI. _Proceedings of the 27th Annual Meeting of ISMRM_, p. 816. * [69] Deshmane, A., McGivney, D., Badve, C., Yu, A., Jiang, Y., Ma, D., and Griswold, M.A. (2016) Accurate synthetic FLAIR images using partial volume corrected MR fingerprinting. _Proceedings of the Annual Meeting and Exhibition of International Society for Magnetic Resonance in Medicine_, pp. 7-13. * [70] Cohen, J.P., Luck, M., and Honari, S. (2018) Distribution matching losses can hallucinate features in medical image translation. _Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)_, **11070 LNCS**, 529-536, doi:10.1007/978-3-030-00928-1{_}60. URL [https://doi.org/10.1007/978-3-030-00928-1_60](https://doi.org/10.1007/978-3-030-00928-1_60). * [71] Mason, A., Rioux, J., Clarke, S.E., Costa, A., Schmidt, M., Keough, V., Huynh, T., and Beyea, S. (2020) Comparison of Objective Image Quality Metrics to Expert Radiologists' Scoring of Diagnostic Quality of MR Images. _IEEE Transactions on Medical Imaging_, **39** (4), 1064-1072, doi:10.1109/TMI.2019.2930338. URL [https://pubmed.ncbi.nlm.nih.gov/31535985/](https://pubmed.ncbi.nlm.nih.gov/31535985/). * [72] Goncalves, F.G., Serai, S.D., and Zuccoli, G. (2018), Synthetic brain MRI: Review of current concepts and future directions, doi:10.1097/RMR.000000000000189. URL [https://journals.lww.com/topicsinnri/Fulltext/2018/12000/Synthetic_Brain_MRI_Review_of_C0](https://journals.lww.com/topicsinnri/Fulltext/2018/12000/Synthetic_Brain_MRI_Review_of_C0). * [73] West, H., Leach, J.L., Jones, B.V., Care, M., Radhakrishnan, R., Merrow, A.C., Alvarado, E., and Serai, S.D. (2017) Clinical validation of synthetic brain MRI in children: initial experience. _Neuroradiology_, **59** (1), 43-50, doi:10.1007/s00234-016-1765-z. URL [https://link.springer.com/article/10.1007/s00234-016-1765-z](https://link.springer.com/article/10.1007/s00234-016-1765-z). * [74] Ing, C., Wall, M.M., DiMaggio, C.J., Whitehouse, A.J., Hegarty, M.K., Sun, M., Von Ungern-Sternberg, B.S., Li, G., and Sun, L.S. (2017) Latent class analysis of neurodevelopmental deficit after exposure to anesthesia in early childhood. _Journal of Neurosurgical Anesthesiology_, **29** (3), 264-273, doi:10.1097/ANA.00000000000303.
URL [https://pubmed.ncbi.nlm.nih.gov/27077892/](https://pubmed.ncbi.nlm.nih.gov/27077892/).
2302.07864
Denoising Diffusion Probabilistic Models for Robust Image Super-Resolution in the Wild
Diffusion models have shown promising results on single-image super-resolution and other image-to-image translation tasks. Despite this success, they have not outperformed state-of-the-art GAN models on the more challenging blind super-resolution task, where the input images are out of distribution, with unknown degradations. This paper introduces SR3+, a diffusion-based model for blind super-resolution, establishing a new state-of-the-art. To this end, we advocate self-supervised training with a combination of composite, parameterized degradations and noise-conditioning augmentation during training and testing. With these innovations, a large-scale convolutional architecture, and large-scale datasets, SR3+ greatly outperforms SR3. It outperforms Real-ESRGAN when trained on the same data, with a DRealSR FID score of 36.82 vs. 37.22, which further improves to an FID of 32.37 with larger models, and further still with larger training sets.
Hshmat Sahak, Daniel Watson, Chitwan Saharia, David Fleet
2023-02-15T18:56:06Z
http://arxiv.org/abs/2302.07864v1
# Denoising Diffusion Probabilistic Models for Robust Image Super-Resolution in the Wild ###### Abstract Diffusion models have shown promising results on single-image super-resolution and other image-to-image translation tasks. Despite this success, they have not outperformed state-of-the-art GAN models on the more challenging _blind super-resolution_ task, where the input images are out of distribution, with unknown degradations. This paper introduces SR3+, a diffusion-based model for blind super-resolution, establishing a new state-of-the-art. To this end, we advocate self-supervised training with a combination of composite, parameterized degradations for self-supervised training, and noise-conditioning augmentation during training and testing. With these innovations, a large-scale convolutional architecture, and large-scale datasets, SR3+ greatly outperforms SR3. It outperforms Real-ESRGAN when trained on the same data, with a DRealSR FID score of 36.82 vs. 37.22, which further improves to FID of 32.37 with larger models, and further still with larger training sets. Machine Learning, ICML ## 1 Introduction Diffusion models (Sohl-Dickstein et al., 2015; Song and Ermon, 2019; Ho et al., 2020; Song et al., 2020) have quickly emerged as a powerful class of generative models, advancing the state-of-the-art for both text-to-image synthesis and image-to-image translation tasks (Dhariwal and Nichol, 2021; Rombach et al., 2022; Saharia et al., 2022; Li et al., 2022). For single image super-resolution Saharia et al. (2022), showed strong performance with self-supervised diffusion models, leveraging their ability to capture complex multi-modal distributions, typical of super-resolution tasks with large magnification factors. Although impressive, SR3 falls short on out-of-distribution (OOD) data, i.e., images in the wild with unknown degradations. Figure 1: Blind super-resolution test results (\(64\times 64\to 256\times 256\)) for SR3+, SR3 and Real-ESRGAN. Hence GANs remain the method of choice for _blind super-resolution_(Wang et al., 2021). This paper introduces SR3+, a new diffusion-based super-resolution model that is both flexible and robust, achieving state-of-the-art results on OOD data (Fig. 1). To this end, SR3+ combines a simple convolutional architecture and a novel training process with two key innovations. Inspired by Wang et al. (2021) we use parameterized degradations in the data augmentation training pipeline, with significantly more complex corruptions in the generation of low-resolution (LR) training inputs compared to those of (Saharia et al., 2022). We combine these degradations with _noise conditioning augmentation_, first used to improve robustness in cascaded diffusion models Ho et al. (2022). We find that noise conditioning augmentation is also effective at test time for zero-shot application. SR3+ outperforms both SR3 and Real-ESRGAN on FID-10K when trained on the same data, with a similar sized model, and applied in zero-shot testing on both the RealSR (Cai et al., 2019) and DRealSR (Wei et al., 2020) datasets. We also show further improvement simply by increasing model capacity and training set size. Our main contributions are as follows: 1. We introduce SR3+, a diffusion model for blind image super-resolution, outperforming SR3 and the previous SOTA on zero-shot RealSR and DRealSR benchmarks, across different model and training set sizes. 2. 
Through a careful ablation study, we demonstrate the complementary benefits of parametric degradations and noise conditioning augmentation techniques (with the latter also used at test time). 3. We demonstrate significant improvements in SR3+ performance with increased model size, and with larger datasets (with up to 61M images in our experiments). ## 2 Background on Diffusion Models Generative diffusion models are trained to learn a data distribution in a way that allows computing samples from the model itself. This is achieved by first training a _denoising_ model. In practice, given a (possibly conditional) data distribution \(q(\mathbf{x}|\mathbf{c})\), one constructs a Gaussian _forward process_ \[q(\mathbf{z}_{t}|\mathbf{x},\mathbf{c})=\mathcal{N}(\mathbf{z}_{t};\sqrt{\alpha_{t}}\mathbf{x},(1- \alpha_{t})\mathbf{I}) \tag{1}\] where \(\alpha_{t}\) is a monotonically decreasing function over \(t\in[0,1]\), usually pinned to \(\alpha_{0}\approx 1\) and \(\alpha_{1}\approx 0\). At each training step, given a random \(t\sim\mathrm{Uniform}(0,1)\), the neural network \(\mathbf{x}_{\theta}(\mathbf{z}_{t},t,\mathbf{c})\) must learn to map the noisy signal \(\mathbf{z}_{t}\) to the original (noiseless) \(\mathbf{x}\). Ho et al. (2020) showed that a loss function that works well in practice is a reweighted evidence lower bound (Kingma and Welling, 2013): \[L(\theta)=\mathbb{E}_{\mathbf{x},t,\mathbf{c}}\|\mathbf{\epsilon}_{\theta}(\mathbf{z}_{t},t, \mathbf{c})-\mathbf{c}\|^{2} \tag{2}\] where the neural network learns to infer the additive noise \(\epsilon\), as opposed to the image itself. Recovering the image is then trivial, since we can use the reparametrization trick (Kingma and Welling, 2013) with Eqn. 1 to obtain \(\mathbf{x}_{\theta}=\frac{1}{\sqrt{\alpha_{t}}}(\mathbf{z}_{t}-\sqrt{1-\alpha_{t}} \mathbf{\epsilon}_{\theta})\). After training, we repurpose the denoising neural network into a generative model by starting with Gaussian noise at the maximum noise level \(t=1\), i.e., \(\mathbf{z}_{1}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), then iteratively refining the noisy signal, gradually attenuating noise and amplifying signal, by repeatedly computing \[\hat{\mathbf{x}}_{t}=\frac{1}{\sqrt{\alpha_{t}}}(\mathbf{z}_{t}-\sqrt{1- \alpha_{t}}\mathbf{\epsilon}_{\theta}(\mathbf{z}_{t},t,\mathbf{c})) \tag{3}\] \[\mathbf{z}_{s}\sim q(\mathbf{z}_{s}|\mathbf{z}_{t},\hat{\mathbf{x}},\mathbf{c})\,\ \ \ s<t\, \tag{4}\] for which Ho et al. (2020) show that \(q(\mathbf{z}_{s}|\mathbf{z}_{t},x,\mathbf{c})\) can be obtained in closed form when \(s<t\). To sample with \(T\) denoising steps, we typically choose \(s\) to be \(\frac{T-1}{T}\), then \(\frac{T-2}{T}\), and so on, until reaching \(s=0\). At the last denoising step, we omit the step that adds noise again, and simply take the final \(\hat{\mathbf{x}}\) to be our sample. For single-image super-resolution, we used conditional diffusion models. The data distribution \(q(\mathbf{x},\mathbf{c})\) is comprised of high-resolution (HR) images \(\mathbf{x}\) and corresponding low-resolution (LR) images \(\mathbf{c}\). ## 3 Related Work Two general approaches to blind super-resolution involve _explicit_(Shocher et al., 2018; Liang et al., 2021; Yoo et al., 2022) and _implicit_(Patel et al., 2021; Yan et al., 2021) degradation modeling. Implicit degradation modeling entails learning the degradation process; however, this requires large datasets to generalize well (Liu et al., 2021). 
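Before continuing with degradation modeling, the following minimal PyTorch-style sketch grounds the diffusion background of Section 2 for the conditional super-resolution setting: one denoiser training step in which the network regresses the noise mixed into the HR image by the forward process (Eqs. 1-2). The `eps_model` signature, the cosine-style `alpha` schedule, and all names are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def alpha(t):
    # Illustrative monotone schedule with alpha(0) ~ 1 and alpha(1) ~ 0.
    return torch.cos(0.5 * torch.pi * t) ** 2

def sr_training_step(eps_model, hr, lr_upsampled):
    """One training step: predict the noise eps mixed into the HR image by Eq. (1)."""
    b = hr.shape[0]
    t = torch.rand(b, device=hr.device)                 # t ~ Uniform(0, 1)
    a_t = alpha(t).view(b, 1, 1, 1)
    eps = torch.randn_like(hr)
    z_t = a_t.sqrt() * hr + (1.0 - a_t).sqrt() * eps    # forward process, Eq. (1)
    eps_hat = eps_model(z_t, t, lr_upsampled)           # conditioned on the up-sampled LR image
    return F.mse_loss(eps_hat, eps)                     # regress the additive noise
```

At sampling time, Eq. (3) converts the predicted noise back into an image estimate and Eq. (4) draws the next, less noisy iterate; a corresponding sampling sketch, including test-time noise-conditioning augmentation, appears further below.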
The best results in the literature employ explicit degradation modeling, where the degradations are directly incorporated as data augmentation during training. Luo et al. (2021); Wang et al. (2021) produce the augmented conditioning images \(c\) by applying blur before downsampling the original HR image, and then adding noise and applying JPEG compression to the downsampled result. The Real-ESRGAN model (Wang et al., 2021) demonstrates that applying this degradation scheme _more than once_ leads to a LR distribution closer to those of images in the wild. These degradation schemes have been crucial for GAN-based methods to achieve state-of-the-art results. Other methods for super-resolution beyond GANs include diffusion models, and even simpler, non-generative models. The preliminary work of SRCNN (Dong et al., 2015) showed the superiority of deep convolutional neural networks over simple bicubic or bilinear upsampling. Dong et al. (2016); Shi et al. (2016) improved the efficiency of these results by learning a CNN that itself performs image upsampling. Further architectural and training innovations have since been found to deepen neural networks via residual connections (Kim et al., 2016; Lim et al., 2017; Ahn et al., 2018) and other architectures (Fan et al., 2017; Kim et al., 2016; Tai et al., 2017; Lai et al., 2017). Contrastive learning has also been applied to super-resolution (Wang et al., 2021; Yin et al., 2021). Attention-based networks have been proposed (Choi and Kim, 2017; Zhang et al., 2018); however, we still opt to explore a fully convolutional model as it can better generalize to unseen resolutions (Whang et al., 2022). Recent work on super-resolution has demonstrated the potential of image-conditional diffusion models (Saharia et al., 2022; Li et al., 2022), which were shown to be superior to regression-based models that cannot generate both sharp and diverse samples (Ho et al., 2022; Saharia et al., 2022). One advantage of diffusion models is their ability to capture the complex statistics of the visual world, as they can infer structure at scales well beyond those available in LR inputs. This is particularly important at larger magnification factors, where many different HR images may be consistent with a single LR image. By comparison, GAN models often struggle with mode collapse, thereby reducing diversity (Thanh-Tung and Tran, 2020). ## 4 Methodology SR3+ is a self-supervised diffusion model for blind, single-image super-resolution. Its architecture is a convolutional variant of that used in SR3, and hence more flexible with respect to image resolution and aspect ratio. During training, it obtains LR-HR image pairs by down-sampling high-resolution images to generate corresponding, low-resolution inputs. Robustness is achieved through two key augmentations, namely, composite parametric degradations during training (Wang et al., 2021;a), and noise conditioning augmentation (Ho et al., 2022), both during training and at test time, as explained below. ### Architecture Following Saharia et al. (2022), SR3+ uses a UNet architecture, but without the self-attention layers used for SR3. While self-attention has a positive impact on image quality, it makes generalization to different image resolutions and aspect ratios very difficult (Whang et al., 2022). We also adopt modifications used by Saharia et al. (2022) for the Efficient U-Net to improve training speed. Below we ablate the size of the architecture, demonstrating the performance advantages of larger models. 
### Higher-order degradations Self-supervision for super-resolution entails down-sampling HR images to obtain corresponding LR inputs. Ideally, one combines down-sampling kernels with other degradations that one expects to see in practice. Otherwise, one can expect a domain shift between training and testing, and hence poor zero-shot generalization to images in the wild. Arguably, this is a key point of failure of SR3, results of which are evident for ODD test data shown in Figure 1. SR3+ is trained with a data-augmentation pipeline that comprises multiple types of degradation, including image blur, additive noise, JPEG compression and down-sampling. While the use of multiple parametric deformations in super-resolution training pipelines are common (Zhang et al., 2021; Wang et al., 2021), Wang et al. (2021) found that applying repeated sequences of deformations, called _higher-order deformations_, has a substantial impact on ODD generalization. For simplicity and comparability to Real-ESRGAN, SR3+ uses the same degradation pipeline, but _without_ additive noise (see Figure 2). Empirically, we found in our preliminary experiments that noise conditioning augmentation (explained later) is better than including noise in the degradation pipeline. Training a 400M parameter model on the same dataset as Real-ESRGAN, but with noise in the degradations instead of noise conditioning augmentation, we obtain an FID(10k) score of 42.58 (vs. 36.28, see Table 1). For completeness, we now document Figure 2: The SR3+ data pipeline applies a sequence of degradations to HR training images (like Real-ESRGAN but without additive noise). To form the conditioning signal for the neural denoiser, we up-sample the LR image and applied noise conditioning augmentation. all the degradation hyperparameters. These should match those used by Wang et al. (2021). **Blur.** Four blur filters are used, i.e., Gaussian, generalized Gaussian, a plateau-based kernel, and a sinc (selected with probabilities 0.63, 0.135, 0.135 and 0.1). With probability \(\frac{9}{14}\) the Gaussians are isotropic, and anisotrpic otherwise. The plateau kernel is isotropic with probability \(0.8\). When anisotropic, kernels are rotated by a random angle in \((-\pi,\pi]\). For isotropic kernels, \(\sigma\in[0.2,3.0]\). For anisotropic kernels, \(\sigma_{x},\sigma_{y}\in[0.2,3.0]\). The kernel radius \(r\) is random between 3 and 11 pixels (with only with odd value). For the sinc-filter blur, \(w_{c}\) is randomly selected from \([\pi/3,\pi]\) when \(r\!<\!6\) and from \([\pi/5,\pi]\) otherwise. For generalized Gaussians, the shape parameter \(\beta\) is sampled from \([0.5,4.0]\); it is sampled from \([1.0,2.0]\) for the plateau filter. The second blur is omitted with probability 0.2; but when used, \(\sigma\in[0.2,1.5]\). **Resizing.** Images are resized in one of three (equiprobable) ways, i.e., area resizing, bicubic interpolation, or bilinear interpolation. The scale factor is random in \([0.15,1.5]\) for the first stage resize, and in \([0.3,1.2]\) for the second. **JPEG compression.** The JPEG quality factor is drawn randomly from \([30,95]\). In the second stage we also apply a sinc filter (described above), either before or after the JPEG compression (with equal probability). After two stages of degradations, as illustrated in Fig. 2, the image is resized using bicubic interpolation to the desired magnification between the original HR image and the LR degraded image. SR3+ is trained for \(4\times\) magnification. 
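As a rough illustration of the two-stage pipeline just described, here is a heavily simplified Pillow-based sketch that keeps only an isotropic Gaussian blur, a random rescale, and JPEG compression per stage. The omitted kernels (generalized Gaussian, plateau, sinc), the selection probabilities, and the function names are all simplifications and assumptions, not the actual training code.

```python
import io
import random
from PIL import Image, ImageFilter

def degrade_once(img, scale_range):
    """One simplified degradation stage: blur -> random rescale -> JPEG compression."""
    img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.2, 3.0)))
    s = random.uniform(*scale_range)
    w, h = img.size
    resample = random.choice([Image.BILINEAR, Image.BICUBIC, Image.BOX])  # bilinear/bicubic/area
    img = img.resize((max(1, int(w * s)), max(1, int(h * s))), resample)
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=random.randint(30, 95))
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def make_lr(hr_img, lr_size=(100, 100)):
    """Two degradation stages, then a bicubic resize to the LR training resolution (4x SR)."""
    img = degrade_once(hr_img, scale_range=(0.15, 1.5))   # first-stage scale range
    img = degrade_once(img, scale_range=(0.3, 1.2))       # second-stage scale range
    return img.resize(lr_size, Image.BICUBIC)
```

During training, the resulting LR image is then bicubically up-sampled back to the HR resolution to form the conditioning input (Fig. 2).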
### Noise Conditioning Augmentation Noise conditioning was first used in cascaded diffusion models (Ho et al., 2022; Saharia et al., 2022). It was introduced so that super-resolution models in a cascade can be self-supervised with down-sampling, while at test time it will receive input from the previous model in the cascade. Noise conditioning augmentation provided robustness to the distribution of inputs from the previous stage, even though the stages are trained independently. While the degradation pipeline should already improve robustness, it is natural to ask whether further robustness can be achieved by also including this technique. In essence, noise-conditioning augmentation entails addding noise to the up-sampled LR input, but also providing the noise level to the neural denoiser. At training time, for each LR image in a minibatch, it entails 1. Sample \(\tau\sim\mathrm{Uniform}(0,\tau_{\mathrm{max}})\). 2. Add noise to get \(\mathbf{c}_{\tau}\sim q(\mathbf{z}_{\tau}|\mathbf{c})\), reusing the marginal distribution of the diffusion forward process. 3. Condition the model on \(\mathbf{c}_{\tau}\) instead of \(\mathbf{c}\), and we also condition the model on (a positional embedding of) \(\tau\). The model learns to handle input signals at different noise levels \(\tau\). In practice, we set \(\tau_{\mathrm{max}}=0.5\); beyond this value, the input signal to noise ratio is too low for effective training. At test time, the noise level hyper-parameter in noise-conditioning augmentation, \(t_{\mathrm{eval}}\), provides a trade-off between alignment with the LR input and hallucination by the generative model. As \(t_{\mathrm{eval}}\) increases, more high-frequency detail is lost, so the model is forced to rely more on its knowledge of natural images than on the conditioning signal per se. We find that this enables the hallucination of realistic textures and visual detail. ## 5 Experiments SR3+ is trained with a combination of degradations and noise-conditioning augmentation on multiple datasets, and applied zero-shot to test data. We use ablations to determine the impact of the different forms of augmentation, of model size, and dataset size. Here, we focus on the blind super-resolution task with a \(4\times\) magnification factor. For baselines, we use SR3 (Saharia et al., 2022) and the previous state-of-the-art in blind super-resolution, i.e., Real-ESRGAN (Wang et al., 2021). Like SR3, the LR input up-sampled by \(4\times\) using bicubic interpolation. The output samples for SR3 and SR3+ are obtained using DDPM ancestral sampling (Ho et al., 2020) with 256 denoising steps. For simplicity and to train with continuous timesteps, we use the cosine log-SNR schedule introduced by Ho and Salimans (2022). **Training.** For fair comparison with Real-ESRGAN, we first train SR3+ on the datasets used to train Real-ESRGAN (Wang et al., 2021); namely, DF2K+OST (Agustsson and Timofte, 2017), a combination of Div2K (800 images), Flick2K (2650 images) and OST300 (300 images). To explore the impact of scaling, we also train on a large dataset of 61M images, combining a collection of in-house images with DF2K+OST. During training, following Real-ESRGAN, we extract a random \(400\!\times\!400\) crop for each image and then apply the degradation pipeline (Fig. 2). The degraded image is then resized to \(100\!\times\!100\) (for \(4\times\) magnification). 
LR images is then up-sampled using bicubic interpolation to \(400\times 400\) from which center crops yield \(256\!\times\!256\) images for training the \(64\!\times\!64\to 256\!\times\!256\) task. Since the model is convolutional, we can then apply it to arbitrary resolutions and aspect ratios at test time. For the results below, SR3+ and all ablations are trained on the same data with the same hyper-parameters. Note that SR3+ reduces to SR3 when the degradations and noise-conditioning augmentation are removed. All models were trained for 1.5M steps, using a batch size of 256 for models trained on DF2K+OST and 512 otherwise. We additionally consider two models sizes, with 40M and 400M weights. The smaller enables direct comparison to Real-ESRGAN, which also has about 40M parameters. The larger model exposes the impact of model scaling. **Testing.** For testing, as mentioned above, we focus on zero-shot application to test datasets disjoint from those used for training. In all experiments and ablations, we use the RealSR (Cai et al., 2019) v3 and DRealSR (Wei et al., 2020) datasets for evaluation. RealSR has 400 paired low-and-high-resolution images, from which we compute 25 random but aligned \(64\times 64\) and \(256\times 256\) crops per image pair. This yields a fixed test set of 10,000 image pairs. DRealSR contains more than 10,000 image pairs, so we instead extract \(64\times 64\) and \(256\times 256\) center crops for 10,000 random images. Model performance is assessed with a combination of PSNR, SSIM (Wang et al., 2004) and FID (10k) (Heusel et al., 2017). While reference-based metrics like PSNR and SSIM are useful for small magnification factors, at Figure 3: Sample comparison between Real-ESRGAN and various SR3+ models (ours). We observe that Real-ESRGAN often suffers from oversmoothing and excessive contrast, while SR3+ is capable of generating high-fidelity, realistic textures. magnifications of 4x and larger, especially when using a generative model and noise-conditioning augmentation in testing, the posterior distribution is complex, and one expects significant diversity in the output in contrast to regression models. For SR tasks with multi-modal posterios, e.g., at larger magnifications, reference-based metrics do not agree well with human preferences. While blurry images tend to minimize RMSE from ground truth, they are scored worse by human observers (Chen et al., 2018; Dahl et al., 2017; Menon et al., 2020; Saharia et al., 2022c). In particular PSNR and SSIM tend to over-penalize plausible but infered high-frequency detail that may not agree precisely with ground truth images. We nevertheless consider reconstruction metrics to remain important to evaluate SR models, as they reward alignment and this is a desirable property (especially on regions with less high-frequency details). In addition to PSNR and SSIM, we also report FID, which on sufficiently larger datasets provides a measure of aggregate statistical similarly with ground truth image data. This correlates better with human quality assessment. As generative models are applied to more difficult inputs, or with large amounts of NCA or larger magnifications, we will need to rely more on FID and similar measures. For in such cases, we will be relying on model inference to capture stats of natural images, and this requires a much larger model, as generative models are hard to learn. So one would expect larger data and larger models would perform better. 
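Before turning to the comparisons, here is a minimal sketch of the training-time noise-conditioning augmentation of Section 4.3: sample a level τ, noise the up-sampled LR input using the forward-process marginal, and condition the denoiser on both. The cosine-style schedule and the function name are illustrative assumptions.

```python
import torch

def add_noise_conditioning(lr_upsampled, tau_max=0.5):
    """Training-time noise conditioning augmentation: c_tau ~ q(z_tau | c), tau ~ U(0, tau_max)."""
    b = lr_upsampled.shape[0]
    tau = tau_max * torch.rand(b, device=lr_upsampled.device)           # step 1: sample tau
    a_tau = (torch.cos(0.5 * torch.pi * tau) ** 2).view(b, 1, 1, 1)     # forward-process marginal
    noise = torch.randn_like(lr_upsampled)
    c_tau = a_tau.sqrt() * lr_upsampled + (1.0 - a_tau).sqrt() * noise  # step 2: noise the input
    return c_tau, tau   # step 3: condition the denoiser on c_tau and (an embedding of) tau
```

At test time the same operation is applied with a fixed level \(t_{\mathrm{eval}}\) (0.1 in the evaluations below), trading alignment with the LR input against hallucinated detail.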
### Comparison with Real-ESRGAN and SR3 As previously discussed, we compare SR3+ models of different sizes with Real-ESRGAN, the previous state-of-the-art model on blind super-resolution, all trained on the same data. Moreover, in order to attain the best possible results in general, we compare our best SR3+ model trained on said data with an identical one that was instead trained on the much larger 61M-image dataset (and with twice the batch size). For evaluation, we perform a grid sweep over \(t_{\mathrm{eval}}\) from 0 to 0.4, with increments of 0.05, and report results with \(t_{\mathrm{eval}}=0.1\), which we consistently find to be the best value. We provide side-by-side comparisons in Figure 3, and show quantitative results in Table 1. We find that, with a 40M-parameter network, SR3+ achieves competitive FID scores with Real-ESRGAN, achieving better scores on RealSR but slightly worse on DRealSR. Qualitatively, it creates more realistic textures without significant oversmoothing or saturation, but it does worse for certain kinds of images where we care about accurate high-frequency detail, such as images with text. The results and realism of the images improve significantly with a 400M-parameter SR3+ model, outperforming Real-ESRGAN on FID scores when trained on the same dataset, and this gap is furthered widened simply by training on the much larger dataset. In the latter case, some of the failure modes of the earlier models (e.g., the text case) are also alleviated, and rougher textures are more coherent within the images. We provide additional samples in the Supplementary Material. SR3+ does not outperform on reference-based metrics (PSNR, SSIM) are slightly worse, but this expected from strong generative models with either larger magnification factors or larger noise-conditioning augmentation (where the generative model is forced to infer more details). This is also shown by prior work (Chen et al., 2018; Dahl et al., 2017; Menon et al., 2020; Saharia et al., 2022c). We verify this empirically in the samples shown in Figure 4 and Table 2, where, notably, SR3 attains better PSNR and SSIM scores, \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{SR Model (Parameter Count, Dataset)} & \multicolumn{2}{c|}{FID(10k) \(\downarrow\)} & \multicolumn{2}{c|}{PSNR \(\uparrow\)} & \multicolumn{2}{c|}{SSIM \(\uparrow\)} \\ \cline{2-7} & RealSR & DRealSR & RealSR & DRealSR & RealSR & DRealSR \\ \hline Real-ESRGAN & 34.21 & 37.22 & **25.14** & **25.85** & **0.7279** & **0.7808** \\ SR3+ (40M, DF2K + OST) & 31.97 & 40.26 & 24.84 & 25.18 & 0.6827 & 0.7201 \\ SR3+ (400M, DF2K + OST) & 27.34 & 36.28 & 23.84 & 24.36 & 0.662 & 0.719 \\ SR3+ (400M, 61M Dataset) & **24.32** & **32.37** & 24.89 & 25.74 & 0.6922 & 0.7547 \\ \hline \end{tabular} \end{table} Table 1: Quantitative comparison between Real-ESRGAN and SR3+ (ours). We achieve similar FID scores with a 40M parameter model, and find significant improvement upon increasing model and dataset sizes. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{SR Model (400M parameters, 61M Dataset)} & \multicolumn{2}{c|}{FID(10k) \(\downarrow\)} & \multicolumn{2}{c|}{PSNR\(\uparrow\)} & \multicolumn{2}{c|}{SSIM\(\uparrow\)} \\ \cline{2-7} & RealSR & DRealSR & RealSR & DRealSR & RealSR & DRealSR \\ \hline SR3+ & **24.32** & **32.37** & 24.89 & 25.74 & 0.6922 & 0.7547 \\ \hline SR3+ (no noise cond. aug.) 
& 34.20 & 49.93 & 22.34 & 22.28 & 0.6469 & 0.6994 \\ \hline SR3+ (no degradations) & 36.93 & 44.18 & 25.00 & 26.22 & 0.6824 & 0.7687 \\ \hline SR3 (i.e., ablating both) & 85.77 & 93.05 & **27.89** & **28.25** & **0.784** & **0.83** \\ \hline \end{tabular} \end{table} Table 2: Ablation study over SR3+ on the RealSR and DRealSR test sets. Note that ablating both components yields the SR3 model. but the model produces blurry results in the blind task. In the 4x magnification task starting from 64x64, \(p(\mathbf{x}|\mathbf{c})\) can be very multimodal (especially on high-frequency details), and these metrics overpenalize plausible but hallucinated high-frequency details. ### Ablation studies We now empirically demonstrate the importance of our main contributions, which we recall are (1) the higher-order degradation scheme and (2) noise conditioning augmentation. We conduct an ablation study using our strongest model, i.e., the 400M-parameter SR3+ model trained on the 61M-image dataset, as worse models break more dramatically upon removing said components. We train similar models as our strongest SR3+ model: one without noise conditioning augmentation, one without the higher-order degradations, and one with neither (which is equivalent to an SR3 model, though using the UNetv3 architecture (Saharia et al., 2022b) and in a larger dataset than in the original work). We then compare FID, PSNR and SSIM on the blind SR task, as before. Whenever using noise-conditioning augmentation, we set \(t_{\mathrm{eval}}=0.1\). Results are included in Table 2 and a sample comparison in Figure 4. Our results show that FID scores increase significantly upon the removal of either of our main contributions (by over 10 points in all cases). And, upon removing both, FID scores are much worse, as this metric punishes the consistent blurriness of SR3 when applied in the wild to out-of-distribution images. We also observe that, specifically without the higher-order degradations, we also observe some bluriness and a slight improvement across reconstruction metrics. With the SR3 model, which qualitatively appears to suffer most from blur in generations, both PSNR and SSIM improve significantly, and, interestingly enough, sufficiently to outperform Real-ESRGAN in both metrics and both evaluation datasets. ### Noise conditioning augmentation at test time Recall that, due to the use of noise conditioning augmentation, we introduce a degree of freedom \(t_{\mathrm{eval}}\) at sampling time that we are free to play with. Intuitively, it would seem that using \(t_{\mathrm{eval}}=0\) would be most appropriate, as adding noise removes some information from the conditioning low-resolution input. Empirically, however, we find that using a nonzero \(t_{\mathrm{eval}}\) can often lead to better results; especially on images where highly detailed textures are desirable. To demonstrate this, we present a comparison of FID scores across different values of \(t_{\mathrm{eval}}\) in Figure 6, for our two 400M-parameter SR3+ models (recall, one trained on DF2K+OST and one on the 61M Figure 4: Ablation samples (\(t_{eval}\!=\!0.1\)), illustrating the importance of higher-order degradations and noise conditioning augmentation. image dataset). We additionally include samples from the SR3+ model trained on the 61M-image dataset in Figure 5. For both models and both evaluation datasets, we find that FID scores can visibly drop when using noise conditioning augmentation at test time, with the best value often about \(t_{\mathrm{eval}}=0.1\). 
With the model trained on the 61M-image dataset, we curiously find that more aggressive noise conditioning augmentation can be used at test time while still attaining better FID scores than with \(t_{\mathrm{eval}}=0\). In our samples, we show that the effect of using small amounts of test-time noise conditioning augmentation has a subtle but beneficial effect: higher-quality textures appear and there is less bluriness than without any noise, and alignment to the conditioning image remains good or can even improve (e.g., the flower pot seemed to shift up without noise). As we increase \(t_{\mathrm{eval}}\), however, we begin to see initially small but increasingly more apparent misalignment to the conditioning image, as more high-frequency information is destroyed with increasing amounts of noise applied to the conditioning signal. This forces the model to rely on its own knowledge to hallucinate such details and textures, which can be beneficial in most cases (but less so with, e.g., text). ## 6 Conclusion In this work, we propose SR3+, a diffusion model for blind super-resolution. By combining two recent techniques for image enhancement, a higher order degradation scheme and noise conditioning augmentation, SR3+ achieves state-of-the-art FID scores across test datasets for blind super-resolution. We further improve quantitative and qualitative results significantly just by training on a much larger dataset. Unlike prior work, SR3+ is both robust to out-of-distribution inputs, and can generate realistic textures in a controllable manner, as test-time noise conditioning augmentation can force the model to rely on more of its own knowledge to infer high-frequency details. SR3+ excels at natural images, and with enough data, it performs reasonably well on other images such as those with text. We are most excited about SR3+ improving diffusion model quality and robustness more broadly, especially those relying on cascading (Ho et al., 2022), e.g., text-to-image models. SR3+ nevertheless has some limitations. When using noise conditioning augmentation, some failure modes can be observed such as gibberish text, and more training steps might needed for convergence as the task becomes more challenging than with conditioning signals that are always clean. We believe that models with larger capacity (i.e., parameter count), as well as improvements on neural architectures, could address these issues in future work. Figure 5: Samples from SR3+ (400M weights, 61M dataset) using different amounts of test-time noise conditioning augmentation, \(t_{eval}\). Figure 6: FID score comparisons for different amounts of test-time noise conditioning augmentation. We include results with two 400M-parameter SR3+ models, one trained on the DF2K+OST dataset, and another trained on the much larger 61M-image dataset.
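To show where the test-time noise level enters, the following rough sketch noises the conditioning image at level \(t_{\mathrm{eval}}\) and then runs the iterative refinement of Eq. (3). For brevity it uses a deterministic DDIM-style update in place of the stochastic ancestral step of Eq. (4), and the schedule, model signature, and step count are illustrative assumptions rather than the actual inference code.

```python
import torch

def alpha(t):
    # Illustrative cosine-style schedule (alpha(0) ~ 1, alpha(1) ~ 0).
    return torch.cos(0.5 * torch.pi * t) ** 2

@torch.no_grad()
def super_resolve(eps_model, lr_upsampled, t_eval=0.1, steps=256):
    """Super-resolve with test-time noise-conditioning augmentation at level t_eval."""
    b, dev = lr_upsampled.shape[0], lr_upsampled.device
    tau = torch.full((b,), t_eval, device=dev)
    a_tau = alpha(tau).view(b, 1, 1, 1)
    c = a_tau.sqrt() * lr_upsampled + (1.0 - a_tau).sqrt() * torch.randn_like(lr_upsampled)

    z = torch.randn_like(lr_upsampled)                      # z_1 ~ N(0, I)
    x_hat = z
    for i in range(steps, 0, -1):
        t = torch.full((b,), i / steps, device=dev)
        s = torch.full((b,), (i - 1) / steps, device=dev)
        a_t = alpha(t).view(b, 1, 1, 1).clamp(min=1e-3)     # avoid division by ~0 at t = 1
        a_s = alpha(s).view(b, 1, 1, 1)
        eps_hat = eps_model(z, t, c, tau)                   # denoiser also sees the noise level tau
        x_hat = (z - (1.0 - a_t).sqrt() * eps_hat) / a_t.sqrt()   # Eq. (3)
        z = a_s.sqrt() * x_hat + (1.0 - a_s).sqrt() * eps_hat     # deterministic refinement step
    return x_hat.clamp(-1.0, 1.0)
```

Larger values of `t_eval` destroy more high-frequency information in the conditioning signal and force the model to rely on its own knowledge, which is the trade-off discussed in Section 5.3.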
2306.04642
DiffusionShield: A Watermark for Copyright Protection against Generative Diffusion Models
Recently, Generative Diffusion Models (GDMs) have showcased their remarkable capabilities in learning and generating images. A large community of GDMs has naturally emerged, further promoting the diversified applications of GDMs in various fields. However, this unrestricted proliferation has raised serious concerns about copyright protection. For example, artists including painters and photographers are becoming increasingly concerned that GDMs could effortlessly replicate their unique creative works without authorization. In response to these challenges, we introduce a novel watermarking scheme, DiffusionShield, tailored for GDMs. DiffusionShield protects images from copyright infringement by GDMs through encoding the ownership information into an imperceptible watermark and injecting it into the images. Its watermark can be easily learned by GDMs and will be reproduced in their generated images. By detecting the watermark from generated images, copyright infringement can be exposed with evidence. Benefiting from the uniformity of the watermarks and the joint optimization method, DiffusionShield ensures low distortion of the original image, high watermark detection performance, and the ability to embed lengthy messages. We conduct rigorous and comprehensive experiments to show the effectiveness of DiffusionShield in defending against infringement by GDMs and its superiority over traditional watermarking methods. The code for DiffusionShield is accessible in https://github.com/Yingqiancui/DiffusionShield.
Yingqian Cui, Jie Ren, Han Xu, Pengfei He, Hui Liu, Lichao Sun, Yue Xing, Jiliang Tang
2023-05-25T11:59:28Z
http://arxiv.org/abs/2306.04642v4
# DiffusionShield: A Watermark for Copyright Protection against Generative Diffusion Models ###### Abstract Recently, Generative Diffusion Models (GDMs) have showcased their remarkable capabilities in learning and generating images. A large community of GDMs has naturally emerged, further promoting the diversified applications of GDMs in various fields. However, this unrestricted proliferation has raised serious concerns about copyright protection. For example, artists including painters and photographers are becoming increasingly concerned that GDMs could effortlessly replicate their unique creative works without authorization. In response to these challenges, we introduce a novel watermarking scheme, DiffusionShield, tailored for GDMs. DiffusionShield protects images from copyright infringement by GDMs through encoding the ownership information into an imperceptible watermark and injecting it into the images. Its watermark can be easily learned by GDMs and will be reproduced in their generated images. By detecting the watermark from generated images, copyright infringement can be exposed with evidence. Benefiting from the uniformity of the watermarks and the joint optimization method, DiffusionShield ensures low distortion of the original image, high watermark detection performance, and the ability to embed lengthy messages. We conduct rigorous and comprehensive experiments to show the effectiveness of DiffusionShield in defending against infringement by GDMs and its superiority over traditional watermarking methods. ## 1 Introduction Generative diffusion models (GDMs), such as Denoising Diffusion Probabilistic Models (DDPM) Ho et al. (2020) have shown their great potential in generating high-quality images. This has also led to the growth of more advanced techniques, such as DALL-E2 Ramesh et al. (2022), Stable Diffusion Rombach et al. (2022), and ControlNet Zhang and Agrawala (2023). In general, a GDM learns the distribution of a set of collected images, and makes samplings to generate images that follow the learned distribution. As these techniques become increasingly popular, concerns have arisen regarding the copyright protection of creative works shared on the Internet. For instance, a fashion company may invest significant resources in designing a new fashion. After the company posts the pictures of this fashion to the public for browsing, an unauthorized entity can train their GDMs to mimic its style and appearance, generating similar images and resulting in products. This infringement highlights the pressing need for copyright protection mechanisms. To provide protection for creative works, watermark techniques such as Cox et al. (2002); Podlichuk and Delp (2001); Zhu et al. (2018); Navas et al. (2008); Yu et al. (2021) are often deployed, which aim to inject (invisible) watermarks into images and then detect them to track the malicious copy and accuse the infringement. However, directly applying these existing methods to GDMs still faces tremendous challenges. Indeed, since existing watermark methods have not specifically been designed for GDMs, their watermarks in the original images could be eliminated by the denoising process of GDMs so they could disappear in GDM-generated images. Then, the infringement cannot be effectively verified and accused. 
As empirical evidence in Figure 1, we train two popular GDMs on a CIFAR10 dataset whose samples are watermarked by two representative watermark methods (Navas et al., 2008; Zhu et al., 2018), and we try to detect the watermarks in the GDM-generated images. The result demonstrates that the watermarks from these methods are either hardly learned and reproduced by GDM (e.g., FRQ Navas et al. (2008)), or require a very large budget (the extent of image distortion) to partially maintain the watermarks (e.g., HiDDeN (Zhu et al., 2018)). Therefore, dedicated efforts are still greatly desired to developing the watermark technique tailored for GDMs. In this work, we argue that one critical factor that causes the infle-ficacy of these existing watermark techniques is the inconsistency of watermark patterns. It means that, in these methods (Navas et al., 2008; Zhu et al., 2018), the watermark in each image for each user is distinct. Thus, GDMs can hardly learn the distribution of watermarks and reproduce them in the generated samples. To address this challenge, we propose **DiffusionShield** to successfully enhance the "_pattern uniformity_" (Section 3.2) of the watermarks to make them consistent and easily reproduced by GDMs. Different from existing methods, DiffusionShield manages to increase this "_pattern uniformity_" by designing **blockwise watermarks** that are divided into basis patches. Each user can have a specified sequence of these basis patches to watermark his / her images, identifying the unique copyright. As a result, the watermarks will repeatedly appear in the training set of GDMs, to make them be reproducible and detectable. Furthermore, DiffusionShield introduces a joint optimization method for basis patches and the watermark detector to enhance each other, which achieves protection with a smaller budget and higher accuracy. In addition, once the watermarks are obtained, DiffusionShield does not require re-training when there is an influx of new users and images. As a result, DiffusionShield enables great flexibility to accommodate multiple users. To the best of our knowledge, this work is the first one that accomplishes the goal to protect the copyright of data against GDMs via watermark techniques. ## 2 Related Work ### Generative Diffusion Models (GDM)s In recent years, Generative Diffusion Models (GDMs) have made significant strides. A breakthrough in GDMs is achieved by DDPM Dhariwal and Nichol (2021), which demonstrates great superiority in generating high-quality images. The work of Ho and Salimans (2022) further advances the field by proposing a novel approach that eliminates the need for classifiers in the training process of GDMs. The work Song et al. (2020) presents Denoising Diffusion Implicit Models (DDIMs), a variant of GDMs with improved efficiency in sampling. Besides, techniques such as Rombach et al. (2022) achieve high-resolution image synthesis and text-to-image synthesis by applying the diffusion processes in the latent space of images. These advancements underscore the growing popularity and efficacy of GDM-based techniques. To train Generative Diffusion Models (GDMs), many existing methods rely on collecting a significant amount of training data from public resources (Deng et al., 2009; Yu et al., 2015; Guo et al., 2016). However, there is a concern that if a GDM is trained on copyrighted material and produces outputs that are substantially similar to the original copyrighted works, it could potentially infringe on the copyright owner's rights. 
This issue has already garnered public attention Vincent (2023). This paper focuses on mitigating this risk by employing a watermarking technique to detect copyright infringements associated with GDMs. Figure 1: Watermark detection accuracy (%) on GDM-generated images and the corresponding budget (\(l_{2}\) norm) of watermarks. ### Image Watermarking Image watermarking involves embedding invisible information into the carrier images and is commonly used to identify ownership of the copyright. Traditional watermarking techniques include spatial domain methods and frequency domain methods Cox et al. (2002); Navas et al. (2008); Shih and Wu (2003); Kumar (2020). These techniques embed watermark information by modifying the pixel values Cox et al. (2002), frequency coefficients Navas et al. (2008), or a combination of both Shih and Wu (2003); Kumar (2020). In recent years, various digital watermarking approaches based on Deep Neural Networks (DNNs) Zhu et al. (2018); Zhang et al. (2019); Tancik et al. (2020); Weng et al. (2019) have been proposed. For example, an autoencoder-based network architecture is introduced to conduct the embedding and extracting of watermarks Zhu et al. (2018), while the structure of GAN is utilized Zhang et al. (2019) to realize high-capacity imperceptible watermark embedding. Those techniques are then further generalized to physical photographs Tancik et al. (2020) or videos Weng et al. (2019). Notably, there are existing studies focusing on watermarking generative neural networks, such as GANs (Goodfellow et al., 2020) and image processing networks (Sehwag et al., 2022). Different from our work, their goal is to safeguard the _intellectual property (IP) of generative neural networks_ or to make synthetic images distinguishable from natural images to _prevent the spread of visual misinformation_. To accomplish their goals, the works Wu et al. (2020); Yu et al. (2021); Zhao et al. (2023); Zhang et al. (2020) embed imperceptible watermarks into every output of a generative model, enabling the defender to determine whether an image was generated by a specific model or not. Various approaches have been employed to inject watermarks, including reformulating the training objectives of the generative models Wu et al. (2020), modifying the model's training data Yu et al. (2021); Zhao et al. (2023), or directly applying a watermark embedding process to the output images before they are presented to end-users Zhang et al. (2020). Additionally, a backdoor trigger is extended to protect data by Li et al. (2022). However, it has been designed solely for classification models instead of GDMs, and cannot be applied to our task since it does not encode any text information about the copyright and requires the access to the suspicious model. In this paper, we delve into the development of watermarks specifically designed for GDMs with the principal aim of safeguarding the copyright of data against potential infringement by these GDMs. This could be a more difficult problem as the data owners cannot control the training and the inference process of GDMs. ## 3 Method In this section, we first formally define our studied problem and the key notations. Next, we point out that the "pattern uniformity" is a key factor for the watermark to be reproduced in GDM-generated samples. 
Based on this finding, we introduce the details for the two essential components of our watermarking method DiffusionShield, i.e., the blockwise watermark with pattern uniformity and the joint optimization, respectively. ### Problem Statement In this work, we consider there are two roles: (1) **a data owner** who holds the copyright of the data, releases them solely for public browsing, and aspires to protect them from being replicated by GDMs, and (2) **a data offender** who employs a GDM on the released data to appropriate the creative works and infringe the copyright. In reality, we often collect data from multiple resources to train GDMs. Figure 2: An overview of watermarking against GDM that consists of two stages. Thus, we consider a scenario where there are multiple owners to protect their copyright against GDMs by encoding the copyright information into watermarks. We start by defining the one-owner case, and then extend the discussion to the multiple-owner case: \(\bullet\)**Protection for one-owner case.** An image owner aims to release \(n\) images, \(\{\mathbf{X}_{1:n}\}\), strictly for browsing. Each image \(\mathbf{X}_{i}\) has a shape of \((U,V)\) where \(U\) and \(V\) are the height and width, respectively. As shown in Figure 2, the protection process generally comprises two stages: 1) _a protection stage_ when the owner encodes the copyright information into the invisible watermark and adds it to the protected data; and 2) _an audit stage_ when the owner examines whether a GDM-generated sample infringes upon their data. In the following, we introduce crucial definitions and notations. 1. _The protection stage_ happens before the owner releases \(\{\mathbf{X}_{1:n}\}\) to the public. To protect the copyright, the owner encodes the copyright message \(\mathbf{M}\) into an invisible watermark \(\mathbf{W}_{i}\), and adds the watermark into \(\mathbf{X}_{i}\) to get a protected data \(\tilde{\mathbf{X}}_{i}=\mathbf{X}_{i}+\mathbf{W}_{i}\). \(\mathbf{M}\) can contain information like texts which can signify the owners' unique copyright. \(\tilde{\mathbf{X}}_{i}\) and \(\mathbf{X}\) appear similar in human eyes because the budget of the watermark is restrained by \(\|\mathbf{W}_{i}\|_{p}\leq\epsilon\). Hence, the watermark does not detrimally affect normal browsing. Instead of releasing \(\{\mathbf{X}_{1:n}\}\), the owner releases the protected \(\{\tilde{\mathbf{X}}_{1:n}\}\) for public browsing. 2. _The audit stage_ refers to the scenario that the owner finds suspicious images which potentially offend the copyright of their images, and they scrutinize whether these images are generated by GDMs from their released data. We assume that the data offender collects a dataset \(\{\mathbf{X}_{1:N}^{\mathcal{G}}\}\) that contains the protected images \(\{\tilde{\mathbf{X}}_{1:n}\}\), i.e. \(\{\tilde{\mathbf{X}}_{1:n}\}\subset\{\mathbf{X}_{1:N}^{\mathcal{G}}\}\) where \(N\) is the total number of both protected and unprotected images, and trains a GDM, \(\mathcal{G}\), from scratch to generate images, \(\mathbf{X}_{\mathcal{G}}\), which mimics the protected images. If \(\tilde{\mathbf{X}}_{\mathcal{G}}\) contains the copyright information of the data owner, once \(\mathbf{X}_{\mathcal{G}}\) is inputted to a decoder \(\mathcal{D}\), the copyright message should be decoded by \(\mathcal{D}\). Notably, the data owner is not required to have access to \(\mathcal{G}\) during this stage. 
\(\bullet\)**Protection for multiple-owner case.** When there are \(K\) data owners to protect their distinct sets of images, we denote their sets of images as \(\{\mathbf{X}_{1:n}^{k}\}\) where \(k=1,...,K\). Following the methodology of one-owner case, each owner can re-use the same encoding process and decoder to encode and decode distinct messages in different watermarks, \(\mathbf{W}_{i}^{k}\), which signifies their specific copyright messages \(\mathbf{M}^{k}\). The protected version of images is denoted by \(\tilde{\mathbf{X}}_{i}^{k}=\mathbf{X}_{i}^{k}+\mathbf{W}_{i}^{k}\). Then the protected images, \(\{\tilde{\mathbf{X}}_{1:n}^{k}\}\), can be released by their respective owners for public browsing, ensuring their copyright is maintained. More details about the two protection cases can be found in Appendix A. ### Pattern Uniformity In this subsection, we uncover one important factor which we called "_pattern uniformity_" that could be an important reason for the failure of existing watermark techniques in GDMs. Before our work, there are previous studies Sehwag et al. (2022), Um and Ye (2023), Daras et al. (2023) suggesting that GDMs tend to learn data samples from high probability density regions in the data space and ignore the low probability density regions. However, many existing watermarks such FRQ Navas et al. (2008) and HiDDeN Zhu et al. (2018) can only generate distinct watermarks for different data samples without any relation between each other. In other words, their generated watermarks are dispersed and located in low-density areas in the data space. As a result, they cannot be effectively extracted and learned by GDMs. Therefore, in our work, we formally define the "pattern uniformity" as the consistency of different watermarks injected for different samples: \[Z=1-\frac{1}{n}\sum_{i=1}^{n}\left\|\frac{\mathbf{W}_{i}}{\left\|\mathbf{W}_{i} \right\|_{2}}-\mathbf{W}_{mean}\right\|_{2}\text{, where }\mathbf{W}_{mean}=\frac{1}{n}\sum_{i=1}^{n}\frac{\mathbf{W}_{i}}{\left\|\mathbf{W}_{i} \right\|_{2}} \tag{1}\] where \(Z\) inversely corresponds to the standard deviation of normalized watermarks. A larger \(Z\) represents less diverse and higher pattern uniformity. We further conduct experiments to illustrate the importance of this "pattern uniformity". In the experiment shown in Figure 3, we test the ability of DDPM Ho et al. (2020) to learn watermarks with Figure 3: Pattern uniformity vs. watermark detection rate. different pattern uniformity. The watermarks \(\mathbf{W}_{i}\) are random pictures whose pixel value is re-scaled by the budget \(\sigma\) to a limited range, and the watermarked images are \(\tilde{\mathbf{X}}_{i}=\mathbf{X}_{i}+\sigma\times\mathbf{W}_{i}\). More details about the settings for this watermarks and the detector can be found in Appendix B.1. Figure 3 illustrates a positive correlation between the watermark detection rate in the GDM-generated images and the pattern uniformity, which implies that higher pattern uniformity facilitates better watermark reproduction. Motivated by this finding, we propose a novel watermarking approach characterized by high pattern uniformity, specifically designed to enhance protection against GDMs. ### Watermarks and Decoding Watermarks In this subsection, we introduce our proposed approach, referred as DiffusionShield. This model is designed to resolve the problem of inadequate reproduction of prior watermarking approaches in GDM-generated images. 
It adopts a blockwise watermarking approach to augment pattern uniformity, which improves the reproduction of watermarks in generated images and enhances flexibility. **Blockwise watermarks.** In DiffusionShield, we use the sequence of _basis patches_ to encode the textual copyright message \(\mathbf{M}\). In detail, the message \(\mathbf{M}\) can be converted into a sequence of binary numbers by predefined rules like ASCII. To condense the sequence's length, we convert the binary sequence into a \(B\)-nary sequence, denoted as \(\{\mathbf{b}_{1:m}\}\), where \(m\) is the message length and \(B\)-nary represents different numeral systems like quarternary (\(B=4\)) and octal (\(B=8\)). Accordingly, DiffusionShield partitions the whole watermark \(\mathbf{W}\) into a sequence of \(m\) patches, \(\{\mathbf{w}_{1:m}\}\), and each patch is chosen from a set of basis patch \(\{\mathbf{w}^{(1:B)}\}\). The set \(\{\mathbf{w}^{(1:B)}\}\) has \(B\) basis patch candidates with a shape \((u,v)\), which represent different values of the \(B\)-nary bits. The sequence of \(\{\mathbf{w}_{1:m}\}\) denotes that of \(B\)-nary bits \(\{\mathbf{b}_{1:m}\}\) derived from \(\mathbf{M}\). For example, as depicted in Figure 4, we have four basis patches \((B=4)\), and each of the patches has a unique pattern. To encode the copyright message \(\mathbf{M}\) = "37th NeurIPS2023", we first convert it into binary sequence "00110011 00110111 01110100..." based on ASCII, and transfer it into quarternary sequence \(\{\mathbf{b}_{1:m}\}\), "030303131310...". Then we concatenate these basis patches in the order of \(\{\mathbf{b}_{1:m}\}\) to get the complete watermark \(\mathbf{W}\) and add \(\mathbf{W}\) to the images from the data owner. Once the data offender uses a GDM to learn from it, the watermarks will appear on the generated images, serving as evidence of copyright infringement. **Decoding the watermarks.** To detect the watermark and decode the message from the watermark, DiffusionShield employs a decoder, \(\mathcal{D}_{\theta}\), which is a classifier that can decode \(\mathbf{w}_{i}\) into a bit \(\mathbf{b}_{i}\). Here, \(\theta\) is the parameter of the classifier. \(\mathcal{D}_{\theta}\) accepts an image block, \(\mathbf{x}\), which is watermarked by a basis patch as input and outputs the category of the basis patch, i.e., \(\mathbf{b}_{i}=\mathcal{D}_{\theta}(\mathbf{x}_{i}+\mathbf{w}_{i})\). The sequence \(\{\hat{\mathbf{w}}_{1:m}\}\) in the generated sample is classified into \(\{\hat{\mathbf{b}}_{1:m}\}=\{\mathcal{D}_{\theta}(\hat{\mathbf{x}}_{i}+\hat{\mathbf{w}}_{ i})|i=1,...,m\}\), which is the \(B\)-nary message that we embed into the watermark. With the decoded message, we can accurately identify the owner of the data, thereby confirming its origin. Our watermark can be reproduced in the generated samples when the protected data is trained by a GDM. In the generated samples, if the copyright information can be decoded by \(\mathcal{D}_{\theta}\), we can verify the infringement of GDM. _Remarks._ From the discussion above, it is evident that the designed watermarks have higher uniformity. It is because each user has the same watermark in their images. Therefore, these basis blocks and watermarks are more likely to be learned by GDMs. Additionally, DiffusionShield demonstrates remarkable flexibility when applied to multiple-owner scenarios since the basis patches and decoder can be reused by new owners. 
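As a small, concrete illustration of this encoding, the sketch below converts a text message into quaternary digits and tiles one basis patch per digit into a full-size watermark for a CIFAR-sized image. The random basis patches are placeholders (in DiffusionShield they are jointly optimized with the decoder, Section 3.4), and every name, shape, and budget here is an assumption chosen for illustration.

```python
import numpy as np

def message_to_digits(message, B=4):
    """Convert a text message to a B-nary digit sequence via its 8-bit ASCII codes."""
    bits = "".join(f"{byte:08b}" for byte in message.encode("ascii"))
    step = int(np.log2(B))
    return [int(bits[i:i + step], 2) for i in range(0, len(bits), step)]

def build_watermark(digits, basis_patches, image_shape=(32, 32)):
    """Tile one (u, v) basis patch per digit, row-major, into an image-sized watermark W."""
    u, v = basis_patches[0].shape[:2]
    U, V = image_shape
    W = np.zeros((U, V) + basis_patches[0].shape[2:], dtype=np.float32)
    cols = V // v
    for i, d in enumerate(digits[: (U // u) * cols]):
        r, c = divmod(i, cols)
        W[r * u:(r + 1) * u, c * v:(c + 1) * v] = basis_patches[d]
    return W

# Example: B = 4 basis patches of shape (4, 4, 3) under an l_inf budget of 1/255.
eps = 1.0 / 255.0
basis = [np.zeros((4, 4, 3), np.float32)]                       # w^(1) is fixed to zero
basis += [np.random.uniform(-eps, eps, (4, 4, 3)).astype(np.float32) for _ in range(3)]
digits = message_to_digits("37th NeurIPS2023")                  # 128 bits -> 64 quaternary digits
W = build_watermark(digits, basis, image_shape=(32, 32))        # protected image: X_tilde = X + W
```

A 32x32 image tiled with 4x4 patches holds exactly 64 digits (128 bits), matching the message length used for CIFAR in Section 4; and because every protected image of an owner carries the same W, the pattern uniformity Z of Eq. (1) is maximal.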
Once the watermarks are generated and the decoder is obtained, new users can form the new sequence of these basis patches without re-training. ### Jointly Optimize Watermark and Decoder While pattern uniformity facilitates the reproduction of watermarks in GDM-generated images, it does not guarantee the detection performance of the decoder, \(\mathcal{D}_{\theta}\). Therefore, we further propose a joint optimization method to search for the optimal basis patch patterns and obtain the optimized detection decoder in this subsection. Ideally, the basis patches and the decoder should satisfy: \[\mathbf{b}^{(i)}=\mathcal{D}_{\theta}\left(\mathbf{p}+\mathbf{w}^{(i)}\right)\text{for }\forall\ i\in\{1,2,...,B\}, \tag{2}\] Figure 4: An \(8\times 8\) sequence of basis patches encoded with message ”030303131310...”. Different patterns represent different basis patches. where \(\mathbf{w}^{(i)}\) is one of the \(B\) basis patch candidates, \(\mathbf{b}^{(i)}\) is the correct label for \(\mathbf{w}^{(i)}\), and \(\mathbf{p}\) can be a random block with the same shape as \(\mathbf{w}^{(i)}\) cropped from any image. The ideal decoder, capable of accurately predicting all the watermarked blocks, ensures that all embedded information can be decoded from the watermark. To increase the detection performance of the decoder, we simultaneously optimize the basis patches and the decoder using the following bi-level objective: \[\min_{\mathbf{w}^{(1)}:B}\min_{\theta}\mathbb{E}\left[\sum_{i=1}^{B}- \mathds{1}\left[\mathcal{D}_{\theta}\left(\mathbf{p}+\mathbf{w}^{(i)}\right)=\mathbf{b}^{ (i)}\right]\log\left(\mathcal{D}_{\theta,(i)}\left(\mathbf{p}+\mathbf{w}^{(i)}\right) \right)\right]\text{ s.t. }\|\mathbf{w}^{(i)}\|_{\infty}\leq\epsilon, \tag{3}\] where the inner formulation is the cross-entropy loss for the classification of the basis patches, \(\mathds{1}\) is the indicator function, and \(\mathcal{D}_{\theta,i}\) is the softmax probability for the \(i\)-th class. The \(l_{\infty}\) budget is constrained by \(\epsilon\). To reduce the number of categories of basis patches, we set \(\mathbf{w}^{(1)}=\mathbf{0}\), which means that the blocks without watermark should be classified as \(\mathbf{b}=1\). Thus, the bi-level optimization problem can be rewritten as: \[\left\{\begin{aligned} \theta^{*}&=\operatorname*{arg \,min}_{\theta}\mathbb{E}\left[\sum_{i=1}^{B}-\mathds{1}\left[\mathcal{D}_{ \theta}\left(\mathbf{p}+\mathbf{w}^{(i)}\right)=\mathbf{b}^{(i)}\right]\log\left(\mathcal{ D}_{\theta,(i)}\left(\mathbf{p}+\mathbf{w}^{(i)}\right)\right)\right]\\ \mathbf{w}^{(2:B),*}&=\operatorname*{arg\,min}_{\mathbf{w} ^{(2:B)}}\mathbb{E}\left[\sum_{i=2}^{B}-\mathds{1}\left[\mathcal{D}_{\theta^{* }}\left(\mathbf{p}+\mathbf{w}^{(i)}\right)=\mathbf{b}^{(i)}\right]\log\left(\mathcal{D}_{ \theta^{*},(i)}\left(\mathbf{p}+\mathbf{w}^{(i)}\right)\right)\right]\text{ s.t. }\|\mathbf{w}^{(i)}\|_{ \infty}\leq\epsilon\end{aligned}\right. \tag{4}\] The upper-level objective aims to increase the performance of the classifier \(\mathcal{D}_{\theta}\), while the lower-level objective optimizes the basis patches to facilitate their detection by the decoder. By the two levels of the objectives, the basis patches and the decoder potentially promote each other to achieve higher accuracy on a smaller budget. To ensure the basis patches can be adapted to various image blocks and thereby increase their flexibility, we use randomly cropped image blocks as the host images in the training process of the basis patches and decoder. 
More details about the algorithm of the joint optimization can be found in Appendix C.

## 4 Experiment

In this section, we assess the efficacy of DiffusionShield across various budgets, datasets, and protection scenarios. We first introduce our experimental setup in Section 4.1. In Section 4.2, we evaluate and analyze DiffusionShield in terms of its protection performance and invisibility during both the protection and audit stages. We then further investigate DiffusionShield in Sections 4.3 to 4.5 in terms of its flexibility and efficacy in multiple-user cases, its capacity for message length, and its robustness against image corruptions and GDM sampling accelerators, respectively.

### Experimental Settings

**Datasets, baselines and GDM.** We conduct the experiments on three datasets and compare DiffusionShield with four baseline methods. The datasets include CIFAR10 and CIFAR100, both with \((U,V)=(32,32)\), and STL10 with \((U,V)=(64,64)\). The baseline methods include a simplified version of DiffusionShield without joint optimization called Image Blending (IB), DWT-DCT-SVD based watermarking in the frequency domain (FRQ) Navas et al. (2008), HiDDeN Zhu et al. (2018), and DeepFake Fingerprint Detection (DFD) Yu et al. (2021) (which is designed for DeepFake detection and adapted to our data protection goal). In the audit stage, we use the improved DDPM proposed by Nichol and Dhariwal (2021) as the GDM to train on the watermarked data. More details about the baselines and the improved DDPM are in Appendix B.3 and B.4, respectively.

**Evaluation metrics.** In our experiments, we generate \(T\) images from each GDM and decode copyright messages from them. We compare the effectiveness of the watermarks in terms of their invisibility, their decoding performance, and their capacity to embed longer messages:

* **(Perturbation) Budget.** We use the LPIPS Zhang et al. (2018) metric together with the \(l_{2}\) and \(l_{\infty}\) differences to measure the visual discrepancy between the original and watermarked images. Lower values of these metrics indicate better invisibility.
* **(Detection) Accuracy.** Following Yu et al. (2021); Zhao et al. (2023), we apply bit accuracy to evaluate the correctness of the copyright messages encoded in the generated images. To compute bit accuracy, we first transform the ground-truth \(B\)-nary message \(\{\mathbf{b}_{1:m}\}\) and the decoded message \(\{\mathbf{\hat{b}}_{1:m}\}\) back into binary messages \(\{\mathbf{b}^{\prime}_{1:m\log_{2}B}\}\) and \(\{\mathbf{\hat{b}}^{\prime}_{1:m\log_{2}B}\}\). Then the bit accuracy for one watermark is calculated as: \[\text{Bit-Acc}\equiv\frac{1}{m\log_{2}B}\sum_{k=1}^{m\log_{2}B}\mathds{1}\left(\mathbf{b}^{\prime}_{k}=\mathbf{\hat{b}}^{\prime}_{k}\right).\] (5) The worst-case bit accuracy is expected to be 50%, which is equivalent to random guessing.
* **Message length.** The length of the encoded message reflects the encoding capacity. Apart from FRQ and HiDDeN, we encode a 128-bit message into each image of CIFAR10 and CIFAR100, and a 512-bit message into each image of STL10. To ensure the bit accuracy of FRQ and HiDDeN, we use 32 bits for CIFAR images and 64 bits for STL10.

**Implementation details**. We set \((u,v)=(4,4)\) as the shape of the basis patches and set \(B=4\) for quaternary messages. We use ResNet He et al. (2016) as the decoder to classify the different basis patches. For the joint optimization, we use 5-step PGD Madry et al.
(2017) limited by \(l_{\infty}\leq\epsilon\) to update the basis patches and use SGD to optimize the decoder. For training the GDMs, we consider the scenario where the data offender may collect and train the watermarked images and non-watermarked images together, as mentioned in Section 3.1. Hence, in all the datasets, we designate one random class of images as watermarked images, while treating other classes as unprotected images. To generate images of the protected class, we either 1) directly use a **class-conditional** GDM to generate images from the specified class, or 2) apply an object classifier to filter images of the protected class from the **unconditional** GDM's output. The bit accuracy on unconditionally generated images may be lower than that of the conditionally generated images because object classifiers cannot achieve 100% accuracy. More details are presented in Appendix B.2.

### Results on Protection Performance against GDM

Table 1: Perturbation budgets, bit accuracy (%), and pattern uniformity of DiffusionShield and the baseline methods on CIFAR10, CIFAR100 and STL10.

| Dataset | Metric | | IB | FRQ | HiDDeN | DFD | Ours (1/255) | Ours (2/255) | Ours (4/255) | Ours (8/255) |
|---|---|---|---|---|---|---|---|---|---|---|
| CIFAR10 | Budget | \(l_{\infty}\) | 7/255 | 13/255 | 65/258 | 28/255 | **1/255** | 2/255 | 4/255 | 8/255 |
| | | \(l_{2}\) | 0.52 | 0.70 | 2.65 | 1.21 | **0.18** | 0.36 | 0.72 | 1.43 |
| | | LPIPS | 0.01582 | 0.01790 | 0.14924 | 0.07095 | **0.00005** | 0.00020 | 0.00120 | 0.01470 |
| | Accuracy | Released | 87.2767 | 99.7875 | 99.0734 | 95.7763 | 99.6955 | 99.4966 | 99.9909 | **99.9933** |
| | | Cond. | 87.4840 | 57.7469 | 98.9250 | 93.5703 | 99.8992 | 99.9945 | **100.0000** | 99.9996 |
| | | Uncond. | 81.4839 | 55.6907 | **97.1536** | 89.1977 | 93.8186 | 95.0618 | 96.8904 | 96.0877 |
| | Pattern uniformity | | 0.963 | 0.056 | 0.260 | 0.236 | 0.974 | 0.971 | 0.964 | 0.954 |
| CIFAR100 | Budget | \(l_{\infty}\) | 7/255 | 14/255 | 75/255 | 44/255 | **1/255** | 2/255 | 4/255 | 8/255 |
| | | \(l_{2}\) | 0.52 | 0.69 | 3.80 | 1.58 | **0.18** | 0.36 | 0.72 | 1.43 |
| | | LPIPS | 0.00840 | 0.00641 | 0.16677 | 0.03563 | **0.00009** | 0.00013 | 0.00134 | 0.00672 |
| | Accuracy | Released | 84.6156 | 99.520 | 99.7000 | 96.1297 | 99.5547 | 99.9297 | 99.9977 | **99.9992** |
| | | Cond. | 54.3406 | 54.4438 | 95.8640 | 90.5828 | 52.0078 | 64.3563 | 99.8000 | **99.9984** |
| | | Uncond. | 52.2786 | 55.5380 | 77.7616 | 77.7961 | 58.5230 | 54.4271 | **91.3021** | 87.2869 |
| | Pattern uniformity | | 0.822 | 0.107 | 0.161 | 0.180 | 0.854 | 0.855 | 0.836 | 0.816 |
| STL10 | Budget | \(l_{\infty}\) | 8/255 | 14/255 | 119/255 | 36/255 | **1/255** | 2/255 | 4/255 | 8/255 |
| | | \(l_{2}\) | 1.09 | 1.40 | 7.28 | 2.16 | **0.38** | 0.76 | 1.51 | 3.00 |
| | | LPIPS | 0.06947 | 0.02341 | 0.32995 | 0.09174 | **0.00026** | 0.00137 | 0.00817 | 0.03428 |
| | Accuracy | Released | 92.5895 | 99.5750 | 97.2769 | 94.2813 | 99.4969 | 99.9449 | 99.9762 | **99.9926** |
| | | Cond. | 96.0541 | 54.3945 | 96.5164 | 94.7236 | 95.48484 | 99.8164 | 99.8883 | **99.99828** |
| | | Uncond. | 89.2259 | 56.3038 | 91.3919 | 91.8919 | 82.5841 | 93.4693 | **96.1360** | 95.0586 |

In this subsection, we demonstrate that our DiffusionShield can provide much better protection than other methods in invisibility and bit accuracy, as shown by the experimental results in Table 1. We compare the results on two groups of images: (1) the originally released images with watermarks (**Released**) and (2) the generated images from GDMs trained on the watermarked data via class-conditional GDM or unconditional GDM (**Cond.** and **Uncond.**). Based on the results in Table 1, we can see:

**First**, DiffusionShield can protect the images with the highest bit accuracy and the lowest budget among all the methods. For example, on CIFAR10 and STL10, with all the budgets from 1/255 to 8/255, DiffusionShield can achieve almost 100% bit accuracy on released images and conditionally generated images, which is better than all the baseline methods. Even constrained by the smallest budget with an \(l_{\infty}\) norm of \(1/255\), DiffusionShield can still achieve a high successful reproduction rate. On CIFAR100, although DiffusionShield with 1/255 and 2/255 \(l_{\infty}\) budget cannot keep a high bit accuracy in generated images due to the small budget, DiffusionShield with an \(l_{\infty}\) budget of \(4/255\) achieves a higher bit accuracy in generated images with a much lower \(l_{\infty}\) difference and LPIPS than baseline methods. For baselines, FRQ cannot be reproduced by GDM, while HiDDeN and DFD require a much larger perturbation budget than DiffusionShield, which is also visualized in Figure 5. Despite having a larger budget and a naive blockwise watermark, the accuracy of IB is much worse than that of DiffusionShield with 1/255 budget on CIFAR10 and STL10. The reason for this is that without joint optimization, the decoder cannot perform well on released images and thus cannot guarantee its accuracy on generated images. This indicates the importance of joint optimization in producing optimal basis patches and decoder to increase the accuracy while keeping a low budget.
**Second**, we also show that enforcing pattern uniformity can promote the reproduction of watermarks in generated images. In Table 1, we can see that the bit accuracy of the conditionally generated images watermarked by DiffusionShield is as high as that of released images except those with 1/255 and 2/255 budget on CIFAR100, which is due to the limited budget. In addition to DiffusionShield, IB's accuracy in released data and conditionally generated data are also similar. This is because IB is a simplified version of our method without joint optimization and also has high pattern uniformity. In contrast, other methods without pattern uniformity all suffer from a drop of accuracy from released images to conditionally generated images, especially FRQ, which has pattern uniformity lower than 0.11 and an accuracy level on par with a random guess. This implies that the decoded information in watermarks with high pattern uniformity (e.g., IB and ours in CIFAR10 are higher than 0.952) does not change much from released images to generated images and the watermarks can be exactly and easily captured by GDM. It is worthwhile to note that the performance drop on CIFAR100 in 1/255 and 2/255 budgets is also partially due to the low watermark rate. In fact, both a small budget and a low watermark rate can hurt the reproduction of watermarks in generated images, which is discussed in Appendix E. Footnote 2: The perfect pattern uniformity is 1.00 if all the watermarks are exactly the same. But the pixel values of the watermarked image may exceed the range of [0, 255], so we need to clip it to [0, 255], which hurts uniformity. ### Flexibility and Efficacy in Multiple-user Case In this subsection, we demonstrate another advantage of DiffusionShield: being flexibly transferred to new users and maintaining good protection against GDMs. We assume that multiple copyright owners are using DiffusionShield to protect their images, and different copyright messages should be encoded into the images from different copyright owners. In Table 2, we use one class in the dataset as the first owner and the other classes as the new owners. The basis patches (with 4/255 \(l_{\infty}\) budget) and decoder are optimized on the first class and re-used to protect the new classes. Images within the same class have the same message embedded, while images from different classes have distinct messages embedded in them. This process of transferring from one class to the other classes does not take any additional calculation except reordering the basis patches according to different copyright messages, which is very efficient. We train class-conditional GDM on all of the protected data and get the average bit accuracy across classes. As shown in Table 2, on both CIFAR10 and CIFAR100, when we reorder the basis patches to protect the other 3 classes or 9 classes, the protection performance is almost the same as the one class case, with bit accuracy all close to 100%. In addition to its flexibility, this result also shows that \begin{table} \begin{tabular}{c c c} \hline \hline owners & CIFAR-10 & CIFAR-100 \\ \hline 1 & 100.0000 & 99.8000 \\ 4 & 99.9986 & 99.989 \\ 10 & 99.9993 & 99.9986 \\ \hline \hline \end{tabular} \end{table} Table 2: Average bit accuracy (%) across different numbers of copyright owners (on class-conditional GDM). Figure 5: Watermarked images by DiffusionShield and baseline approaches. 
From those examples, we can see that HiDDeN and DFD cause very obvious distortion of the original images, while DiffusionShield is almost invisible especially when the budget is 1/255 or 2/255. More examples can be found at Appendix D. our watermarks can protect each of the multiple users and can distinguish them clearly even when their data are mixed by the data offender. This is a crucial advantage since we cannot control how the offender might combine our released watermarked data with other datasets when training a GDM. ### Capacity for Message Length The capacity of embedding longer messages is important for watermarking methods since encoding more information can enhance protection by providing more conclusive evidence of infringement. In this subsection, we show the superiority of DiffusionShield over other methods in achieving high watermark capacity while maintaining a high bit accuracy and low budget. Here, we change the number of basis patches, \(B\), to control the capacity of DiffusionShield and change the hyperparameters of HiDDeN and DFD to control their capacity. Figure 6 shows the bit accuracy and \(l_{2}\) budgets of watermarks with different message lengths on the released protected images in CIFAR10. In Figure 5(a), we can see that HiDDeN consistently requires a large budget across varying message lengths, and its accuracy diminishes from 99% at 32 bits to 77% at 128 bits. Conversely, DiffusionShield maintains nearly 100% accuracy at 128 bits, even with a much smaller budget. Similarly, in Figure 5(b), although DFD has a smaller budget than HiDDeN, its accuracy drops from 95% at 128 bits to 72% at 256 bits. In contrast, DiffusionShield maintains 99% accuracy at 256 bits, with significantly lower \(l_{2}\) budget of 0.18. These observations indicate that DiffusionShield has significantly greater capacity compared to HiDDeN and DFD and can maintain good performance even with increased message lengths. ### Robustness of DiffusionShield Robustness of watermarks is important since there is a risk that the quality of the watermarks may be distorted by some disturbances, such as image corruption due to deliberate post-processing activities during the images' circulation, or the application of speeding-up sampling methods in the GDM. In this subsection, we demonstrate that DiffusionShield is robust to maintain its bit accuracy on generated images when the images are corrupted or the sampling procedure is fastened. **Robustness against image corruptions**. We consider Gaussian noise, low-pass filter, greyscale and JPEG compression to test the robustness of DiffusionShield against image corruptions. Different from the previous experiments, during the protection stage, we augment our method by incorporating corruptions into the joint optimization. Each corruption is employed after the basis patches are added to the images. Table 3 shows the bit accuracy of DiffusionShield (with an \(l_{\infty}\) budget of 8/255) on corrupted generated images. The results are compared with DFD, which also claimed robustness of their scheme. As shown by the results, DiffusionShield maintains around 99.8% accuracy under greyscale and low-pass filter, nearly matching the accuracy achieved without any corruption. In contrast, DFD performs nearly at the level of random guess under greyscale and only achieves an accuracy of 88.9383% under low-pass filter. 
Although DiffusionShield does experience some information loss under Gaussian noise and JPEG compression, with accuracies of 81.9340% and 94.4461% respectively, its robust bit accuracy still surpasses DFD by about 13% under Gaussian noise and 32% under JPEG compression. From these results, we can see that DiffusionShield is robust against different image corruptions.

\begin{table} \begin{tabular}{l c c} \hline \hline & DFD & Ours \\ \hline No corruption & 93.5703 & 99.9996 \\ Gaussian noise & 68.6332 & 81.9340 \\ Low-pass filter & 88.9383 & 99.8582 \\ Greyscale & 50.8180 & 99.8129 \\ JPEG compression & 62.5484 & 94.4461 \\ \hline \hline \end{tabular} \end{table} Table 3: Bit accuracy (%) under corruptions

Table 4: Bit accuracy (%) with speeding-up models

Figure 6: Bit acc. and \(l_{2}\) of different message lengths

**Robustness under speeding-up sampling models**. Speeding-up sampling is often employed by practical GDMs due to the time-consuming nature of the complete sampling process, which requires thousands of steps. However, the quality of the images generated via sped-up methods, such as the Denoising Diffusion Implicit Model (DDIM) Song et al. (2020), is typically lower than with normal sampling, which could destroy the watermarks on the generated images. In Table 4, we show the performance of DiffusionShield with DDIM to demonstrate its robustness against speeding-up sampling. Although DiffusionShield has low accuracy on CIFAR100 when the budget is 1/255 or 2/255 (the same situation as in Section 4.2), it can maintain high accuracy for all the other budgets and datasets. Even with a 1/255 \(l_{\infty}\) budget, the accuracy of DiffusionShield on CIFAR10 is still more than 99.7% for class-conditionally generated images and more than 94.6% for unconditionally generated images. This is because the easy-to-learn uniform patterns are learned by GDMs before other, more diverse semantic features such as shapes and textures. Thus, as long as DDIM can generate images with normal semantic features, our watermark can be reproduced in these images.

## 5 Conclusion and Limitations

In this paper, we introduce DiffusionShield, a blockwise watermark to protect data copyright against GDMs, motivated by our observation that pattern uniformity can effectively help the watermark be captured by GDMs. By enhancing the pattern uniformity of watermarks and leveraging a joint optimization method, DiffusionShield successfully secures copyright with better accuracy and a smaller budget. Experimental results demonstrate the superior performance of DiffusionShield. More discussion of its social impact can be found in Appendix F. However, DiffusionShield currently has limitations. It is primarily designed for GDMs trained from scratch, and its effectiveness may not extend to GDMs that are fine-tuned from pre-trained models, such as Stable Diffusion Rombach et al. (2022). In addition, the performance of DiffusionShield is also influenced by the watermark rate. Our future work will focus on addressing these limitations and enhancing its effectiveness and applicability.
2310.19693
The Effect of Structural Phase Changes on Fermi Level Shifts and Optoelectronic Properties of Lead-Free CsSnI3 Perovskites
The work carried out first-principles calculations within the framework of density functional theory to study the structural stability of the CsSnI3 compound and the influence of phase transitions on their electronic and optical properties. Using the GGA and SCAN functionals, the relaxed structures of the CsSnI3 phases were obtained and their geometric characteristics were assessed. Using the Phonopy code based on VASP, calculations of phonon and thermodynamic properties were performed, and the temperatures of phase transitions of CsSnI3 were determined. Electronic properties and Fermi level shifts as a result of phase transformations of CsSnI3 were assessed using the HSE06 functional and machine learning prediction. The values of the complex dielectric constant and the refractive index of all phases of the CsSnI3 were determined.
Dilshod D. Nematov, Amondulloi S. Burhonzoda, Mekhrdod S. Kurboniyon, Umar Zafari, Kholmirzo T. Kholmurodov, Tomoyuki Yamamoto, Farhod Shokir
2023-10-30T16:09:35Z
http://arxiv.org/abs/2310.19693v1
The Effect of Structural Phase Changes on Fermi Level Shifts and Optoelectronic Properties of Lead-Free CsSnI\({}_{3}\) Perovskites ###### Abstract The work carried out first-principles calculations within the framework of density functional theory to study the structural stability of the CsSnI\({}_{3}\) compound and the influence of phase transitions on their electronic and optical properties. Using the GGA and SCAN functionals, the relaxed structures of the CsSnI\({}_{3}\) phases were obtained and their geometric characteristics were assessed. Using the Phonopy code based on VASP, calculations of phonon and thermodynamic properties were performed, and the temperatures of phase transitions of CsSnI\({}_{3}\) were determined. The temperature dependences of the thermodynamic parameters \(\alpha\)-, \(\beta\)-, \(\gamma\)- and \(\delta\)-phases of CsSnI\({}_{3}\) were analyzed. The trends in free energy, entropy, enthalpy, heat of formation energy and heat capacity were justified in terms of the pattern of changes in the total energy of the four phases of CsSnI\({}_{3}\) from VASP calculations. It is shown that at 0 K the non-perovskite structure of the CsSnI\({}_{3}\) compound (\(\delta\)-CsSnI\({}_{3}\)) is the most stable (followed by \(\gamma\)-CsSnI\({}_{3}\)), and the tetragonal phase (\(\beta\)) is quite unstable, having the highest energy among the perovskite phases. It was revealed that at temperatures above 450 K the tetragonal phase becomes stable, and when the temperature drops, it transforms into the cubic phase (\(\alpha\)-CsSnI\({}_{3}\)). The phase transition between the \(\beta\) and \(\gamma\) phase of perovskite occurs in the range of 300-320 K, and at 320 K a black-yellow transformation of CsSnI\({}_{3}\) occurs in which the cubic phase (black perovskite) undergoes a phase transition to a non-perovskite conformation (yellow phase). The presence of temperature phase transitions between two orthorhombic phases of CsSnI\({}_{3}\) at 360 K was discovered, although direct transitions of the \(\alpha\)\(\leftarrow\)\(\gamma\) and \(\gamma\)\(\leftarrow\)\(\delta\) types have not yet been reported in any experiment, except for \(\gamma\)\(\rightarrow\)\(\delta\) transitions under the influence of moisture. Based on well-relaxed structures (from the SCAN calculations), the band gap widths for four CsSnI\({}_{3}\) phases were calculated and compared with experimental measurements. Electronic properties and Fermi level shifts as a result of phase transformations of CsSnI\({}_{3}\) were assessed using the HSE06 functional and machine learning prediction. The values of the complex dielectric constant and the refractive index of all phases of the CsSnI\({}_{3}\) were determined. Keywords:lead-free perovskites, instability, phase transitions, thermodynamic characteristics, Fermi level shift, electronic and optical properties, density function theory, phonopy calculations, photovoltaic applications. ## 1 Introduction Over the past few years, metal halide perovskites have intensively attracted the attention of a wide range of researchers and industrial enterprises around the world. They are used in various commercial and technological applications such as solar cells, catalysts, light emitting diodes (LEDs), lasers, X-ray detectors, photodetectors, and field-effect transistors [1-9]. Halide perovskites have also become widely used and recommended in the production of LEDs and luminous bodies with light pumping in the form of luminescent materials [9, 10]. 
The increasing demand in this area is justified by the fact that perovskite-based semiconductor functional materials have exceptional physical and chemical properties, such as tunable energy band gap, low reflectance, fairly broad absorption spectrum and high absorption coefficient, good photoconductivity and high charge carrier mobility, low exciton binding energy and long diffusion lifetime, optimal electron-hole diffusion lengths and ferroelectricity [6-8, 11-3]. In recent years, they have been widely used as the main raw materials in the absorption layers of solar converters and are actively participating in the program of a universal merciless fight against environmental pollution, reducing the share of carbon dioxide emissions. Along with other materials, perovskites are also actively involved in the program to reduce the rate of use of the earth's depleting fossil fuels in the long term, since the massive burning of fossil fuels in recent years has led to the release of huge amounts of greenhouse gases such as CO\({}_{2}\) and CH\({}_{4}\) into the atmosphere. To reduce greenhouse gas emissions and ensure energy independence, authorities in major powers are expressing growing interest in developing new alternatives to renewable (clean or low-carbon) energy sources to avoid the catastrophic consequences of global warming in the near future. Among the several types of clean energy sources available around the world, solar energy is the most promising and promising. According to NREL, in recent years, the efficiency of perovskite solar cells (PCE) has increased quite significantly from 3.8% to more than 26.1% [14]. For layered lead-based perovskites (MAPbI\({}_{3}\)) solar cells, the highest reported PCE is 25.2% [15]. However, these materials exhibit instability under environmental conditions caused by humidity, humidity, temperature and ultraviolet (UV) light [16]. In addition, Pb-containing perovskites (MA,Cs)PbX\({}_{3}\) (X = I, Br, Cl, F) have relatively low dielectric constants, due to which the rate of charge recombination increases and deteriorates the performance characteristics of solar cells, which is the main obstacle to applications of solar panels and cellular devices [17]. Another problem is the presence of lead (Pb), which is toxic and potentially hazardous to the environment [18]. Because of this, it is extremely important to move on to the development and improvement of the properties of lead-free perovskites as an alternative to perovskites containing toxic lead, which supports the EU regulation on the ban and restrictions on the use of compounds containing lead (Pb) in all electronic and electrical devices due to its toxicity impact [18], which corresponds to the goals of the UN sustainable development strategy, in particular SDGs 7 and 13 [19, 20]. In recent years, lead-containing perovskites have been replaced by alternative and lead-free new metal halide organic-inorganic materials, which are being intensively worked on by researchers in the field of materials science. Inorganic perovskites produced by replacing Pb with Ge and Sn have attracted attention due to their better conductivity and absorption than lead-based perovskites. However, it is argued that other problems exist for some of these compounds. For example, it became known that CsGeI\({}_{3}\) has brittle behavior, and CsSnBr\({}_{3}\) has plasticity [21, 22]. 
There are a number of other candidates, including yellow-phase compounds based on selenium and tin trihalides (\(\delta\)-CsSnI\({}_{3}\), \(\delta\)-CsSnBr\({}_{3}\), \(\delta\)-CsSnCl\({}_{3}\) and \(\delta\)-CsSnF\({}_{3}\)), devoid of the above disadvantages, but characterized by large band gaps, due to which the absorption capacity of the material. In principle, the band gap can be adjusted by changing the composition and doping with foreign ions [23], influencing hydrostatic pressure [24-26] or temperature-induced phase transitions [27,28]. Among these compounds, the most promising material is CsSnI\({}_{3}\). Black low-bandgap modifications of CsSnI\({}_{3}\) are well suited for photovoltaic devices, but the problem of low stability hinders further progress in this direction. To improve the properties of CsSnI\({}_{3}\) and successfully advance in this direction, it is necessary to develop a strategy for the appropriate introduction of external influences, including the influence of temperature and doping, so that along with increasing stability, the band gap can also be controlled and optimized. Regarding the dependence of the perovskite band gap on its composition, in recent years it has been proven that the stability and band gap in any of the metal halide perovskites of the general formula ABX\({}_{3}\) strongly depend on the interaction of "B" and "X" of the group X = I, Br, Cl, F) and increase with increasing electronegativity of the "X" cation, which in turn leads to a decrease in the length of the B-X bond [29]. On the other hand, it has been shown that replacing the "A" position with another element does not have a significant effect on the band gap, but through the lattice parameter it mediates the consequences and patterns of the B-X interaction, which sometimes accompanies a doping-induced phase transition [30]. In addition, for a detailed study of doping-induced phase transitions and their influence on the change in the band gap of CsSnI\({}_{3}\), it is first necessary to understand the nature of the influence of temperature-dependent phase transitions on the electronic structure and behavior of the Fermi level. In general, the influence of the thermodynamic parameters of perovskites on their electronic and optical properties has become a promising topic of research in recent decades, since many technological applications of these materials and the stable operation characteristics of devices based on them are directly related to the thermal and thermodynamic properties of the raw materials from which these materials are created. devices. Along with experimental measurements, the properties of perovskites have recently been studied by various theoretical methods, as a result of which the efficiency of solar cells based on them is constantly increasing. One such powerful theoretical approach is Density Functional Theory (DFT), which has become a major tool for the theoretical study of solid materials over the past 10 years, as this powerful approach provides a highly accurate reformulation of quantum mechanical calculations of solids and account for the behavior of electrons in all atomic-molecular environments. This is due to the fact that the Kohn-Sham equations are effectively solved using modern computing clusters [31]. On the other hand, these equations are based on one approximation, namely the exchange-correlation energy, which is responsible for the accuracy of quantum calculations. 
In this paper, aspects of structural stability, electronic and optical properties of lead-free perovskite based on CsSnI3 are investigated using DFT calculations. With the help of exchange-correlation functionals SCAN and HSE06 the issues of phase transitions in CsSnI\({}_{3}\) and their influence on optoelectronic properties of CsSnI\({}_{3}\) are studied, for detailed understanding of the nature of their electronic structure and expedient choice of doping element stabilizing CsSnI\({}_{3}\) under environmental conditions as a promising candidate for solar cells with increased efficiency. **2. Computational Details** The structural, electronic and optical properties of the \(\alpha\)-, \(\beta\)-, \(\gamma\)- and \(\delta\)-phases of CsSnI\({}_{3}\), were investigated based on density functional theory (DFT). Calculations were carried out in the VASP plane wave package [32]. The crystal structures of \(\alpha\)-CsSnI\({}_{3}\) (cubic), \(\beta\)-CsSnI\({}_{3}\) (tetrogonal), \(\gamma\)-CsSnI\({}_{3}\) (orthorombic) and \(\delta\)-CsSnI\({}_{3}\) (orthorhombic-non perovskite) were first fully optimized taking into account the relaxation of lattice parameters and atomic positions. All four modifications of CsSnI\({}_{3}\) were relaxed using the GGA (PBE) [33] and strictly constrained normalized potential (SCAN) functionals [34]. The electronic states Cs[5s\({}^{2}\)5p\({}^{6}\)6s\({}^{1}\)], I[5s\({}^{2}\)5p\({}^{5}\)] and Sn[5s\({}^{2}\)5p\({}^{2}\)] were considered as valence electrons. After performing a series of convergence tests, the kinetic energy cutoff value was set to 450 eV for all four CsSnI\({}_{3}\) phases, and 8\(\times\)8\(\times\)8, 6\(\times\)6\(\times\)8, 5\(\times\)5\(\times\)4, and 5\(\times\)10\(\times\)3 k-points according to the Monkhorst-Pack scheme were chosen for geometric optimization of the \(\alpha\)-, \(\beta\)-, \(\gamma\)- and \(\delta\)- phases of CsSnI\({}_{3}\). However, the cutoff energy value for calculations of electronic, thermodynamic and optical properties was increased to 800 eV. Calculations of phonon dispersion and thermodynamic properties were performed using the Phonopy [35] code at smaller k-point values, since such calculations are computationally expensive, especially for larger systems with low symmetry. The forces were estimated on supercells 2\(\times\)2\(\times\)2 (\(\alpha\)), 2\(\times\)1\(\times\)2 (\(\beta\)), 1\(\times\)1\(\times\)2 (\(\gamma\)), 1\(\times\)2\(\times\)1 (\(\delta\)), for which VASP was used as a calculator. Phonopy calculations are performed on a 40-atom supercell using reduced k-point grids (6\(\times\)6\(\times\)6, 6\(\times\)5\(\times\)5, 3\(\times\)2\(\times\)3, and 2\(\times\)4\(\times\)2 for \(\alpha\), \(\beta\), \(\gamma\), and \(\delta\)- phases of the CsSnI\({}_{3}\) compound, respectively), and the phonon frequencies were estimated selected on an interpolated grid of 32\(\times\)32\(\times\)32 q-points (for the \(\alpha\) and \(\beta\) phases) and 24\(\times\)24\(\times\)24 for the two orthorhombic (\(\gamma\) and \(\delta\)) phases. The temperatures of phase transitions in the CsSnI\({}_{3}\) system were determined by subtracting the calculated free energy of the CsSnI\({}_{3}\) phases (DF) transforming into each other, for which DF was taken into account as the sum of Helmhotz free energies from Phonopy calculations with the minimum energy found from VASP calculations. 
The band gap values of CsSnI\({}_{3}\) were calculated and compared using the exchange-correlation functionals GGA, SCAN and HSE06 [36], however, for a detailed analysis of the electronic structures and optical spectra of CsSnI\({}_{3}\), the hybrid functional HSE06 was used, since this promising functional has proven itself well in recent years and has a leading position among other functionalities for characterizing the electronic properties of materials [37]. The Fermi level shift was estimated by determining the difference in the energy of the most accurate electron in the valence band for each phase, at which the maximum of the valence band of \(\gamma\)-CsSnI\({}_{3}\) was taken as the reference point. ## 3 Results and discussions The relaxed geometric characteristics and crystal lattice constants of the \(\alpha\)-, \(\beta\)-, \(\gamma\)- and \(\delta\)-phases of CsSnI\({}_{3}\) (with GGA and SCAN calculations) are given in Table 1 and compared with the results of experimental measurements. According to Table 1, the SCAN functionality describes the geometry much better than the standard GGA-PBE. The results within this potential are in good agreement with experiment, which speaks to the effectiveness of the use of SCAN for the relaxation of such solid-state systems. An example of this can be observed from a comparison of the calculated X-ray diffraction patterns \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{3}{*}{**Phase**} & \multirow{3}{*}{\begin{tabular}{c} \(\alpha\) \\ (**Cubic**) \\ a=b=c (Å) \\ \end{tabular} } & \(\beta\) & \(\gamma\) & \(\delta\) \\ \cline{2-6} & & **(Cubic)** & **(Tetragonal)** & **(Orthorhombic)** & **(Orth. non-perovskite)** \\ & & **a=b**c (Å) & **a=b**c (Å) & **a, b, c (Å)** & **a, b, c (Å)** \\ \hline \multirow{3}{*}{\begin{tabular}{c} **Lattice** \\ **constants** \\ \end{tabular} } & GGA & 6.261 & 8.789, 6.318 & 8.957, 8.667, 12.503 & 10.621, 4.790, 18.893 \\ \cline{2-6} & SCAN & 6.200 & 8.680, 6.239 & 8.847, 8.533, 12.372 & 10.543, 4.750, 17.882 \\ \cline{2-6} & EXP.[38] & \begin{tabular}{c} 6.206 \\ [38] \\ \end{tabular} & 8.712, 6.191 & 8.688, 8.643, 12.378 & 10.349, 4.763, 17.684 \\ \hline \multicolumn{2}{|c|}{**Space group**} & Pm3m & P4/mbm & Pnam & Pnma \\ \hline \begin{tabular}{c} **Sn-I** \\ **Bond** \\ **(SCAN)** \\ \end{tabular} & Sn-I1 & 3.089 & 3.133 & 3.122 & 3.224 \\ \cline{2-6} & Sn-I2 & & 3.123 & 3.153 & 3.219 \\ \cline{2-6} & Sn-I3 & & & & 2.971 \\ \hline \multicolumn{2}{|c|}{\begin{tabular}{c} **Bond Angles** \\ \end{tabular} } & \multicolumn{3}{c|}{} \\ \hline \begin{tabular}{c} Sn-I1-Sn \\ \end{tabular} & \multirow{2}{*}{180\({}^{o}\)} & \multirow{2}{*}{-} & \(172.3^{o}\) & \multirow{2}{*}{-} \\ \cline{2-6} & & \(167.99^{o}\) & & \(158.1^{o}\) & - \\ \hline \end{tabular} \end{table} Table 1: Relaxed lattice parameters of \(\alpha\)-, \(\beta\)-, \(\gamma\)- and \(\delta\)- modifications of CsSnI\({}_{3}\); The calculated results are compared with the experimental results. we obtained for \(\gamma\)-CsSnI\({}_{3}\) with the results of Yuanyuan Zhou and others [39], from which it can be seen that the results we obtained are similar to the results of experimental measurements (Fig. 1). According to calculations, interatomic distances (especially Sn-I) change significantly depending on the phase formation of CsSnI\({}_{3}\). Along with this, the Sn-I1-Sn and Sn-I2-Sn bond angles also change. 
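As a quick numerical check of the statement that SCAN reproduces the geometry better than GGA-PBE, the short script below (ours, using only the lattice constants listed in Table 1) computes the relative deviation of each functional from experiment.

```python
import numpy as np

# Lattice constants (in Angstrom) from Table 1, ordered as:
# alpha (a); beta (a, c); gamma (a, b, c); delta (a, b, c).
exp  = np.array([6.206, 8.712, 6.191, 8.688, 8.643, 12.378, 10.349, 4.763, 17.684])
gga  = np.array([6.261, 8.789, 6.318, 8.957, 8.667, 12.503, 10.621, 4.790, 18.893])
scan = np.array([6.200, 8.680, 6.239, 8.847, 8.533, 12.372, 10.543, 4.750, 17.882])

for name, calc in [("GGA-PBE", gga), ("SCAN", scan)]:
    dev = 100.0 * np.abs(calc - exp) / exp
    print(f"{name}: mean deviation {dev.mean():.2f}%, max deviation {dev.max():.2f}%")
```

With these values, SCAN deviates from experiment by roughly 0.9% on average, against roughly 2% for GGA-PBE, which supports the statement above.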
Table 2 compares the values of the total energies of the \(\alpha\)-, \(\beta\)-, \(\gamma\)- and \(\delta\)-phases of CsSnI\({}_{3}\), calculated using the GGA functional, from which it is clear that the most stable conformation for CsSnI\({}_{3}\) is the \(\delta\)-modification of this compound. However, \(\gamma\)-CsSnI\({}_{3}\) is the most stable among the iodides with a perovskite structure. According to the results given in Table 2, the CsSnI\({}_{3}\) compound stabilizes as it transitions from the cubic phase to the orthorhombic phase or to the stable non-perovskite structure. Similarly, the heats of formation (\(\Delta H_{f}\)) for the \(\alpha\)-, \(\beta\)-, \(\gamma\)- and \(\delta\)-phases of CsSnI\({}_{3}\) relative to the constituent elements in their standard states were also calculated in order to evaluate the relative enthalpy stability of CsSnI\({}_{3}\) (Table 3). To do this, we carried out additional calculations for the elemental cells of I (Cmca), Sn (I4\({}_{1}\)/amd) and Cs (Im3m). In our case, we calculated the value of \(\Delta H_{f}\) with respect to the constituent elements using the following equation:

\[\Delta H_{f}=E_{tot}^{CsSnI_{3}}-(E_{tot}^{Cs}+E_{tot}^{Sn}+3E_{tot}^{I}) \tag{1}\]

\begin{table} \begin{tabular}{|c|c|c|} \hline **System** & **Energy/atom** & \(\Delta\)**E** \\ \hline \(\alpha\)-CsSnI\({}_{3}\) & -2.8198 & 0.0108 \\ \hline \(\beta\)-CsSnI\({}_{3}\) & -2.8171 & 0.0235 \\ \hline \(\gamma\)-CsSnI\({}_{3}\) & -2.8201 & 0.0105 \\ \hline \(\delta\)-CsSnI\({}_{3}\) & -2.8306 & 0 \\ \hline \end{tabular} \end{table} Table 2: GGA-calculated total energies of CsSnI\({}_{3}\) phases.

Figure 1: Comparison of theoretical X-ray patterns of \(\gamma\)-CsSnI\({}_{3}\) with X-ray patterns of the orthorhombic phase of CsSnI\({}_{3}\) perovskite [39] obtained by the Bridgman method

where \(E_{tot}^{CsSnI_{3}}\), \(E_{tot}^{Cs}\), \(E_{tot}^{Sn}\) and \(E_{tot}^{I}\) are the total energy of the CsSnI\({}_{3}\) phase and the total energies of the pure components Cs, Sn and I in their respective states. According to the results obtained, as we move from the cubic to the orthorhombic phase, the heat of formation decreases, which indicates the relative stability of the orthorhombic phase of the perovskite under environmental conditions. However, the non-perovskite structure of CsSnI\({}_{3}\) still shows the highest stability among all phases of this compound. The more negative values of the heat of formation obtained for the \(\gamma\)- and \(\delta\)-phases of CsSnI\({}_{3}\) indicate a greater amount of energy released upon the formation of these phases, which ultimately makes them more stable. To confirm the reliability of the results obtained in Tables 1 and 2, the temperature-dependent values of the Helmholtz (or Gibbs) free energy, as well as the relative entropy of the systems under study as a function of temperature, can be obtained after calculating the phonon frequencies and the lattice energy of the equilibrium structure [40]. Figure 2 shows the temperature dependence of the Helmholtz free energy (F) for the \(\alpha\)-, \(\beta\)- and \(\gamma\)-phases relative to the \(\delta\)-phase of CsSnI\({}_{3}\), which is the structure with the lowest energy at 0 K. Figure 3 (a, b) shows the temperature dependence curves of the entropy (\(\Delta\)S) and heat capacity (C\({}_{\nu}\)) for the four phases of CsSnI\({}_{3}\).
Figure 4 compares the enthalpy curves (\(\Delta\)H) of the systems under study, obtained based on the expression \[\Delta H=F+\Delta S\cdot T,\] where T is the absolute temperature.

Figure 2: Temperature-dependent difference in Helmholtz free energy for the \(\alpha\)-, \(\beta\)-, \(\gamma\)-phase of CsSnI\({}_{3}\) relative to its \(\delta\)-phase

From the results, it can be seen that the \(\beta\) phase has the highest enthalpy and the \(\delta\) phase is naturally the lowest over the whole temperature range. According to the results of Figure 2, at 0 K the \(\beta\) phase has the highest energy among the perovskite phases, and the energy value for the \(\gamma\) phase is the lowest. In this case, the cubic phase lies between these two. However, according to the graph, it is the tetragonal phase that becomes the most stable at high temperatures. The orthorhombic perovskite phase remains energetically close to the \(\alpha\) phase up to high temperatures. The calculations show that there is energy competition between the non-perovskite phase of CsSnI\({}_{3}\) and the \(\beta\)- and \(\gamma\)-phases of CsSnI\({}_{3}\) in the region of 320-360 K. According to the results in Figure 3 (a, b), the free energy of the high-temperature phases over almost the whole temperature range is higher than the free energy of \(\delta\)-CsSnI\({}_{3}\), which contradicts a simple direct correspondence between free energy and the stability of materials. The general heat capacity picture also shows a similar trend. Next, by subtracting the calculated free energies of the phases ([\(\alpha\)-\(\beta\)], [\(\beta\)-\(\gamma\)] and [\(\alpha\)-\(\delta\)]), the phase transition temperatures for CsSnI\({}_{3}\) were found. In this case, the free energy was taken as the sum of the Helmholtz free energy from the Phonopy calculations and the minimum energy found from the VASP calculations.

Figure 4: Temperature-dependent enthalpy variation of the \(\alpha\)-, \(\beta\)-, \(\gamma\)- and \(\delta\)-phases of CsSnI\({}_{3}\)

Figure 3: Variation of the entropy (a) and heat capacity (b) depending on temperature for the \(\alpha\)-, \(\beta\)-, \(\gamma\)- and \(\delta\)-phases of CsSnI\({}_{3}\)

Since the CsSnI\({}_{3}\) phases undergo complex phase transitions at different temperatures, the phase transition diagrams were drawn in separate coordinate systems and also combined into one figure (Figure 5) in order to compare them over the same range of energy differences. According to the results shown in Figure 5, CsSnI\({}_{3}\) crystals are characterized by three cases of phase transitions in certain temperature ranges. The critical temperature points of the \(\alpha\leftrightarrow\beta\), \(\beta\leftrightarrow\gamma\) and \(\alpha\leftrightarrow\delta\) phase transitions indicate that the ranges of stable existence of these phases differ significantly from each other (Figure 5a). At temperatures above 450 K, the tetragonal phase becomes stable, and below this temperature it transforms into the cubic conformation (Figure 5b). The phase transition between the tetragonal and orthorhombic perovskite occurs in the range of 300-320 K (Figure 5c), and at 320 K a transformation occurs between the CsSnI\({}_{3}\) perovskite structure and its non-perovskite analogue, the so-called black-yellow transformation (Figure 5e).
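To illustrate the procedure just described, the following sketch (ours; the energies and file names are placeholders, not the paper's data) locates a transition temperature as the point where the free-energy difference \(\Delta F(T)=F_{1}(T)-F_{2}(T)\) changes sign, with \(F(T)\) taken as the static VASP total energy plus the phonon Helmholtz free energy from Phonopy.

```python
import numpy as np

def total_free_energy(e_vasp, thermal_file):
    """F(T) = E_VASP + F_phonon(T).

    `thermal_file` is assumed to be a two-column text file (T in K, F_phonon in eV
    per formula unit), e.g. exported from Phonopy's thermal_properties output.
    """
    data = np.loadtxt(thermal_file)
    T, f_ph = data[:, 0], data[:, 1]
    return T, e_vasp + f_ph

def transition_temperature(e1, file1, e2, file2):
    """Temperature at which dF = F1 - F2 changes sign (linear interpolation)."""
    T, f1 = total_free_energy(e1, file1)
    _, f2 = total_free_energy(e2, file2)
    dF = f1 - f2
    crossings = np.where(np.diff(np.sign(dF)) != 0)[0]
    if crossings.size == 0:
        return None                      # no crossing in the sampled temperature range
    i = crossings[0]
    return T[i] - dF[i] * (T[i + 1] - T[i]) / (dF[i + 1] - dF[i])

# Hypothetical usage with the per-atom GGA energies of Table 2 (5 atoms per formula unit):
# Tc = transition_temperature(-2.8171 * 5, "beta_thermal.dat", -2.8201 * 5, "gamma_thermal.dat")
```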
The obtained results are similar to the experimental measurements by Koji Yamada and others [41] with the exception of the temperature of phase transitions between the \(\alpha\) and \(\delta\) phases of the perovskite. Calculations also showed the presence of temperature phase transitions between two orthorhombic phases of CsSnI\({}_{3}\) at 360 K, although direct transitions of the \(\alpha\leftrightarrow\gamma\) and \(\gamma\leftrightarrow\delta\) types have not yet been detected in any experiment, except for \(\gamma\rightarrow\delta\) transitions under the influence of moisture [42]. First-principles calculations of phonon dispersion in Figure 6 (a-d) show the absence of negative frequencies for the non-perovskite phase of CsSnI\({}_{3}\), which indicates its relative stability. This is followed by the orthorhombic perovskite phase. Calculations of the density of phonon states also confirm the absence of states in the negative energy region for the non-perovskite phase of CsSnI\({}_{3}\) (Figure 7a). Also noticeable is the moderate contribution of the phonon state for the tetragonal phase (Figure 7b), in contrast to the more noticeable state for the cubic (Figure 7a) and tetragonal phase (Figure 7c). Next, we studied the electronic and optical properties of the systems under study in order to assess the relationship between the thermodynamic and optoelectronic properties of CsSnI\({}_{3}\), as well as demonstrate the influence of stability and phase transitions on the electronic properties, band state, Fermi level shift, absorption and reflection abilities of these compounds, since the Figure 6: Phonon dispersions for: \(\alpha\)-CsSnI\({}_{3}\) (a), \(\beta\)-CsSnI\({}_{3}\) (b), \(\delta\)-CsSnI\({}_{3}\) (c) and \(\gamma\)-CsSnI\({}_{3}\) (d) Figure 7: Phonon state densities for: \(\alpha\)-CsSnI\({}_{3}\) (a), \(\beta\)-CsSnI\({}_{3}\) (b), \(\gamma\)-CsSnI\({}_{3}\) (c) and \(\delta\)-CsSnI\({}_{3}\) (d) assessment The influence of temperature factors on the overall characteristics of the system is critical from the point of view of the use of the material in any device [43, 44]. Thus, using the well-optimized structures of the \(\alpha\)-, \(\beta\)-, \(\gamma\)- and \(\delta\)-phase of CsSnI\({}_{3}\), we performed spin-polarized calculations of the band gap of these compounds using the GGA, SCAN and hybrid functional HSE06 (Table 4). The calculated values of the band gap obtained by three different functionals are compared with each other, as well as with the results of machine learning prediction (linear regression method) and data from experiments. According to Table 4, different potentials estimate the band gap differently. In particular, SCAN and GGA (PBE) showed a fairly small band gap compared to HSE06, which is an expensive approach giving results comparable to experiments. However, underestimating the band gap is only a typical error in calculations using these functionals [46]. Moreover, despite the lengthy costs in HSE06 calculations, its continued use is advisable, since the solid-state computing community pays great attention to the problem of correctly predicting the fundamental band gap, since the width of the band gap is one of the main fundamental electrical characteristics of materials. Figure 8 (a-b) shows the dependence of the bandgap width of CsSnI\({}_{3}\) on the phase of existence. According to the results obtained, phase transformations significantly affect the electronic properties of CsSnI\({}_{3}\). 
As can be seen from Figure 8a, the band gap width decreases as a consequence of the \(\alpha\)\(\rightarrow\)\(\beta\) transition, and then begins to increase during the next phase transition in the \(\beta\)\(\rightarrow\)\(\gamma\) region. Similar trends in changes in the width of the forbidden band, the reason for which is a change in the volume of CsSnI\({}_{3}\) under the influence of their phase transformations, can be \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{**System**} & \multicolumn{4}{c|}{**This work**} & \multirow{2}{*}{**Experiment**} \\ \cline{2-2} \cline{4-6} & **GGA** & **SCAN** & **HSE06** & **ML Prediction** & \\ \hline \(\alpha\)-CsSnI\({}_{3}\) & 1.116 & 1.122 & 1.326 & 1.232 & - \\ \hline \(\beta\)-CsSnI\({}_{3}\) & 0.942 & 1.108 & 1.230 & 0.876 & - \\ \hline \(\gamma\)-CsSnI\({}_{3}\) & 1.161 & 1.146 & 1.435 & 1.171 & 1.31 [45] \\ \hline \(\delta\)-CsSnI\({}_{3}\) & 2.178 & 2.488 & 2.990 & 2.753 & - \\ \hline \end{tabular} \end{table} Table 4: Calculated and experimental band gap of \(\alpha\)-, \(\beta\)-, \(\gamma\)- and \(\delta\)- phases CsSnI\({}_{3}\) in eV. Figure 8: Curves of the band gap (a) and heat of formation (b) depending on the phase formation of CsSnI\({}_{3}\) observed from the calculations of GGA, SCAN, HSE06, as well as machine learning predictions for determining the width of the band gap (Figure 8a). The patterns of changes in the band gap are in good agreement with the patterns of changes in the total energy of the CsSnI\({}_{3}\) phase and their heat of formation, given in Figure 8b and Tables 2-3. This clearly indicates a special relationship between the band gap width and the thermal properties of the material and their special influence on the electronic structure [47-49]. For a more detailed understanding of these phenomena, we assessed the Fermi level shift as a consequence of the phase transitions of CsSnI\({}_{3}\) (Figure 9). These shifts are assessed by determining the change in the energy position of the highest electrons in the valence band for each phase. In this case, the maximum of the valence band of \(\gamma\)-CsSnI\({}_{3}\) was taken as the starting point for comparative analysis. According to Figure 9, when yellow CsSnI\({}_{3}\) is heated and transformed into the cubic perovskite phase, the Fermi level falls towards the low-energy energy range (towards the valence band) and the band gap decreases from 2.99 to 1.326 eV. During the \(\alpha\)\(\leftrightarrow\)\(\beta\) transition, the band gap once again decreases to 1.23 eV and the Fermi level mixes by 1.65 eV towards the conduction band (CB), and \(\beta\)\(\leftrightarrow\)\(\gamma\) transitions increase the band gap to 1.435 eV, and once again times shifts the Fermi level by 0.41 eV in the direction of the conduction band. Figure 10 (a) compares the total densities of electronic states of the \(\alpha\)-, \(\beta\)-, \(\gamma\)- and \(\delta\)- phases of CsSnI\({}_{3}\), from which it can be seen that \(\alpha\)-CsSnI\({}_{3}\) is characterized by moderate densities of state in the vicinity of the valence band. However, as CsSnI\({}_{3}\) moves to lower temperature phases, the density of state increases at the threshold where the valence band (VB) and conduction bands (CB) meet. The PDOS diagram shown in Figure 10b shows that the formation of the conduction band of CsSnI\({}_{3}\) is mainly contributed by the s- and p-state of electrons. Also noticeable is a moderate clade of d-orbitals in all phases. 
It can be seen that the contribution of p-electrons increases sharply with the transition from the perovskite conformation (\(\alpha\)-phase) to its non-perovskite analogue (\(\delta\)). Figure 11 (a-e) summarizes the results of the calculations of the optical properties of the \(\alpha\)-, \(\beta\)-, \(\gamma\)- and \(\delta\)-phases of CsSnI\({}_{3}\), from which it can be seen that the behavior of the spectra of optical absorption (\(\alpha\)), photoconductivity (\(\sigma\)) and the energy loss function (ELF) follows the trend of the band gap changes caused by the volume variation accompanying the CsSnI\({}_{3}\) phase transitions.

Figure 9: Schematic diagram of the band gap and Fermi level shift during phase transitions of CsSnI\({}_{3}\)

From the spectra shown in Figure 11 (a-c), it is clear that as we move from the high-temperature phase to the lower-temperature phases, absorption and photoconductivity deteriorate in the infrared (IR) range of wavelengths, and upon moving to the non-perovskite conformation, the CsSnI\({}_{3}\) compound can absorb only short-wavelength (high-energy) light. The high absorption and optical conductivity indicate that all CsSnI\({}_{3}\) crystals with a perovskite structure have spectral characteristics that are well suited for photovoltaic applications, whereas, upon moving to the more stable phase, CsSnI\({}_{3}\) is activated only in the ultraviolet range of electromagnetic waves. However, by taking the necessary measures to stabilize CsSnI\({}_{3}\) without significant changes in the band gap and loss of absorption capacity, it is possible to obtain unique materials for the main absorbing layers of new-generation solar panels.

Figure 11: Spectra of the absorption coefficient (a), optical conductivity (b) and energy loss function (c) for the \(\alpha\)-, \(\beta\)-, \(\gamma\)- and \(\delta\)-phases of CsSnI\({}_{3}\)

The calculated optical constants, illustrated by the static permittivity (\(\varepsilon\)) and the refractive index (n) in Figure 11, behave similarly to the above results and confirm the relationship between the band gap and the phase transitions of CsSnI\({}_{3}\). The obtained results complement the body of scientific work on the application of perovskite compounds for the development of green energy, and can be used by experimentalists in further studies of crystals and thin films based on CsSnI\({}_{3}\).

## 4 Conclusion

Using calculations within the framework of DFT, the issues of the structural stability of the CsSnI\({}_{3}\) compound and the influence of phase transitions on its electronic and optical properties were considered. Relaxed structures for the four phases of CsSnI\({}_{3}\) were obtained and their structural stability was assessed by comparing the total energy, entropy and enthalpy of their formation. Trends in the changes of free energy, entropy, enthalpy, heat of formation and heat capacity were justified in terms of the pattern of changes in the total energy of the four phases of CsSnI\({}_{3}\) from the VASP calculations. The stable phases of CsSnI\({}_{3}\) at 0 K are shown and compared. The critical temperatures of the phase transitions are found, including the temperature of the black-yellow transformation of CsSnI\({}_{3}\).
The presence of temperature-induced phase transitions between the two orthorhombic phases of CsSnI\({}_{3}\) at 360 K was discovered, despite the fact that direct transitions of the \(\alpha\leftrightarrow\gamma\) and \(\gamma\leftrightarrow\delta\) types have not yet been reported in any experiment, except for \(\gamma\rightarrow\delta\) transitions under the influence of moisture. Based on the SCAN-optimized structures, the band gap widths of the four CsSnI\({}_{3}\) phases were calculated and compared with experimental measurements. Shifts in the Fermi level as a result of the phase transformations of CsSnI\({}_{3}\) were estimated. This study helps to understand in depth the thermodynamic properties of CsSnI\({}_{3}\) and their shortcomings, so that, in the future, measures can be taken and doping elements selected that stabilize these perovskite materials without a strong negative impact on their optoelectronic properties, including the band gap and the capacity for good photoabsorption. The obtained results can be useful to experimentalists searching for and creating materials with desired properties, taking into account the tuning of their band gap and thermodynamic properties.

## Funding

The work was performed at the S.U. Umarov Physical-Technical Institute of the National Academy of Sciences of Tajikistan with the support of the International Science and Technology Center (ISTC), project TJ-2726.
2306.15997
On locally finite varieties of Heyting algebras
For every $n \in \mathbb{N}$, we construct a variety of Heyting algebras, whose $n$-generated free algebra is finite but whose $(n+1)$-generated free algebra is infinite.
M. Martins, T. Moraschini
2023-06-28T08:16:11Z
http://arxiv.org/abs/2306.15997v1
# On locally finite varieties of Heyting algebras

###### Abstract.

For every \(n\in\mathbb{N}\), we construct a variety of Heyting algebras, whose \(n\)-generated free algebra is finite but whose \((n+1)\)-generated free algebra is infinite.

## 1. Introduction

A _Heyting algebra_ is a structure \(\langle A;\wedge,\vee,\rightarrow,0,1\rangle\) where \(\langle A;\wedge,\vee,0,1\rangle\) is a bounded distributive lattice and \(\rightarrow\) a binary operation satisfying the _residuation law_, that is, the demand that for every \(a,b,c\in A\), \[a\wedge b\leqslant c\Longleftrightarrow a\leqslant b\to c.\] The class of Heyting algebras forms a _variety_ (i.e., an equational class) that we denote by \(\operatorname{\mathsf{HA}}\)[1, 8, 11, 13]. From a logical standpoint, the importance of Heyting algebras is that they algebraize the _intuitionistic propositional calculus_ IPC in the sense of [6]. As a consequence, the axiomatic extensions of IPC (known as _superintuitionistic logics_, or si-logics for short) form a lattice that is dually isomorphic to that of varieties of Heyting algebras. This allows us to study each si-logic through the lens of its corresponding variety of Heyting algebras which, in turn, is amenable to the methods of universal algebra and duality theory. As a particular instance of this phenomenon, we recall that an si-logic \(\mathsf{L}\) is _locally tabular_ when for every \(n\in\mathbb{N}\) there are only finitely many formulas in variables \(x_{1},\ldots,x_{n}\) up to logical equivalence in \(\mathsf{L}\). On the other hand, a variety is called _locally finite_ when its finitely generated members (equivalently, its finitely generated free algebras) are finite [7, Thm. II.10.15]. It is well known that an si-logic is locally tabular iff the corresponding variety of Heyting algebras is locally finite. A fascinating problem by Bezhanishvili and Grigolia asks to determine whether it is true that a variety of Heyting algebras is locally finite iff its two-generated free algebra is finite [3, Prob. 2.4]. While this holds in the restrictive context of varieties of Heyting algebras of width two [2], in this paper we establish the following:

**Theorem 1.1**.: _For every \(n\geqslant 2\) there exists a variety of Heyting algebras whose \(n\)-generated free algebra is finite, while its \((n+1)\)-generated free algebra is infinite._

Consequently, for every \(n\geqslant 2\) there exists a nonlocally finite variety of Heyting algebras whose free \(n\)-generated algebra is finite. This result was established in the spring of 2020, at a time when the second author was supervising the master thesis [2]. Recently, an alternative proof was independently discovered by Hyttinen and Quadrellaro [12]. This motivated us to share the original proof as well.

## 2. Esakia spaces

Let \(\mathbb{X}=\langle X,\leqslant\rangle\) be a poset. We denote the _upset generated_ by a subset \(U\) of \(X\) by \[\uparrow U\coloneqq\{x\in X:\exists u\in U\text{ such that }u\leqslant x\},\] and if \(U=\uparrow U\), then \(U\) is called an _upset_. If \(U=\{x\}\), we simply write \(\uparrow x\) and call it a _principal upset_. The notion of a _downset_ and the arrow operator \(\downarrow\) are defined analogously. If \(x,y\in X\), then \(x\) is said to be an _immediate predecessor_ of \(y\) if \(x<y\) and no point in \(X\) lies between them (i.e., if \(z\in X\) is such that \(x\leqslant z\leqslant y\), then either \(x=z\) or \(y=z\)). If this is the case, we call \(y\) an _immediate successor_ of \(x\). 
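As a small computational illustration of the residuation law above (added here for convenience; it is not part of the original text), the following Python sketch realizes a concrete finite Heyting algebra — the divisors of 12 ordered by divisibility, with meet given by gcd and join by lcm — computes the implication \(b\to c\) as the largest \(d\) with \(b\wedge d\leqslant c\), and verifies the residuation law by brute force. The example lattice and all names are illustrative choices.

```python
# Illustrative sketch (not from the paper): the divisors of 12 under divisibility
# form a finite distributive lattice, hence a Heyting algebra; we compute -> by
# residuation and verify  a /\ b <= c  iff  a <= b -> c  for all triples.
from math import gcd
from itertools import product

ELEMENTS = [1, 2, 3, 4, 6, 12]  # bottom = 1, top = 12

def leq(a, b):          # a <= b  iff  a divides b
    return b % a == 0

def meet(a, b):         # greatest common divisor = lattice meet
    return gcd(a, b)

def join(a, b):         # least common multiple = lattice join
    return a * b // gcd(a, b)

def implies(b, c):
    """Heyting implication: the join (= largest) of all d with meet(b, d) <= c."""
    result = 1                       # start from the bottom element
    for d in ELEMENTS:
        if leq(meet(b, d), c):
            result = join(result, d)
    return result

# The residuation law holds for every triple of elements.
assert all(leq(meet(a, b), c) == leq(a, implies(b, c))
           for a, b, c in product(ELEMENTS, repeat=3))

print(implies(4, 6))    # 6: the largest divisor d of 12 with gcd(4, d) dividing 6
```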
**Definition 2.1**.: A triple \(\mathbb{X}=\langle X,\tau,\leqslant\rangle\) is an _Esakia space_ if it is a compact ordered topological space satisfying the following conditions: 1. \(\downarrow U\) is clopen, for every clopen \(U\); 2. _Priestley separation axiom_: for every \(x,y\in X\), \[x\nleqslant y\text{ implies that there exists a clopen upset }U\text{ such that }x\in U\text{ and }y\notin U.\] **Definition 2.2**.: A map \(f\colon\mathbb{X}\to\mathbb{Y}\) between Esakia spaces is called an _Esakia morphism_ if it it continuous, order preserving, and for every \(x\in X\) and \(y\in Y\), \[\text{if }f(x)\leqslant y,\text{ there exists some }z\geqslant x\text{ such that }f(z)=y.\] When endowed with the discrete topology, every finite poset becomes an Esakia space. In fact, this is the only way to view a finite poset as an Esakia space, because Esakia spaces are Hausdorff. In view of Esakia duality [10, 11], the category \(\mathsf{HA}\) of Heyting algebras with homomorphisms and the category \(\mathsf{ES}\) of Esakia spaces with Esakia morphisms are dually equivalent. We denote the contravariant functors witnessing this duality by \((-)_{*}\colon\mathsf{HA}\to\mathsf{ES}\) and \((-)^{*}\colon\mathsf{ES}\to\mathsf{HA}\). **Definition 2.3**.: Let \(\mathbb{X}\) be an Esakia space. 1. An _\(E\)-subspace_ of \(\mathbb{X}\) is a closed upset equipped with the subspace topology and the restriction of the order. 2. An _\(E\)-partition_ on \(\mathbb{X}\) is an equivalence relation \(R\) on \(X\) such that for all \(x,y,z\in X\): 1. if \(\langle x,y\rangle\in R\) and \(x\leqslant z\), then \(y\leqslant w\) and \(\langle z,w\rangle\in R\) for some \(w\in X\); 2. if \(\langle x,y\rangle\notin R\), then there is an _\(R\)-saturated_ clopen \(U\) (a union of equivalence classes of \(R\)) such that \(x\in U\) and \(y\notin U\). _Remark 2.4_.: It follows from the definition of an Esakia space \(\mathbb{X}\) that its principal upsets are closed. Thus, \(\uparrow x\) can be viewed as an E-subspace of \(\mathbb{X}\), for all \(x\in X\). If \(R\) is an E-partition of the Esakia space \(\mathbb{X}=\langle X,\tau,\leqslant\rangle\), we denote by \(\mathbb{X}/R\) the Esakia space obtained by endowing \(X/R\) with the quotient topology and the partial order \(\sqsubseteq\) defined as follows: \[x/R\sqsubseteq y/R\Longleftrightarrow x^{\prime}\leqslant y^{\prime}\text{ for some }x^{\prime}\in x/R\text{ and }y^{\prime}\in y/R.\] Moreover, the natural map \(f\colon\mathbb{X}\to\mathbb{X}/R\) is a surjective Esakia morphism. Conversely, the kernel \(\mathsf{Ker}(f)\) of every surjective Esakia morphism \(f\colon\mathbb{X}\to\mathbb{Y}\), is an E-partition of \(\mathbb{X}\) such that \(\mathbb{X}/\mathsf{Ker}(f)\cong\mathbb{Y}\)[4, Cor. 2.3.1]. We will rely on the following observations [4, Lems. 3.1.6 and 3.1.7]: **Lemma 2.5**.: _Let \(\mathbb{X}\) be an Esakia space and \(x,y\in X\)._ 1. _If_ \(y\) _is the unique immediate successor of_ \(x\) _and_ \(R\) _is the smallest equivalence relation that identifies_ \(x\) _and_ \(y\)_, then_ \(R\) _is an_ \(E\)_-partition. The natural map_ \(f\colon\mathbb{X}\to\mathbb{X}/R\) _is an Esakia morphism, called an_ \(\alpha\)_-reduction._ 2. _If_ \(x\) _and_ \(y\) _have exactly the same immediate successors and_ \(R\) _is the smallest equivalence relation that identifies_ \(x\) _and_ \(y\)_, then_ \(R\) _is an_ \(E\)_-partition. 
The natural map_ \(f\colon\mathbb{X}\to\mathbb{X}/R\) _is an Esakia morphism, called a_ \(\beta\)_-reduction._ **Lemma 2.6**.: _If \(f\colon\mathbb{X}\to\mathbb{Y}\) is an Esakia morphism between finite Esakia spaces, then there exists a finite sequence \(f_{1},\ldots,f_{n}\) of \(\alpha\) and \(\beta\)-reductions such that \(f=f_{n}\circ\cdots\circ f_{1}\)._ ## 3. Finitely generated Heyting algebras Finitely generated Heyting algebras can be described in terms of their Esakia duals, as we proceed to explain. **Definition 3.1**.: Given \(n\in\mathbb{N}\), let \(\mathbb{C}_{n}=\langle C_{n};\sqsubseteq\rangle\) be the poset whose universe is the set of sequences of length \(n\) of zeros and ones and whose order is defined as follows: \[\langle m_{1},\ldots,m_{n}\rangle\sqsubseteq\langle k_{1},\ldots,k_{n}\rangle \Longleftrightarrow m_{i}\leqslant k_{i},\text{for every }i\leqslant n.\] The elements of \(\mathbb{C}_{n}\) can be used to color Esakia spaces as follows. Given an Esakia space \(\mathbb{X}\) and a function \(f\colon\mathbb{X}\to\mathbb{C}_{n}\), we say that an element \(x\in X\) is _colored_ by a color \(\vec{c}\in C_{n}\) when \(f(x)=\vec{c}\). The set of elements of \(\mathbb{X}\) colored by \(\vec{c}\) will be denoted by \(\vec{c}(\mathbb{X})\). **Definition 3.2**.: Let \(\mathbb{X}\) be an Esakia space and \(n\in\mathbb{N}\). A function \(f\colon\mathbb{X}\to\mathbb{C}_{n}\) is said to be 1. a _weak_ \(n\)_-coloring_ of \(\mathbb{X}\) if \(f\) is order preserving and \(\vec{c}(\mathbb{X})\) is clopen, for all \(\vec{c}\in C_{n}\); 2. an \(n\)_-coloring_ of \(\mathbb{X}\) if it is a weak \(n\)-coloring such that every E-partition of \(\mathbb{X}\) other than the identity relation identifies two elements of \(\mathbb{X}\) of distinct color. An Esakia space \(\mathbb{X}\) is said to be \(n\)_-colorable_ when there it admits an \(n\)-coloring. Finitely generated Heyting algebras and colorings of Esakia spaces are related as follows [4, Thm. 3.1.5]. **Coloring Theorem 3.3**.: _Let \(\boldsymbol{A}\) be a Heyting algebra and \(n\in\mathbb{N}\). Then \(\boldsymbol{A}\) is \(n\)-generated if and only if \(\boldsymbol{A}_{*}\) is \(n\)-colorable._ A Heyting algebra \(\boldsymbol{A}\) is _subdirectly irreducible_ (SI for short) when it has a second greatest element. This is equivalent to the demand that \(\boldsymbol{A}_{*}\) has a least element that, moreover, is isolated [11, Prop. A.1.2]. When a poset (or, in particular, an Esakia space) \(\mathbb{X}\) has a least element \(x\), we call \(x\) the _root_ of \(\mathbb{X}\), and say that \(\mathbb{X}\) is _rooted_. Thus, an Heyting algebra \(\boldsymbol{A}\) is SI iff \(\boldsymbol{A}_{*}\) has an isolated root. In view of the following observation, in order to determine whether a variety of Heyting algebras is locally finite, it suffices to examine its finite SI members [5, Thm. 4.3]. **Theorem 3.4**.: _A variety \(\mathbb{V}\) of Heyting algebras is locally finite if and only if \(\mathbb{V}\) has, up to isomorphism, only finitely many finite \(n\)-generated SI members, for every \(n\in\mathbb{N}\)._ The following technical observation will be used throughout the paper. In its statement, \(\overline{0}\) stands for the color consisting of \(n\) zeros. **Lemma 3.5**.: _Let \(\{\mathsf{X}_{m}:m\in\mathbb{N}\}\) be a family of finite \(n\)-colorable Esakia spaces such that_ \[|X_{1}|<|X_{2}|<\cdots<|X_{m}|<\cdots\] _and whose antichains are of size \(\leqslant t\) for some \(t\in\mathbb{N}\). 
Then there are \(k\in\mathbb{N}\) and a family \(\{\mathbb{Z}_{m}:m\in\mathbb{N}\}\) of E-subspaces of spaces in \(\{\mathsf{X}_{m}:m\in\mathbb{N}\}\) such that_ 1. \(|Z_{1}|<|Z_{2}|<\cdots<|Z_{m}|<\cdots\) _and_ 2. _each_ \(\mathbb{Z}_{m}\) _admits an_ \(n\)_-coloring for which_ \(|Z_{m}\smallsetminus\vec{0}(\mathbb{Z}_{m})|\leqslant k\)_._ The proof of this lemma relies on the following easy combinatorial principle that follows, for instance, from Dilworth's Theorem [9]. **Proposition 3.6**.: _For every \(n,m\in\mathbb{N}\), if chains and antichains in a poset \(\mathsf{X}\) are, respectively, of size \(\leqslant n\) and \(\leqslant m\), then \(\mathsf{X}\) has at most \(n\times m\) elements._ Proof of Lemma 3.5.: Fix an \(n\)-coloring \(f_{m}\) on each \(\mathsf{X}_{m}\). The consider the set of colors \[D\coloneqq\{\vec{c}\in\{0,1\}^{n}:\text{for every $k\in\mathbb{N}$ there exists $m_{k}\in\mathbb{N}$ such that $k\leqslant|\vec{c}(\mathsf{X}_{m_{k}})|$}\}.\] Notice that \(D\) is nonempty, because \[|X_{1}|<|X_{2}|<\cdots<|X_{m}|<\cdots\] Furthermore, it is finite. Therefore, when viewed as a subposet of \(\mathsf{C}_{n}\), the set \(D\) has at least a maximal element \(\vec{c}\). For every \(m\in\mathbb{N}\) and \(x\in\vec{c}(\mathsf{X}_{m})\), let \(\mathsf{Y}_{m}^{x}\) be the E-subspace of \(\mathsf{X}_{m}\) with universe \(\uparrow x\). Moreover, let \(g_{m}^{x}\colon\mathsf{Y}_{m}^{x}\to\mathsf{C}_{n}\) be the function defined, for every \(y\in\uparrow x\), as \[g_{m}^{x}(y)=\begin{cases}f_{m}(y)&\text{ if $f_{m}(y)\neq\vec{c}$}\\ \vec{0}&\text{ otherwise.}\end{cases}\] We will use repeatedly the fact that \(\vec{c}\sqsubseteq g_{m}^{x}(z)\), for every element \(z\in Y_{m}^{x}\). To prove this, observe that \(x\leqslant z\), because \(Y_{m}^{x}=\uparrow x\). Moreover, \(f_{m}(x)=\vec{c}\), since \(x\in\vec{c}(\mathsf{X}_{m})\). As \(f_{m}\) is order preserving, we conclude that \(\vec{c}=f_{m}(x)\sqsubseteq f_{m}(z)\), as desired. **Claim 3.7**.: _The map \(g_{m}^{x}\) is an \(n\)-coloring of \(\mathsf{Y}_{m}^{x}\)._ Proof of the Claim.: To prove that \(g_{m}^{x}\) is order preserving, consider \(y,z\in\uparrow x\) such that \(y\leqslant z\). If \(f_{m}(y)=\vec{c}\), then from the definition of \(g_{m}^{x}\) it follows \[g_{m}^{x}(y)=\vec{0}\sqsubseteq g_{m}^{x}(z),\] because \(\vec{0}\) is the minimum of \(\mathsf{C}_{n}\). Then we consider the case where \(f_{m}(y)\neq\vec{c}\). Since \(\vec{c}\sqsubseteq f_{m}(y)\), we get \(\vec{c}\sqsubset f_{m}(y)\). As \(y\leqslant z\) and \(f_{m}\) is order preserving, \(\vec{c}\sqsubset f_{m}(z)\) also holds. From the definition of \(g_{m}^{x}\) it follows that \(g_{m}^{x}(y)=f_{m}(y)\) and \(f_{m}(z)=g_{m}^{x}(z)\). Since \(y\leqslant z\) and \(f_{m}\) is order preserving, we obtain that \[g_{m}^{x}(y)=f_{m}(y)\sqsubseteq f_{m}(z)=g_{m}^{x}(z).\] We conclude that \(g_{m}^{x}\) is order preserving. Furthermore, recall that \(\mathsf{Y}_{m}^{x}\) is finite and, therefore, its topology is discrete. As a consequence, \(\vec{a}(\mathsf{Y}_{m}^{x})\) is clopen, for every color \(\vec{a}\in\{0,1\}^{n}\). Hence, \(g_{m}^{x}\) is a weak \(n\)-coloring of \(\mathsf{Y}_{m}^{x}\). To prove that \(g_{m}^{x}\) is also an \(n\)-coloring, consider an E-partition \(R\) of \(\mathsf{Y}_{m}^{x}\) other than the identity relation on \(Y_{m}^{x}\). Moreover, let \(id_{X_{m}}\) be the identity relation on \(\mathsf{X}_{m}\). We will prove that the union \(S\coloneqq R\cup id_{X_{m}}\) is an E-partition on \(\mathsf{X}_{m}\). 
To this end, consider \(y,z,w\in X_{m}\) such that \(\langle y,z\rangle\in S\) and \(y\leqslant w\). We need to show that there exists \(u\in X_{m}\) such that \(u\geqslant z\) and \(\langle w,u\rangle\in S\). If \(y=z\), then we are done taking \(u\coloneqq w\). Then we consider the case where \(y\neq z\). From the definition of \(S\) it follows that \(\langle y,z\rangle\in R\). Since \(y\leqslant w\) and \(\Upsilon_{m}^{x}\) is an upset of \(\mathbb{X}_{m}\), the element \(w\) also belongs to \(\Upsilon_{m}^{x}\). Since \(R\) is a an E-partition on \(\Upsilon_{m}^{x}\), there exists \(u\geqslant z\) such that \(\langle w,u\rangle\in R\subseteq S\). Since the topology of \(\mathbb{X}_{m}\) is discrete (because \(\mathbb{X}_{m}\) is finite), this shows that \(S\) is an E-partition on \(\mathbb{X}_{m}\). Now, recall that there exists a pair \(\langle y,z\rangle\in R\) such that \(y\neq z\). As \(R\subseteq S\), the relation \(S\) is an E-partition on \(\mathbb{X}_{m}\) different from the identity relation \(\mathit{id}_{X_{m}}\). Since \(f_{m}\) is an \(n\)-coloring of \(\mathbb{X}_{m}\), there is a pair \(\langle y^{*},z^{*}\rangle\in S\) such that \(f_{m}(y^{*})\neq f_{m}(z^{*})\). Clearly, \(y^{*}\neq z^{*}\). As \(S=R\cup\mathit{id}_{X_{m}}\), we obtain \(\langle y^{*},z^{*}\rangle\in R\). We will prove that \(g_{m}^{x}(y^{*})\neq g_{m}^{x}(z^{*})\). If both \(f_{m}(y^{*})\) and \(f_{m}(z^{*})\) are different from \(\vec{c}\), the definition of \(g_{m}^{x}\) implies that \[g_{m}^{x}(y^{*})=f_{m}(y^{*})\neq f_{m}(z^{*})=g_{m}^{x}(z^{*}),\] as desired. Then we consider the case where \(f_{m}(y^{*})\) or \(f_{m}(z^{*})\) is \(\vec{c}\). By symmetry, we may assume that \(f_{m}(y^{*})=\vec{c}\). By the definition of \(g_{m}^{x}\), this yields \(g_{m}^{x}(y^{*})=\vec{0}\). Moreover, as \(f_{m}(y^{*})\neq f_{m}(z^{*})\), we have \(f_{m}(z^{*})\neq\vec{c}\) and, therefore, \(\vec{c}\sqsubset f_{m}(z^{*})\). By the definition \(g_{m}^{x}\), we obtain \[g_{m}^{x}(y^{*})=\vec{0}\sqsubseteq\vec{c}\sqsubset f_{m}(z^{*})=g_{m}^{x}(z^ {*}).\] Therefore, \(R\) identifies two distinct elements of \(\Upsilon_{m}^{x}\), namely \(y^{*}\) and \(z^{*}\), colored differently by \(g_{m}^{x}\). Hence, we conclude that \(g_{m}^{x}\) is an \(n\)-coloring of \(\Upsilon_{m}^{x}\). The sequence of posets \(\Upsilon_{m}^{x}\) with the colorings \(g_{m}^{x}\) has the following property. **Claim 3.8**.: _There exists \(k\in\mathbb{N}\) such that for all \(m\in\mathbb{N}\) and \(x\in\vec{c}(\mathbb{X}_{m})\),_ \[|\{z\in Y_{m}^{x}:g_{m}^{x}(z)\neq\vec{0}\}|\leqslant k.\] Proof of the Claim.: Let \(M\) be the set of colors in \(\{0,1\}^{n}\) strictly larger than \(\vec{c}\) in \(\mathbb{C}_{n}\). Since \(M\) is finite and, by assumption, \(\vec{c}\) is maximal in \(D\), there exists \(k\in\mathbb{N}\) such that \[|\{z\in X_{m}:f_{m}(z)\in M\}|\leqslant k,\] for all \(m\in\mathbb{N}\). Consequently, for every \(m\in\mathbb{N}\) and \(x\in\vec{c}(\mathbb{X}_{m})\), \[|\{z\in Y_{m}^{x}:g_{m}^{x}(z)\in M\}|\leqslant k. \tag{1}\] Now, consider \(m\in\mathbb{N}\) and \(x\in\vec{c}(\mathbb{X}_{m})\). We will prove that \[\{z\in Y_{m}^{x}:g_{m}^{x}(z)\neq\vec{0}\}\subseteq\{z\in Y_{m}^{x}:g_{m}^{x} (z)\in M\}. \tag{2}\] To this end, consider \(z\in Y_{m}^{x}\) such that \(g_{m}^{x}(z)\neq\vec{0}\). By the definition of \(g_{m}^{x}\), this implies \(f_{m}(z)\neq\vec{c}\) and \(g_{m}^{x}(z)=f_{m}(z)\). Since \(\vec{c}\sqsubseteq f_{m}(z)\), we obtain \(\vec{c}\sqsubset f_{m}(z)=g_{m}^{x}(z)\). 
Hence, we conclude that \(g_{m}^{x}(z)\in M\). This establishes (2). Lastly, from (1) and (2) it follows that \(|\{z\in Y_{m}^{x}:g_{m}^{x}(z)\neq\vec{0}\}|\leqslant k\), as desired. **Claim 3.9**.: _For every \(k\in\mathbb{N}\) there are \(m\in\mathbb{N}\) and \(x\in\vec{c}(\mathbb{X}_{m})\) such that \(k\leqslant|Y_{m}^{x}|\)._ Proof of the Claim.: Suppose the contrary with a view to contradiction. Then there exists \(k\in\mathbb{N}\) such that \(|Y_{m}^{x}|\leqslant k\), for all \(m\in\mathbb{N}\) and \(x\in\vec{c}(\mathbb{X}_{m})\). Thus, for each \(m\in\mathbb{N}\), the chains of the subposet \(\vec{c}(\mathbb{X}_{m})\) of \(\mathbb{X}_{m}\) must have size \(\leqslant k\). Now, recall that antichains in \(\mathbb{X}_{m}\) have size \(\leqslant t\), by assumption. Therefore, antichains in \(\vec{c}(\mathbb{X}_{m})\) have also size \(\leqslant t\). Using Proposition 3.6, we conclude that \(|\vec{c}(\mathbb{X}_{m})|\leqslant k\times t\), for every \(m\in\mathbb{N}\). But this contradicts the fact that \(\vec{c}\) belongs to \(D\) In view of Claims 3.8 and 3.9, there exists a subset \[\{\mathbb{Z}_{m}:m\in\mathbb{N}\}\subseteq\{\mathbb{Y}_{m}^{x}:m\in\mathbb{N} \text{ and }x\in\vec{c}(\mathbb{X}_{m})\}\] such that 1. \(|Z_{1}|<|Z_{2}|<\cdots<|Z_{m}|<\cdots\) (this is made possible by of Claim 3.9 and the fact that the various \(\mathbb{Y}_{m}^{x}\) are finite) and 2. there exists \(k\in\mathbb{N}\) such that for every \(m\in\mathbb{N}\), \[\text{if }\mathbb{Z}_{m}=\mathbb{Y}_{m}^{x}\text{, then }|\{z\in Z_{m}:g_{m}^{x}(z)\neq\vec{0}\}|\leqslant k.\] (this is made possible by of Claim 3.8). As \(g_{m}^{x}\) is an \(n\)-coloring of \(\mathbb{Y}_{m}^{x}\) by Claim 3.7, the sequence of posets \(\{\mathbb{Z}_{m}:m\in\mathbb{N}\}\) colored with the suitable \(g_{m}^{x}\) (that is, if \(\mathbb{Z}_{m}=\mathbb{Y}_{m}^{x}\), we color \(\mathbb{Z}_{m}\) with \(g_{m}^{x}\)) satisfies the conditions in the statement. ## 4. Introducing the abominations Our first aim is to prove that for every \(n\in\mathbb{N}\) there exists a variety \(\mathsf{K}_{n}\) of Heyting algebras whose \(n\)-generated members are finite, but whose \((n+1)\)-generated ones may be infinite. To this end, we will exhibit a special Esakia space \(\mathbb{X}_{n}\), called the \(n\)_-abomination_, and let \(\mathsf{K}_{n}\) be the variety generated by the algebraic dual of \(\mathbb{X}_{n}\). The next definition, inspired by the one-point compactification of a discrete space, is instrumental in the construction of \(\mathbb{X}_{n}\). **Definition 4.1**.: The _root compactification_ of a poset \(\mathbb{X}\) is the ordered topological space obtained by adding a new minimum \(\bot\) to \(\mathbb{X}\) and declaring open a subset \(U\) of \(X\cup\{\bot\}\) provided that \[\text{if }\bot\in U\text{, then }U\text{ is cofinite.}\] **Proposition 4.2**.: _Let \(\mathbb{X}\) be a poset such that \(\downarrow\!x\) is cofinite for every \(x\in X\). Then root compactification of \(\mathbb{X}\) is an Esakia space._ Proof.: Let \(\mathbb{X}_{\bot}\) be the root compactification of \(\mathbb{X}\) with universe \(X_{\bot}=X\cup\{\bot\}\). The fact that \(\mathbb{X}_{\bot}\) is compact is an immediate consequence of its definition. To prove that it satisfies Priestley separation axiom, consider \(x,y\in X_{\bot}\) such that \(x\nleqslant y\). By assumption the downset \(\downarrow\!y\) of \(y\) in \(\mathbb{X}_{\bot}\) is cofinite. This guarantees that \(\downarrow\!y\) is open. 
As \(\bot\leqslant y\), the complement \(X_{\bot}\smallsetminus\downarrow\!y\) does not contains \(\bot\) and, therefore, is also open. It follows that \(\downarrow\!y\) is a clopen downset of \(\mathbb{X}_{\bot}\). that contains \(y\) but omits \(x\), as desired. To conclude that \(\mathbb{X}_{\bot}\) is an Esakia space, it only remains to show that downsets of opens are open. To this end, consider an open set \(U\). If \(U\) is empty, so is its downset and, since the empty set is open, we are done. Then we consider the case where \(U\) contains some element \(x\). By assumption, \(\downarrow\!x\) is cofinite, whence \(\downarrow\!U\) is also cofinite. This, in turn, implies that \(\downarrow\!U\) is open. Throughout this section, fix an integer \(n\geqslant 2\). Then consider the set \[T_{n}\coloneqq\{\langle k_{1},k_{2},k_{3}\rangle:k_{1},k_{2},k_{3}\in\mathbb{ N}\text{ are all different and }\leqslant 2^{n+1}-1\}\] and take an enumeration \[T_{n}=\{s_{0},\ldots,s_{t}\}.\] Then consider disjoint sets of distinct elements \[A \coloneqq\{a_{m}:m\in\mathbb{N}\}\] \[B \coloneqq\{b_{m}:m\in\mathbb{N}\}\] \[C \coloneqq\{c_{m,k}:k,m\in\mathbb{N}\text{ and }k\leqslant 2^{n+1}-1\}\] \[D \coloneqq\{d_{m,k}:k,m\in\mathbb{N}\text{ and }k\leqslant 2^{n+1}-1\}\] \[E^{a} \coloneqq\{e^{a}_{m,k}:k,m\in\mathbb{N}\text{ and }k\leqslant 2^{n+1}-1\}\] \[E^{b} \coloneqq\{e^{b}_{m,k}:k,m\in\mathbb{N}\text{ and }k\leqslant 2^{n+1}-1\}\] and let \[U_{n}\coloneqq A\cup B\cup C\cup D\cup E^{a}\cup E^{b}.\] Moreover, let \(\prec\) be the binary relation on \(U_{n}\) that relates two elements \(x,y\in U_{n}\), in symbols \(x\prec y\), if and only if one of the following conditions holds: 1. There is some \(s_{j}=\langle k_{1},k_{2},k_{3}\rangle\) in \(T_{n}\) and \(m\in\mathbb{N}\) such that \(m\equiv j\bmod t+1\) and \[x=a_{m}\qquad y\in\{c_{m,k_{1}},c_{m,k_{2}}\}\qquad s_{j}=\langle k_{1},k_{2},k_{3}\rangle;\] 2. There is some \(s_{j}=\langle k_{1},k_{2},k_{3}\rangle\) in \(T_{n}\) and \(m\in\mathbb{N}\) such that \(m\equiv j\bmod t+1\) and \[x=b_{m}\qquad y\in\{c_{m,k_{1}},c_{m,k_{3}}\}\qquad s_{j}=\langle k_{1},k_{2},k_{3}\rangle;\] 3. There are \(m,k,j,i\in\mathbb{N}\) such that \(1\leqslant m\) and \(k\neq j\) and \[x=c_{m,k}\qquad y\in\{e^{a}_{m-1,j},e^{b}_{m-1,i}\};\] 4. There are \(m,k,j\in\mathbb{N}\) such that \(k\neq j\) and \[x=d_{m,k}\qquad y=c_{m,j};\] 5. There are \(m,k,j\in\mathbb{N}\) such that \(k\neq j\) and \[x=e^{a}_{m,k}\qquad y\in\{d_{m,j},a_{m}\};\] 6. There are \(m,k,j\in\mathbb{N}\) such that \(k\neq j\) and \[x=e^{b}_{m,k}\qquad y\in\{d_{m,j},b_{m}\}.\] Notice that in Condition (iii) is asymmetric, in the sense that the integer \(j\) is assumed to be different from \(k\), while \(i\) might be equal \(k\). As an exemplification, the top part of the reflexive and transitive closure of \(\prec\) on \(U_{n}\) is depicted in Figure 1, under the assumption that \(s_{0}=\langle 0,1,2\rangle\). **Definition 4.3**.: The _abomination_\(\mathbb{X}_{n}\) is the root compactification of the poset obtained by endowing \(U_{n}\) with the reflexive and transitive closure of \(\prec\). The _covering relation_ of a poset \(\mathbb{Z}\) is \[R\coloneqq\{\langle x,y\rangle\in Z\times Z:x\text{ is an immediate predecessor of }y\}.\] Notice that the covering relation of \(\mathbb{X}_{n}\) is precisely \(\prec\). 
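The case split (i)-(vi) above fully determines the covering relation \(\prec\) of the abomination, and it may be easier to grasp from a small generated fragment. The following Python sketch (an added illustration, not part of the original text; all names are ad hoc) enumerates the covering pairs restricted to the levels \(m\leqslant\) m_levels.

```python
# Illustrative sketch (not from the paper): enumerate the covering relation of the
# n-abomination X_n restricted to levels m <= m_levels, following conditions (i)-(vi).
from itertools import permutations

def covering_pairs(n=2, m_levels=2):
    K = 2 ** (n + 1) - 1                      # indices k range over 0, ..., K
    T = list(permutations(range(K + 1), 3))   # the enumeration s_0, ..., s_t of T_n
    t = len(T) - 1
    pairs = set()                             # (x, y) stands for x -< y
    for m in range(m_levels + 1):
        k1, k2, k3 = T[m % (t + 1)]           # the triple s_j with j = m mod (t + 1)
        # (i), (ii): a_m and b_m lie below two of the elements c_{m,k}
        pairs |= {(("a", m), ("c", m, k1)), (("a", m), ("c", m, k2)),
                  (("b", m), ("c", m, k1)), (("b", m), ("c", m, k3))}
        for k in range(K + 1):
            if m >= 1:
                # (iii): c_{m,k} -< e^a_{m-1,j} for j != k, and c_{m,k} -< e^b_{m-1,i} for every i
                pairs |= {(("c", m, k), ("ea", m - 1, j)) for j in range(K + 1) if j != k}
                pairs |= {(("c", m, k), ("eb", m - 1, i)) for i in range(K + 1)}
            # (iv): d_{m,k} -< c_{m,j} for j != k
            pairs |= {(("d", m, k), ("c", m, j)) for j in range(K + 1) if j != k}
            # (v), (vi): e^a_{m,k} -< d_{m,j} (j != k) and e^a_{m,k} -< a_m; dually for e^b_{m,k}
            pairs |= {(("ea", m, k), ("d", m, j)) for j in range(K + 1) if j != k}
            pairs |= {(("eb", m, k), ("d", m, j)) for j in range(K + 1) if j != k}
            pairs |= {(("ea", m, k), ("a", m)), (("eb", m, k), ("b", m))}
    return pairs

print(len(covering_pairs(n=2, m_levels=2)))   # size of this finite fragment of -<
```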
**Proposition 4.4**.: _For every integer \(n\geqslant 2\), the abomination \(\mathbb{X}_{n}\) is an Esakia space._ Proof.: Let \(\leqslant\) be the reflexive and transitive closure of \(\prec\) on \(U_{n}\). We will rely on the following property of \(\mathbb{X}_{n}\). **Claim 4.5**.: _For all \(m,k,j\in\mathbb{N}\) such that \(k,j\leqslant 2^{n+1}-1\),_ \[\{a_{m+1},b_{m+1},c_{m+1,j},d_{m+1,j},e_{m+1,j}^{a},e_{m+1,j}^{b}\}\subseteq\ \downarrow e_{m,k}^{b}.\] Proof of the Claim.: Consider \(m,k\in\mathbb{N}\) such that \(k\leqslant 2^{n+1}-1\). By Condition (iii) in the definition of \(\prec\), we obtain \[c_{m+1,j}\prec e_{m,k}^{b},\,\text{for all}\ j\leqslant 2^{n+1}-1. \tag{3}\] Moreover, from Conditions (i), (ii), (iv) in the definition of \(\prec\) it follows that \[\begin{split}&\text{for all}\ j\leqslant 2^{n+1}-1\ \text{there are}\ p,q,r\leqslant 2^{n+1}-1\\ &\text{s.t.}\ a_{m+1}\prec c_{m+1,p}\ \text{and}\ b_{m+1}\prec c_{m+1,q}\ \text{and}\ d_{m+1,j}\prec c_{m+1,r}.\end{split} \tag{4}\] Lastly, by Conditions (v) and (vi) in the definition of \(\prec\), we obtain \[e_{m+1,j}^{a}\prec a_{m+1}\ \text{and}\ e_{m+1,j}^{b}\prec b_{m+1},\,\text{for all}\ j \leqslant 2^{n+1}-1. \tag{5}\] Since \(\leqslant\) is the reflexive and transitive closure of \(\prec\), the result follows immediately from Conditions (3), (4) and (5). We will use the Claim to show that principal downsets in \(\langle U_{n};\leqslant\rangle\) are cofinite. Indeed, by the Claim, this is true for principal downsets of the form \(\downarrow e_{m,k}^{b}\). Therefore, it suffices to show that for all \(x\in U_{n}\) there exist \(m,k\in\mathbb{N}\) such that \(e_{m,k}^{b}\leqslant x\). If the element \(x\) has already the form \(e_{m,k}^{b}\), this is obvious. Otherwise, one of the following cases holds for some \(m,k\in\mathbb{N}\): 1. \(x\) has the form \(b_{m}\); 2. \(x\) has the form \(d_{m,k}\); 3. \(x\) has the form \(c_{m,k}\); 4. \(x\) has the form \(e^{a}_{m,k}\); 5. \(x\) has the form \(a_{m}\).

Figure 1. The top part of the reflexive and transitive closure of \(\prec\) on \(U_{n}\).

(1): Condition (vi) in the definition of \(\prec\) implies that \(e^{b}_{m,j}\leqslant b_{m}=x\) for all \(j\leqslant 2^{n+1}-1\). (2): Let \(j\leqslant 2^{n+1}-1\) be distinct from \(k\). Condition (vi) in the definition of \(\prec\) implies that \(e^{b}_{m,j}\leqslant d_{m,k}=x\). (3): Similarly, let \(j\leqslant 2^{n+1}-1\) be distinct from \(k\). Condition (iv) in the definition of \(\prec\) implies that \(d_{m,j}\leqslant c_{m,k}\). Therefore, the result follows from case (2). (4): Condition (iii) in the definition of \(\prec\) entails that there exists \(j\leqslant 2^{n+1}-1\) distinct from \(k\) such that \(c_{m+1,j}\leqslant e^{a}_{m,k}\). Therefore, the result follows from case (3). (5): Condition (v) in the definition of \(\prec\) implies that \(e^{a}_{m,k}\leqslant a_{m}\). Therefore, the result follows from case (4). Hence, we conclude that principal downsets in \(\langle U_{n};\leqslant\rangle\) are cofinite, as desired. Together with Proposition 4.2, this implies that \(\mathbb{X}_{n}\) is an Esakia space. In order to keep proofs at a reasonable size, from now on we will assume a basic understanding of the order structure of the abominations and, consequently, omit precise references to the conditions in the definition of \(\prec\) in our arguments. We believe, however, that the reader might find it helpful to consult Figure 1 when reading them. Let \(n\) be a positive integer. 
An element \(x\) of a poset \(\mathbb{X}\) has _depth_\(\leqslant n\) if every chain in \(\uparrow x\) has at most \(n\) elements. If, moreover, \(\uparrow x\) contains a chain of \(n\) elements, we say that \(x\) has _depth_\(n\). Similarly, a rooted poset \(\mathbb{X}\) is said to have _width_\(\leqslant n\) when all the antichains in \(\mathbb{X}\) have at most \(n\) elements. If, moreover, \(\mathbb{X}\) has an antichain with \(n\) elements, we say that \(\mathbb{X}\) has _width_\(n\). An arbitrary (possibly nonrooted) poset has _width_\(\leqslant n\) (resp. \(n\)) when so do all its principal upsets. The empty poset is assumed to have width \(0\). **Proposition 4.6**.: _For every integer \(n\geqslant 2\), the \(n\)-abomination \(\mathbb{X}_{n}\) is of width \(2^{n+2}\)._ Proof.: First, observe that for every \(m\in\mathbb{N}\) the following is an antichain of size \(2^{n+2}\) in \(\mathbb{X}_{n}\): \[\{e^{a}_{m,k}:2^{n+1}-1\geqslant k\in\mathbb{N}\}\cup\{e^{b}_{m,k}:2^{n+1}-1 \geqslant k\in\mathbb{N}\}.\] Since \(\mathbb{X}_{n}\) is rooted, this implies that \(\mathbb{X}_{n}\) cannot have width \(\leqslant m\) for some \(m<2^{n+2}\). Therefore, to conclude the proof, it suffices to show that antichains in \(\mathbb{X}_{n}\) have size at most \(2^{n+2}\). To this end, we rely on the following property of antichains in \(\mathbb{X}_{n}\). **Claim 4.7**.: _For every antichain \(Y\) in \(\mathbb{X}_{n}\),_ \[\text{if }Y\cap\{a_{m},b_{m},c_{m,k},d_{m,k}\}\neq\emptyset\text{ for some }m,k\in\mathbb{N},\text{ then }|Y|\leqslant 2^{n+2}. \tag{6}\] Proof of the Claim.: We have four cases, depending of whether \(a_{m},b_{m},c_{m,k}\), or \(d_{m,k}\) belongs to \(Y\). Suppose first that \(d_{m,k}\in Y\). The definition of \(\prec\) implies that \[X_{n}\smallsetminus(\uparrow d_{m,k}\cup\downarrow d_{m,k})=\{a_{m},b_{m},c_{m,k},e^{a}_{m,k},e^{b}_{m,k}\}\cup\{d_{m,j}:k\neq j\leqslant 2^{n+1}-1\}. \tag{7}\] Moreover, since \(d_{m,k}\in Y\) and \(Y\) is an antichain, \[Y\subseteq\{d_{m,k}\}\cup(X_{n}\smallsetminus(\uparrow d_{m,k}\cup\downarrow d_{ m,k})).\] Together with Conditions (7), this implies \(|Y|\leqslant 2^{n+1}+5\). As, by assumption \(n\geqslant 2\), we obtain \(5\leqslant 2^{n+1}\) and, therefore, \[|Y|\leqslant 2^{n+1}+5\leqslant 2^{n+1}+2^{n+1}=2^{n+2}.\] Consequently, we may assume, without loss of generality, that \(Y\) is disjoint from the set \(\{d_{m,j}:j\leqslant 2^{n+1}-1\}\). Then we consider the case where \(c_{m,k}\in Y\). The definition of \(\prec\) implies that \[X_{n}\smallsetminus(\uparrow c_{m,k}\cup\downarrow c_{m,k})\subseteq\ \{a_{m},b_{m},d_{m,k},e_{m-1,k}^{a}\}\cup\{c_{m,j}:k\neq j \leqslant 2^{n+1}-1\}.\] Moreover, since \(c_{m,k}\in Y\) and \(Y\) is an antichain, \(Y\subseteq\{c_{m,k}\}\cup(X_{n}\smallsetminus(\uparrow c_{m,k}\cup\downarrow c_{ m,k}))\). Together with the above display, this implies that \(|Y|\leqslant 2^{n+1}+4\leqslant 2^{n+2}\), as desired. As a consequence, we may assume, without loss of generality, that \(Y\) is disjoint from the set \(\{c_{m,j}:j\leqslant 2^{n+1}-1\}\). Then we consider the case where \(b_{m}\in Y\). 
The definition of \(\prec\) implies that \[X_{n}\smallsetminus(\uparrow b_{m}\cup\downarrow b_{m}) = \{e_{m,k}^{a}:k\leqslant 2^{n+1}-1\}\cup\{d_{m,k}:k\leqslant 2 ^{n+1}-1\}\cup\{a_{m}\}\cup \tag{8}\] \[\{c_{m,k}:k\leqslant 2^{n+1}-1\text{ and }k\notin\{k_{1},k_{3}\}\},\] where \(k_{1}\) and \(k_{3}\) are the unique positive integers with the following property: the unique positive integer \(j\leqslant 2^{n+1}-1\) such that \(m\equiv j\bmod t+1\) is such that \(s_{j}=\langle k_{1},k_{2},k_{3}\rangle\), for some positive integer \(k_{2}\). Moreover, since \(b_{m}\in Y\) and \(Y\) is an antichain, \[Y\subseteq\{b_{m}\}\cup(X_{n}\smallsetminus(\uparrow b_{m}\cup\downarrow b_{m})).\] Together with Condition (8) and the fact that \(Y\) does not contain elements of the form \(c_{m,k}\) or \(d_{m,k}\), this yields \[Y\subseteq\{a_{m},b_{m}\}\cup\{e_{m,k}^{a}:k\leqslant 2^{n+1}-1\}.\] Hence, we conclude that \(|Y|\leqslant 2^{n+1}+2\leqslant 2^{n+2}\). Lastly, we consider the case where \(a_{m}\in Y\). The definition of \(\prec\) implies that \[X_{n}\smallsetminus(\uparrow a_{m}\cup\downarrow a_{m}) = \{e_{m,k}^{b}:k\leqslant 2^{n+1}-1\}\cup\{d_{m,k}:k\leqslant 2 ^{n+1}-1\}\cup\{b_{m}\}\cup \tag{9}\] \[\{c_{m,k}:k\leqslant 2^{n+1}-1\text{ and }k\notin\{k_{1},k_{2}\}\},\] where \(k_{1}\) and \(k_{2}\) are the unique positive integers with the following property: the unique positive integer \(j\leqslant 2^{n+1}-1\) such that \(m\equiv j\bmod t+1\) is such that \(s_{j}=\langle k_{1},k_{2},k_{3}\rangle\), for some positive integer \(k_{3}\). Moreover, since \(a_{m}\in Y\) and \(Y\) is an antichain, \[Y\subseteq\{a_{m}\}\cup(X_{n}\smallsetminus(\uparrow a_{m}\cup\downarrow a_{m})).\] Together with Condition (8) and the fact that \(Y\) does not contain elements of the form \(c_{m,k}\) or \(d_{m,k}\), this yields \[Y\subseteq\{a_{m},b_{m}\}\cup\{e_{m,k}^{b}:k\leqslant 2^{n+1}-1\}.\] Hence, \(|Y|\leqslant 2^{n+1}+2\leqslant 2^{n+2}\). This concludes the proof of the Claim. We are now ready to prove that every antichain \(Y\) in \(\mathbb{X}_{n}\) has cardinality at most \(2^{n+2}\). First, the Claim guarantees that if \(Y\) contains an element of the form \(a_{m},b_{m},c_{m,k}\), or \(d_{m,k}\), then \(|Y|\leqslant 2^{n+2}\). Furthermore, if \(Y\) contains the minimum \(\bot\) of \(\mathbb{X}_{n}\), then necessarily \(Y=\{\bot\}\) and, therefore, \(|Y|=1\leqslant 2^{n+2}\). Because of this, we may assume that \[Y\subseteq\{e_{m,k}^{a}:m\in\mathbb{N}\text{ and }k\leqslant 2^{n+1}-1\}\cup\{e_{m,k}^{b}:m\in\mathbb{N}\text{ and }k\leqslant 2^{n+1}-1\}.\] Let us suppose, with a view to contradiction, that \(e_{m,i}^{a},e_{m^{\prime},j}^{a}\in Y\), for some \(m\neq m^{\prime}\in\mathbb{N}\) and \(i,j\leqslant 2^{n+1}-1\). Without loss of generality, we assume \(m<m^{\prime}\). As \(n\geqslant 2\), we know \(2^{n+1}-1\geqslant 7\), so there are \(k\neq k^{\prime}\leqslant 2^{n+1}-1\), both distinct from \(i\) and \(j\). It follows from the definition of \(\mathbb{X}_{n}\) that \[e^{a}_{m^{\prime},j}\leqslant d_{m^{\prime},k}\leqslant c_{m^{\prime},k^{\prime }}\leqslant e^{a}_{m^{\prime}-1,i}\leqslant d_{m^{\prime}-1,k}\leqslant c_{m^{ \prime}-1,k^{\prime}}\leqslant\cdots\leqslant e^{a}_{m,i},\] contradicting the assumption that \(Y\) is an antichain. An analogous argument shows that if \(e^{b}_{m,i^{\prime}},e^{b}_{m^{\prime},j}\in Y\) for some \(m,m^{\prime}\in\mathbb{N}\) and \(i,j\leqslant 2^{n+1}-1\), then we must have \(m=m^{\prime}\). 
Suppose now that \(e^{a}_{m,i},e^{b}_{m^{\prime},j}\in Y\), for some \(m\neq m^{\prime}\in\mathbb{N}\) and \(i,j\leqslant 2^{n+1}-1\). If \(m<m^{\prime}\), then the definition of \(\mathbb{X}_{n}\) entails \[e^{b}_{m^{\prime},j}\leqslant d_{m^{\prime},k}\leqslant c_{m^{\prime},k^{ \prime}}\leqslant e^{a}_{m^{\prime}-1,i},\] for some \(k\neq k^{\prime}\leqslant 2^{n+1}-1\), both distinct from \(i\) and \(j\). If \(m=m^{\prime}-1\), then the above display already contradicts the assumption that \(Y\) is an antichain. If not, then we repeat our previous argument, again contradicting the aforementioned assumption. If \(m^{\prime}<m\), we instead use the following inequality to derive a contradiction in a similar manner as above \[e^{a}_{m,i}\leqslant d_{m,k}\leqslant c_{m,k^{\prime}}\leqslant e^{b}_{m-1,j}.\] From this discussion, we conclude that if an antichain \(Y\) is contained in \[\{e^{a}_{m,k}:m\in\mathbb{N}\text{ and }k\leqslant 2^{n+1}-1\}\cup\{e^{b}_{m, k}:m\in\mathbb{N}\text{ and }k\leqslant 2^{n+1}-1\}.\] then \(Y\) must be contained in \[\{e^{a}_{m,k}:2^{n+1}-1\geqslant k\in\mathbb{N}\}\cup\{e^{b}_{m,k}:2^{n+1}-1 \geqslant k\in\mathbb{N}\}.\] for some \(m\in\mathbb{N}\), and therefore \(|Y|\leqslant 2^{n+2}\), as desired. **Proposition 4.8**.: \(\mathbb{X}_{n}\) _is \((n+1)\)-colorable, for every integer \(n\geqslant 2\)._ Proof.: To prove that \(\mathbb{X}_{n}\) is \((n+1)\)-colorable, notice that it has \(2^{n+1}\) maximal elements, namely \[c_{0,0},\ldots,c_{0,2^{n+1}-1}.\] Having at our disposal \(2^{n+1}\) colors, we can color each maximal with a distinct color. Lastly, we color all the nonmaximal elements by the constant sequence \(\vec{0}\). Because of the definition of the topology and order of \(\mathbb{X}_{n}\), this is a weak \((n+1)\)-coloring of \(\mathbb{X}_{n}\). To prove that it is an \((n+1)\)-coloring, it only remains to show that every E-partition \(R\) on \(\mathbb{X}_{n}\) other than the identity relation identifies a pair of elements with distinct color. Suppose the contrary, with a view to contradiction, i.e., that there is an E-partition \(R\) on \(\mathbb{X}_{n}\) distinct from the identity that does not identify any pair of elements of the distinct color. Observe that there are distinct \(\hat{x},\hat{y}\in X_{n}\smallsetminus\{\bot\}\) such that \(\langle\hat{x},\hat{y}\rangle\in R\). To prove this, recall that there are distinct \(x\) and \(y\) such that \(\langle x,y\rangle\in R\). If \(x\) and \(y\) are different from \(\bot\), we are done. Then suppose that \(y=\bot\), in which case \(x\neq\bot\). By the definition of the order of \(\mathbb{X}_{n}\), there is an element \(\hat{y}>\bot\) such that \(x\nleqslant\hat{y}\). Since \(R\) is an E-partition and \(\langle x,\bot\rangle\in R\), there is \(\hat{x}\geqslant x\) such that \(\langle\hat{x},\hat{y}\rangle\in R\). By construction, \(\hat{x}\) and \(\hat{y}\) are different and other than \(\bot\), as desired. Recall that the proof of Proposition 4.4 established \(\mathbb{X}_{n}\) as an Esakia space whose principal downsets are cofinite. This, together with the fact that \(\hat{x}\) and \(\hat{y}\) are different from \(\bot\), entails that the upset \(\uparrow\{\hat{x},\hat{y}\}\) is closed and finite. Consequently, it forms a finite E-subspace \(\boldsymbol{Y}\) of \(\mathbb{X}_{n}\). Notice that the restriction \(S\coloneqq R\cap(Y\times Y)\) is an E-partition on \(\boldsymbol{Y}\). 
Furthermore, since \(\boldsymbol{Y}\) is finite, the E-partition \(S\) can be obtained as the kernel of a composition of finitely many \(\alpha\) and \(\beta\)-reductions each of which does not identify any pair of elements of different color. Consequently, it must be possible to perform on \(\mathbf{Y}\) at least an \(\alpha\) or a \(\beta\)-reduction \(f\) that identifies two distinct elements \(z\) and \(v\) of the same color. First, suppose that \(f\) is a \(\beta\)-reduction. Then \(z\) and \(v\) have the same immediate successors. By the definition of the order of \(\mathbb{X}_{n}\), if two distinct elements have the same immediate successors, they must be maximal. In particular, \(z\) and \(v\) are two distinct maximal elements. But this implies that they have different color, a contradiction. Then \(f\) must be an \(\alpha\)-reduction, in which case we can assume, by symmetry, that \(v\) is the unique immediate successor of \(z\). However, by construction, no element of \(\mathbb{X}_{n}\) has precisely one immediate successor, again a contradiction. Hence, we conclude that \(\mathbb{X}_{n}\) is \((n+1)\)-colorable.

**Corollary 4.9**.: _Let \(n\) be an integer \(\geqslant 2\). The algebra \(\mathbb{X}_{n}^{*}\) is infinite and \((n+1)\)-generated. Consequently, the variety \(\mathbb{V}(\mathbb{X}_{n}^{*})\) is not locally finite._

Proof.: The algebra \(\mathbb{X}_{n}^{*}\) is \((n+1)\)-generated by Theorem 3.3 and Proposition 4.8. Furthermore, since \(\mathbb{X}_{n}\) is infinite, so is \(\mathbb{X}_{n}^{*}\).

## 5. A combinatorial lemma

For every \(n\in\mathbb{N}\), consider the set \[P_{n}\coloneqq\{y_{m,i}:m,i\in\mathbb{N}\text{ and }i\leqslant 2^{n+1}-1\}.\] We endow \(P_{n}\) with a binary relation \(\prec\) defined as follows, for every \(x,z\in P_{n}\): \[x\prec z\Longleftrightarrow x=y_{m,i}\text{ and }z=y_{m-1,j}\text{, for some }m,i,j\in\mathbb{N}\text{ such that }i\neq j.\] The poset obtained by endowing \(P_{n}\) with the reflexive and transitive closure of \(\prec\) is depicted in Figure 2.

**Definition 5.1**.: Let \(\mathbf{Y}_{n}\) be the root compactification of the poset obtained by endowing \(P_{n}\) with the reflexive and transitive closure of \(\prec\).

Notice that \(\prec\) is the covering relation of the poset \(\mathbb{Y}_{n}\) underlying \(\mathbf{Y}_{n}\). Furthermore, for every \(n\geqslant 1\) the principal downsets of \(\langle P_{n};\leqslant\rangle\) are cofinite, so Proposition 4.2 guarantees that \(\mathbf{Y}_{n}\) is an Esakia space.

**Lemma 5.2**.: _Let \(n\in\mathbb{N}\) and let \(\mathbf{V}\) be a finite upset of \(\mathbf{Y}_{n}\) endowed with a weak \(n\)-coloring. Then there are \(k\in\mathbb{N}\), Esakia spaces \(\mathbf{V}_{0},\ldots,\mathbf{V}_{k}\) and \(\beta\)-reductions \(f_{i}\colon\mathbf{V}_{i}\to\mathbf{V}_{i+1}\), for \(i<k\), such that: (i) \(\mathbf{V}_{0}=\mathbf{V}\) and each \(\mathbf{V}_{i+1}\) is obtained by applying \(f_{i}\) to \(\mathbf{V}_{i}\); (ii) \(\operatorname{Ker}(f_{k-1}\circ\cdots\circ f_{0})\) does not identify any pair of elements of distinct color; (iii) for every \(m\in\mathbb{N}\) such that \(y_{m,0},\ldots,y_{m,2^{n+1}-1}\in V\), there are \(i<j\) such that \(\langle y_{m,i},y_{m,j}\rangle\in\operatorname{Ker}(f_{k-1}\circ\cdots\circ f_{0})\)._

Proof.: We reason by induction on \(n\). For the base case, let \(n=0\), so that a weak \(0\)-coloring colors all the elements of \(\mathbf{V}\) with the same color. We may assume that there is at least one \(m\in\mathbb{N}\) such that \(y_{m,0},y_{m,1}\in V\); let \(k\) be the largest such \(m\). Set \(\mathbf{V}_{0}\coloneqq\mathbf{V}\), let \(f_{0}\) be the \(\beta\)-reduction on \(\mathbf{V}_{0}\) that identifies the maximal elements \(y_{0,0}\) and \(y_{0,1}\), and let \(\mathbf{Z}\) be its codomain. We set \(\mathbf{V}_{1}\coloneqq\mathbf{Z}\). Furthermore, let \(i\) be a positive integer \(<k\) and suppose we already defined \(\mathbf{V}_{i}\) and \(f_{i}\colon\mathbf{V}_{i}\to\mathbf{V}_{i+1}\).
Then let \(f_{i+1}\) be the \(\beta\)-reduction on \(\mathbf{V}_{i+1}\) that identifies \(f_{i}\circ\cdots\circ f_{0}(y_{i+1,0})\) and \(f_{i}\circ\cdots\circ f_{0}(y_{i+1,1})\) and let \(\mathbf{V}_{i+2}\) be the codomain of \(f_{i+1}\). Clearly, \(\mathbf{V}_{0},\ldots,\mathbf{V}_{k+1}\) and \(f_{0},\ldots,f_{k}\) are, respectively, sequences of Esakia spaces and \(\beta\)-reductions satisfying conditions (i) and (iii). Finally, condition (ii) is satisfied, because all the elements of \(\mathbf{V}\) have the same color.

For the inductive step, take a positive integer \(n\) and suppose that the statement holds for every nonnegative integer \(<n\). Let also \(\mathbf{V}\) be a finite upset of \(\mathbf{Y}_{n}\) and fix a weak \(n\)-coloring on \(\mathbf{V}\). As in the base case, we can assume that there is at least one \(m\in\mathbb{N}\) such that \(y_{m,0},\ldots,y_{m,2^{n+1}-1}\in V\). Then set \(\mathbf{V}_{0}\coloneqq\mathbf{V}\). First, we perform a sequence of \(\beta\)-reductions identifying all maximal elements of \(\mathbf{V}\), i.e., all elements in \(\{y_{0,0},\ldots,y_{0,2^{n+1}-1}\}\), that have the same color.

Figure 2. The reflexive and transitive closure of \(\prec\) on \(P_{n}\).

Figure 3. The Esakia space \(\mathbf{Y}_{0}\).

This gives us a sequence of Esakia spaces \(\mathbf{V}_{0},\ldots,\mathbf{V}_{p_{0}}\) with \(\beta\)-reductions \(f_{0},\ldots,f_{p_{0}-1}\) satisfying conditions (i) and (ii), as well as (iii) in the restricted case where \(m=0\). Now, notice that the elements of depth \(\geqslant 2\) of \(\mathbf{V}_{p_{0}}\) are the images under \(f_{p_{0}-1}\circ\cdots\circ f_{0}\) of the elements of \(V\) of the form \(y_{i,j}\), for \(i\geqslant 1\). Since \(f_{p_{0}-1}\circ\cdots\circ f_{0}\) respects depth (being a composition of \(\beta\)-reductions), we can assume that the elements of depth \(\geqslant 2\) of \(\mathbf{V}_{p_{0}}\) are precisely the elements of \(V\) of the form \(y_{i,j}\) with \(i\geqslant 1\). Furthermore, notice that \(\mathbf{V}_{p_{0}}\) inherits the weak \(n\)-coloring of \(\mathbf{V}\), because the map \[f_{p_{0}-1}\circ\cdots\circ f_{0}\colon\mathbf{V}\to\mathbf{V}_{p_{0}}\] does not identify elements of distinct color. If it is possible to identify all the elements of depth \(2\) of \(\mathbf{V}_{p_{0}}\) with the same color by means of a series of \(\beta\)-reductions, we do it and obtain a sequence of finite posets \(\mathbf{V}_{p_{0}},\mathbf{V}_{p_{0}+1},\ldots,\mathbf{V}_{p_{1}}\) and of \(\beta\)-reductions \(f_{p_{0}},\ldots,f_{p_{1}-1}\) such that the sequences of Esakia spaces \(\mathbf{V}_{0},\ldots,\mathbf{V}_{p_{1}}\) and \(\beta\)-reductions \(f_{0},\ldots,f_{p_{1}-1}\) satisfy conditions (i) and (ii), as well as (iii) in the restricted case where \(m\leqslant 1\). Then, if it is possible to identify all the elements of depth \(3\) of \(\mathbf{V}_{p_{1}}\) with the same color by means of a series of \(\beta\)-reductions, we do it and obtain a sequence of finite posets \(\mathbf{V}_{p_{1}},\mathbf{V}_{p_{1}+1},\ldots,\mathbf{V}_{p_{2}}\) and of \(\beta\)-reductions \(f_{p_{1}},\ldots,f_{p_{2}-1}\) such that the sequences of Esakia spaces \(\mathbf{V}_{0},\ldots,\mathbf{V}_{p_{2}}\) and \(\beta\)-reductions \(f_{0},\ldots,f_{p_{2}-1}\) satisfy conditions (i) and (ii), as well as (iii) in the restricted case where \(m\leqslant 2\). 
We iterate this process for as long as possible and obtain a sequence of Esakia spaces \(\mathbf{V}_{0},\ldots,\mathbf{V}_{p_{k}}\) and \(\beta\)-reductions \(f_{0},\ldots,f_{p_{k}-1}\) that satisfy conditions (i) and (ii), as well as (iii) in the restricted case where \(m\leqslant k\). If \(k\) is the largest nonnegative integer \(m\) such that \(y_{m,0},\ldots,y_{m,2^{n+1}-1}\in V\), then we are done. Then we consider the case where \(y_{k+1,0},\ldots,y_{k+1,2^{n+1}-1}\in V\). For the sake of simplicity, as in the case of \(\mathbf{V}_{p_{0}}\), we can assume that the elements of depth \(\geqslant k+2\) of \(\mathbf{V}_{p_{k}}\) are precisely the elements of \(\mathbf{V}\) of the form \(y_{i,j}\) with \(i\geqslant k+1\). Consequently, \(y_{k+1,0},\ldots,y_{k+1,2^{n+1}-1}\in V_{p_{k}}\). By assumption, it is not possible to identify all elements in \(\{y_{k+1,0},\ldots,y_{k+1,2^{n+1}-1}\}\) of the same color by applying a series of \(\beta\)-reductions to \(\mathbf{V}_{p_{k}}\). This means that there are \(i<j\) such that \(y_{k+1,i}\) and \(y_{k+1,j}\) have the same color, but their immediate successors are different in \(\mathbf{V}_{p_{k}}\). Since, by construction, \(k\geqslant 0\), we know that \(y_{k+1,i}\) and \(y_{k+1,j}\) are not maximal and their immediate successors have depth \(k+1\). By the definition of \(\mathbf{V}_{p_{k}}\) and \(\mathbf{Y}_{n}\), if the immediate successors of \(y_{k+1,i}\) and \(y_{k+1,j}\) in \(\mathbf{V}_{p_{k}}\) are different, then the map \(f_{p_{k}-1}\circ\cdots\circ f_{0}\) takes different values on \(y_{k,i}\) and \(y_{k,j}\). In particular, this means that \(y_{k,i}\) and \(y_{k,j}\) have different color. Therefore, at least one of them is colored by a color different from the constant sequence with value one. By symmetry, we can assume that \(y_{k,i}\) is colored by a color \(\vec{c}\leqslant\langle 0,1,1,\ldots,1\rangle\). We shall see that every element of depth \(\geqslant k+2\) of \(\mathbf{V}_{p_{k}}\) is colored by colors \(\leqslant\langle 0,1,1,\ldots,1\rangle\). By definition of \(\mathbf{Y}_{n}\), it will be enough to prove this for the elements \(y_{k+1,0},\ldots,y_{k+1,2^{n+1}-1}\). To this end, notice that all the elements in \(\{y_{k+1,0},\ldots,y_{k+1,2^{n+1}-1}\}\smallsetminus\{y_{k+1,i}\}\) are below \(y_{k,i}\). Thus, since the color of \(y_{k,i}\) is \(\leqslant\langle 0,1,1,\ldots,1\rangle\), so are the colors of the elements in \(\{y_{k+1,0},\ldots,y_{k+1,2^{n+1}-1}\}\smallsetminus\{y_{k+1,i}\}\). Since \(y_{k+1,i}\) and \(y_{k+1,j}\) are different and of the same color, we conclude that also the color of \(y_{k+1,i}\) is \(\leqslant\langle 0,1,1,\ldots,1\rangle\). Now, let \(\vec{c}_{1},\ldots,\vec{c}_{q}\) be the list of distinct colors that are actually used to color elements in \(\{y_{k+1,0},\ldots,y_{k+1,2^{n+1}-1}\}\). As we mentioned, \(\vec{c}_{i}\leqslant\langle 0,1,1,\ldots,1\rangle\) for every \(i\leqslant q\). For every \(i\leqslant q\), let \(m_{i}\) be the cardinality of the largest set of elements in \[\{y_{k+1,0},\ldots,y_{k+1,2^{n+1}-1}\}\cap\vec{c}_{i}(\mathbf{V}_{p_{k}})\] with the same immediate successors in \(\mathbf{V}_{p_{k}}\). We have two cases: * either \(m_{1}+\cdots+m_{q}\geqslant 2^{n}\), or * \(m_{1}+\cdots+m_{q}<2^{n}\). First, suppose that condition (C.1) holds. 
In this case, there is a sequence \(M_{1},\ldots,M_{t}\) with \(t\leqslant q\) of subsets of \(\{y_{k+1,0},\ldots,y_{k+1,2^{n+1}-1}\}\) such that each \(M_{i}\) is a set of elements of color \(\vec{c}_{i}\) with the same immediate successors in \(\mathbf{V}_{p_{k}}\) such that \[|M_{1}|+\cdots+|M_{t}|=2^{n}.\] Notice that the subposet of \(\mathbf{Y}_{n}\) with universe \[\{y_{i,j}\in Y_{n}:i\geqslant k+1\text{ and }y_{k+1,j}\in M_{1}\cup\cdots\cup M _{t}\}\] is isomorphic to the poset underlying \(\mathbf{Y}_{n-1}\). Under this identification, let \(\mathbf{V}^{-}\) be the finite E-subspace of \(\mathbf{Y}_{n-1}\) with universe \(Y_{n-1}\cap V\). Notice that the set of maximal elements of \(\mathbf{V}^{-}\) is \(M_{1}\cup\cdots\cup M_{t}\). Furthermore, the weak \(n\)-coloring of \(\mathbf{V}\) restricts to a weak \((n-1)\)-coloring of \(\mathbf{V}^{-}\), because all elements of \(\mathbf{V}^{-}\) are colored with colors \(\leqslant\langle 0,1,1,1\ldots,1\rangle\). Therefore, by induction hypothesis, there is a sequence of Esakia spaces \(\mathbf{V}_{0}^{-},\ldots,\mathbf{V}_{j}^{-}\) with \(\beta\)-reductions \(g_{i}\colon\mathbf{V}_{i}^{-}\to\mathbf{V}_{i+1}^{-}\) for all \(j>i\in\mathbb{N}\) such that \(\mathbf{V}_{0}^{-}=\mathbf{V}^{-}\) and: 1. \(\operatorname{Ker}(g_{j}\circ\cdots\circ g_{0})\) does not identify any pair of elements of distinct color; 2. for every \(m\in\mathbb{N}\) such that \(\{y_{m,j}:y_{k+1,i}\in M_{1}\cup\cdots\cup M_{t}\}\subseteq V^{-}\), there are \(i<j\) such that \(y_{k+1,i},y_{k+1,j}\in M_{1}\cup\cdots\cup M_{t}\) and \(\langle y_{m,i},y_{m,j}\rangle\in\operatorname{Ker}(g_{j}\circ\cdots\circ g_{ 0})\). Now, we can always assume that the \(\beta\)-reductions \(g_{0},\ldots,g_{j}\) are ordered as follows: first we have the \(\beta\)-reductions \(g_{0},\ldots,g_{s_{1}}\) that identify pairs of elements of depth \(1\) of \(\mathbf{V}^{-}\), then those that identify pairs of elements of depth \(2\) of \(\mathbf{V}^{-}\), in symbols \(g_{s_{1}+1},\ldots,g_{s_{2}}\), and so on. For each \(\beta\)-reduction \(g_{i}\colon\mathbf{V}_{i}^{-}\to\mathbf{V}_{i+1}^{-}\), we shall define a \(\beta\)-reduction \[f_{p_{k}+i}\colon\mathbf{V}_{p_{k}+i}\to\mathbf{V}_{p_{k}+i+1}\] as follows. First, \(g_{0}\) identifies a pair \(\langle x,z\rangle\) of elements of the same color and of depth \(1\) in \(\mathbf{V}^{-}\). Since the set of maximal elements of \(\mathbf{V}^{-}\) is \(M_{1}\cup\cdots\cup M_{t}\) and \(x\) and \(z\) have the same color, we get \(x,z\in M_{i}\) for some \(i\). By definition, the elements of \(M_{i}\) have the same immediate successors in \(\mathbf{V}_{p_{k}}\), whence the pair \(\langle x,z\rangle\) can be identifies by applying a \(\beta\)-reduction \(f_{p_{k}}\) to \(\mathbf{V}_{p_{k}}\). Essentially the same argument allows to construct the series of \(\beta\)-reductions \(f_{p_{k}},\ldots,f_{p_{k}+s_{1}}\), where \(g_{s_{1}+1}\) is the first \(\beta\)-reduction that identifies a pair of distinct elements \(y_{k+2,i}\) and \(y_{k+2,j}\) of depth \(2\). Then \(y_{k+2,i}\) and \(y_{k+2,j}\) have the same immediate successors in \(\mathbf{V}_{s_{1}+1}^{-}\). Since \(y_{k+2,i}\prec y_{k+1,j}\) and \(y_{k+2,j}\prec y_{k+1,i}\) in \(\mathbf{V}_{s_{1}+1}^{-}\), we obtain that in \(\mathbf{V}_{s_{1}+1}^{-}\) the elements \(y_{k+1,i}\) and \(y_{k+1,j}\) must have been identified, respectively, with some \(y_{k+1,h_{i}}\) and \(y_{k+1,h_{j}}\) such that \(i\neq h_{i}\) and \(j\neq h_{j}\). 
Consequently, \(y_{k+1,i}\) and \(y_{k+1,j}\) are also identified, respectively, with \(y_{k+1,h_{i}}\) and \(y_{k+1,h_{j}}\) in \(\mathbf{V}_{p_{k}+s_{1}+1}\). Because of the definition of \(\mathbf{Y}_{n}\), this implies that \(y_{k+2,i}\) and \(y_{k+2,j}\) have the same immediate successors in \(\mathbf{V}_{p_{k}+s_{1}+1}\). Then there exists a \(\beta\)-reduction \(f_{s_{1}+2}\) on \(\mathbf{V}_{s_{1}+1}\) that identifies the pair \(\langle y_{k+2,i},y_{k+2,j}\rangle\). Essentially the same argument allows to construct the series of \(\beta\)-reductions \(f_{p_{k}+s_{1}+1},\ldots,f_{p_{k}+s_{2}}\). Iterating this process, we obtain a sequence of Esakia spaces \(\mathbf{V}_{p_{k}+1},\ldots,\mathbf{V}_{p_{k}+j+1}\) and of \(\beta\)-reductions \(f_{p_{k}},\ldots,f_{p_{k}+j}\). Clearly, the sequences \(\mathbf{V}_{0},\ldots,\mathbf{V}_{p_{k}+j+1}\) and \(f_{0},\ldots,f_{p_{k}+j}\) satisfy condition (i). Together with the definition of the various \(f_{i}\), conditions (1) and (2) imply that these sequences satisfy also (ii) and (iii), as desired. Then we consider case where condition (C.2) holds, i.e., \(m_{1}+\cdots+m_{q}<2^{n}\). Since \[2m_{1}+\cdots+2m_{q}<2^{n+1}\] and the elements of \(\{y_{k+1,0},\ldots,y_{k+1,2^{n+1}-1}\}\) are colored with colors among \(\vec{c}_{1},\ldots,\vec{c}_{q}\), this implies that there exists \(i\leqslant q\) such that \[2m_{i}+1\leqslant|\vec{c}_{i}(\mathbf{V}_{p_{k}})\cap\{y_{k+1,0},\ldots,y_{k+1,2^ {n+1}-1}\}|.\] Recall that \(m_{i}\) is the size of the largest the of elements of \(\vec{c}_{i}(\mathbf{V}_{p_{k}})\cap\{y_{k+1,0},\ldots,y_{k+1,2^{n+1}-1}\}\) with the same immediate successors. Then, in view of the above display, there are at least three distinct elements \(y_{k+1,a_{1}},y_{k+1,a_{2}},y_{k+1,a_{3}}\in\vec{c}_{i}(\mathbf{V}_{p_{k}})\cap\{y _{k+1,0},\ldots,y_{k+1,2^{n+1}-1}\}\) with different sets of immediate successors. By definition of \(\mathbf{Y}_{n}\), this means that the elements \(y_{k,a_{1}},y_{k,a_{2}}\), and \(y_{k,a_{3}}\) are not identified in \(\mathbf{V}_{p_{k}}\). By the construction of \(\mathbf{V}_{p_{k}}\), this guarantees that \(y_{k,a_{1}},y_{k,a_{2}}\), and \(y_{k,a_{3}}\) have different colors \(d_{1},d_{2}\), and \(d_{3}\). Since \(d_{1},d_{2},d_{3}\) are distinct, we can assume, without loss of generality, every color \(\vec{c}\leqslant d_{1},d_{2},d_{3}\) is \(\leqslant\langle 0,0,1,1,1,\ldots 1\rangle\). Notice that every element in \(\{y_{k+1,m}\colon m\leqslant 2^{n+1}-1\}\smallsetminus\{y_{k+1,a_{1}},y_{k+1,a_{2}},y_{k +1,a_{3}}\}\) is below \(y_{k,a_{1}},y_{k,a_{2}},y_{k,a_{3}}\) and, therefore, of color \(\leqslant\langle 0,0,1,1,1,\ldots 1\rangle\). Furthermore, \(y_{k+1,a_{1}}\leqslant y_{k,a_{2}},y_{k,a_{3}}\) and, therefore, the color of \(y_{k+1,a_{1}}\) is \(\leqslant d_{2},d_{3}\). Similarly, \(y_{k+1,a_{2}}\leqslant y_{k,a_{1}}\), whence the color of \(y_{k+1,a_{2}}\) is \(\leqslant d_{1}\). Since \(y_{k+1,a_{1}},y_{k+1,a_{2}},y_{k+1,a_{3}}\) have the same color, their color is \(\leqslant d_{1},d_{2},d_{3}\) and, therefore, \(\leqslant\langle 0,0,1,1,1,\ldots 1\rangle\). We conclude that every element in \(\mathbf{V}_{p_{k}}\) of the form \(y_{i,j}\) with \(i\geqslant k+1\) is of color \(\leqslant\langle 0,0,1,1,1,\ldots 1\rangle\). Bearing this in mind, if \(m_{1}+\cdots+m_{q}\geqslant 2^{n-1}\), we can conclude the proof by repeating the argument detailed in case (C.1) with the only different that \(n\) should be replaced by \(n-1\) in it. 
Then we consider the case where \(m_{1}+\cdots+m_{q}<2^{n-1}\). There is \(i\leqslant q\) such that \[2^{2}m_{i}+1\leqslant|\vec{c}_{i}(\mathbf{V}_{p_{k}})\cap\{y_{k+1,0},\ldots,y_{k+ 1,2^{n+1}-1}\}|.\] As a consequence, there are distinct elements \[y_{k+1,a_{1}},\ldots,y_{k+1,a_{5}}\in\vec{c}_{i}(\mathbf{V}_{p_{k}})\cap\{y_{k+1,0 },\ldots,y_{k+1,2^{n+1}-1}\}\] with different sets of immediate successors. By definition of \(\mathbf{Y}_{n}\), this means that the elements \(y_{k,a_{1}},\ldots,y_{k,a_{5}}\) are not identified in \(\mathbf{V}_{p_{k}}\), whence they have different colors \(d_{1},\ldots,d_{5}\). Repeating the argument detailed above paragraph, we obtain that every element in \(\mathbf{V}_{p_{k}}\) of the form \(y_{i,j}\) with \(i\geqslant k+1\) is of color \(\leqslant\langle 0,0,0,1,1,1,\ldots 1\rangle\). Bearing this in mind, if \(m_{1}+\cdots+m_{q}\geqslant 2^{n-2}\), we can conclude the proof by repeating the argument detailed in case (C.1) with the only different that \(n\) should be replaced by \(n-2\) in it. As \(m_{1}+\cdots+m_{q}\geqslant 1=2^{0}=2^{n-n}\), iterating this argument, eventually we will be able to apply the argument detailed for case (C.1) and, therefore, conclude the proof. **Corollary 5.3**.: _Let \(2\leqslant n\in\mathbb{N}\) and let \(\mathbf{Z}\) be a finite \(E\)-subspace of \(\mathbb{X}_{n}\) with an \(E\)-partition \(R\) such that \(\mathbf{Z}/R\) is \(n\)-colorable. For every \(m\in\mathbb{N}\), if \(c_{m,0},\ldots,c_{m,2^{n+1}-1}\in Z\), then there are \(i<j\) such that \(\langle c_{m,i},c_{m,j}\rangle\in R\)._ Proof.: If there is no \(m\in\mathbb{N}\) such that \(c_{m,0},\ldots,c_{m,2^{n+1}-1}\in Z\), the the statement is vacuously true. Then suppose that there is such an integer and let \(m\) be the largest one (it exists, because \(\mathbf{Z}\) is finite). Then let \(\mathbf{V}\) be the finite E-subspace of \(\mathbf{Y}_{n}\) with universe \[\{y_{k,i}\in Y_{n}:k\leqslant 3m\text{ and }i\leqslant 2^{n+1}-1\}.\] Moreover, let \(\delta\colon\mathbf{V}\to\mathbf{Z}\) be the map defined as follows: 1. \(\delta(y_{0,i})=c_{0,i}\), \(\delta(y_{1,i})=d_{0,i}\), and \(\delta(y_{2,i})=e_{0,i}^{a}\), for every \(2^{n+1}-1\geqslant i\in\mathbb{N}\); 2. \(\delta(y_{3,i})=c_{1,i}\), \(\delta(y_{4,i})=d_{1,i}\), and \(\delta(y_{5,i})=e_{1,i}^{a}\), for every \(2^{n+1}-1\geqslant i\in\mathbb{N}\); 3. etc. 4. \(\delta(y_{3m,i})=c_{m,i}\), for every \(2^{n+1}-1\geqslant i\in\mathbb{N}\). Notice that \(\delta\) is a well-defined order embedding. Now, recall that \(\mathbf{Z}/R\) is \(n\)-colorable and fix an \(n\)-coloring on it. This \(n\)-coloring induces a weak \(n\)-coloring \(c\) on \(\mathbf{Z}\) that colors an element \(z\in Z\) by the color of its equivalence class \(z/R\). Furthermore, \(R\) is the largest E-partition on \(\mathbf{Z}\) that does not identify any pair of elements of \(Z\) colored differently by \(c\). In turn, \(c\) induces a weak \(n\)-coloring on \(\mathbf{V}\) that colors an element \(v\in V\) by the color of \(\delta(v)\) in \(\mathbf{Z}\). The fact that this is indeed a weak \(n\)-coloring on \(\mathbf{V}\) follows from the fact that \(\delta\) is an order embedding. By Lemma 5.2 there is a finite sequence \(\mathbf{V}_{0},\ldots,\mathbf{V}_{k}\) of Esakia spaces such that: 1. \(\mathbf{V}_{0}=\mathbf{V}\) and each \(\mathbf{V}_{i+1}\) is obtained by applying a \(\beta\)-reduction \(f_{i}\) to \(\mathbf{V}_{i}\); 2. 
\(\operatorname{Ker}(f_{k-1}\circ\cdots\circ f_{0})\) does not identify any pair of elements of distinct color of \(\mathbf{V}\); (iii) for every \(3m\geqslant p\in\mathbb{N}\), there are \(i<j\) such that \[\langle y_{p,i},y_{p,j}\rangle\in\operatorname{Ker}(f_{k-1}\circ\cdots\circ f_{0}).\] We shall use them to define a sequence \(\mathbf{Z}_{0},\ldots,\mathbf{Z}_{k}\) of Esakia spaces such that: (iv) \(\mathbf{Z}_{0}=\mathbf{Z}\) and each \(\mathbf{Z}_{i+1}\) is obtained by applying a \(\beta\)-reduction \(g_{i}\) to \(\mathbf{Z}_{i}\); (v) \(\operatorname{Ker}(g_{k-1}\circ\cdots\circ g_{0})\) does not identify any pair of elements of distinct color of \(\mathbf{Z}\); (vi) for every \(m\geqslant p\in\mathbb{N}\), there are \(i<j\) such that \[\langle c_{p,i},c_{p,j}\rangle\in\operatorname{Ker}(g_{k-1}\circ\cdots\circ g_{0}).\] To this end, recall that \(f_{0}\colon\mathbf{V}\to\mathbf{V}_{1}\) is a \(\beta\)-reduction that identifies two elements \(x\) and \(z\) of the same color. Then \(\delta(x)\) and \(\delta(z)\) are elements of the same color. Moreover, since \(x\) and \(z\) have the same successors, they must be maximal, whence so are \(\delta(x)\) and \(\delta(z)\). Consequently, we can identify \(\delta(x)\) and \(\delta(z)\) by means of a \(\beta\)-reduction \(g_{0}\colon\mathbf{Z}\to\mathbf{Z}_{1}\). We repeat this argument, transforming each \(f_{i}\) into a \(g_{i}\), until we find an \(f_{q}\) that identifies two nonmaximal elements \(x\) and \(z\). Then \[x=f_{q-1}\circ\cdots\circ f_{0}(y_{p,i})\text{ and }z=f_{q-1}\circ\cdots\circ f_{0}(y_{p,j})\] for some \(p,i,j\in\mathbb{N}\) such that \(i\neq j\) and \(p\geqslant 1\). Since \(y_{p,i}\) and \(y_{p,j}\) are of the same color, so are \(\delta(y_{p,i})\) and \(\delta(y_{p,j})\). Furthermore, since \(x\) and \(z\) have the same immediate successors, there are \(h_{i}\neq i\) and \(h_{j}\neq j\) such that \[f_{q-1}\circ\cdots\circ f_{0}(y_{p-1,i})=f_{q-1}\circ\cdots\circ f_{0}(y_{p-1,h_{i}})\] \[f_{q-1}\circ\cdots\circ f_{0}(y_{p-1,j})=f_{q-1}\circ\cdots\circ f_{0}(y_{p-1,h_{j}}).\] By definition of \(g_{0},\ldots,g_{q-1}\), we get \[g_{q-1}\circ\cdots\circ g_{0}(\delta(y_{p-1,i}))=g_{q-1}\circ\cdots\circ g_{0}(\delta(y_{p-1,h_{i}})) \tag{10}\] \[g_{q-1}\circ\cdots\circ g_{0}(\delta(y_{p-1,j}))=g_{q-1}\circ\cdots\circ g_{0}(\delta(y_{p-1,h_{j}})). \tag{11}\] Now, there exists \(\hat{p}\in\mathbb{N}\) such that one of the following conditions holds: (C.1) \(\delta(y_{p,i})=c_{\hat{p},i}\), \(\delta(y_{p,j})=c_{\hat{p},j}\), \(\delta(y_{p-1,i})=e_{\hat{p}-1,i}^{a}\), and \(\delta(y_{p-1,j})=e_{\hat{p}-1,j}^{a}\); (C.2) \(\delta(y_{p,i})=d_{\hat{p},i}\), \(\delta(y_{p,j})=d_{\hat{p},j}\), \(\delta(y_{p-1,i})=c_{\hat{p},i}\), and \(\delta(y_{p-1,j})=c_{\hat{p},j}\); (C.3) \(\delta(y_{p,i})=e_{\hat{p},i}^{a}\), \(\delta(y_{p,j})=e_{\hat{p},j}^{a}\), \(\delta(y_{p-1,i})=d_{\hat{p},i}\), and \(\delta(y_{p-1,j})=d_{\hat{p},j}\). We detail only case (C.1), as the other ones are analogous. By (10) the elements \(e_{\hat{p}-1,i}\) and \(e_{\hat{p}-1,h_{i}}\) are identified in \(\mathbf{Z}_{q}\) (formally, they are identified by the composition \(g_{q-1}\circ\cdots\circ g_{0}\)). Condition (11) yields the same conclusion for \(e_{\hat{p}-1,j}\) and \(e_{\hat{p}-1,h_{j}}\). Since \(h_{i}\neq i\) and \(h_{j}\neq j\), in view of the definition of \(\mathbb{X}_{n}\), this implies that the images of \(c_{\hat{p},i}\) and \(c_{\hat{p},j}\) under \(g_{q-1}\circ\cdots\circ g_{0}\) have the same immediate successors in \(\mathbf{Z}_{q}\). 
Consequently, we can identify them by means of a \(\beta\)-reduction \(g_{q}\colon\mathbf{Z}_{q}\to\mathbf{Z}_{q+1}\). This concludes the construction of \(g_{q}\). Repeating this argument, we obtain sequences \(\mathbf{Z}_{0},\ldots,\mathbf{Z}_{k}\) and \(g_{0},\ldots,g_{k-1}\) of Esakia spaces and \(\beta\)-reductions. By construction and (iii) they satisfy conditions (iv), (v), and (vi), as desired. Finally, take \[S\coloneqq\operatorname{Ker}(g_{k-1}\circ\cdots\circ g_{0}).\] As \(g_{0},\ldots,g_{k-1}\) is a sequence of \(\beta\)-reductions, \(S\) is an E-partition on \(\mathbf{Z}\). Furthermore, by (v), \(S\) does not identify any pair of elements of distinct color. Because \(R\) is the largest such E-partition of \(\mathbf{Z}\), this yields \(S\subseteq R\). From (vi) we conclude that for every \(p\in\mathbb{N}\), if \(c_{p,0},\ldots,c_{p,2^{n+1}-1}\in Z\), then there are \(i<j\) such that \(\langle c_{p,i},c_{p,j}\rangle\in R\), as desired. ## 6. The main result Our aim is to prove the following: **Theorem 6.1**.: _For every \(n\in\mathbb{N}\) there exists a variety of Heyting algebras whose \(n\)-generated free algebra is finite, while its \((n+1)\)-generated free algebra is infinite._ The next corollary is an immediate consequence of the theorem: **Corollary 6.2**.: _For every \(n\in\mathbb{N}\) there exists a nonlocally finite variety of Heyting algebras whose \(n\)-generated free algebra is finite._ Theorem 6.1 is obvious for the case where \(n=0\), because the variety of all Heyting algebras is nonlocally finite, while its free zero-generated Heyting algebra is finite (being the two-element Boolean algebra). For \(n=1\), it is well known that the variety \(\mathsf{KC}\) of Heyting algebras axiomatized by the _weak excluded middle_\(\neg x\vee\neg\neg x\approx 1\) is nonlocally finite, while its free one-generated algebra is finite. To prove this, it suffices to observe that every subdirectly irreducible one-generated member of \(\mathsf{KC}\) has cardinality \(\leqslant 3\), while the Rieger-Nishimura lattice plus a new bottom element is a two-generated infinite member of \(\mathsf{KC}\). Accordingly, to establish Theorem 6.1, it suffices to exhibit, for every integer \(n\geqslant 2\), a variety of Heyting algebras whose \(n\)-generated free algebra is finite, while its \((n+1)\)-generated free algebra is infinite. In view of Corollary 4.9, the \((n+1)\)-generated free algebra of \(\mathbb{V}(\mathbb{X}_{n}^{*})\) is infinite. Therefore, it suffices to prove the following: **Proposition 6.3**.: _For every integer \(n\geqslant 2\), the \(n\)-generated free algebra of \(\mathbb{V}(\mathbb{X}_{n}^{*})\) is finite._ Proof.: Since the type of Heyting algebras is finite, it suffices to show that there exists a natural number \(m\in\mathbb{N}\) such that the \(n\)-generated subalgebras of \(\mathbb{X}_{n}^{*}\) have cardinality \(\leqslant m\). Suppose the contrary, with a view to contradiction. In view of the correspondence between E-partitions and subalgebras and Theorem 3.3, there is no natural number \(m\in\mathbb{N}\) such that \(|X_{n}/R|\leqslant m\), for every E-partition \(R\) on \(\mathbb{X}_{n}\) such that \(\mathbb{X}_{n}/R\) is \(n\)-colorable. 
We claim that there are \(k\in\mathbb{N}\) and a sequence \(\{\mathbf{Z}_{m}:m\in\mathbb{N}\}\) of finite E-subspaces of \(\mathbb{X}_{n}\), each with an E-partition \(R_{m}\) such that \(\mathbf{Z}_{m}/R_{m}\) is \(n\)-colorable, \[|Z_{m}/R_{m}\smallsetminus\overline{0}(\mathbf{Z}_{m}/R_{m})|\leqslant k, \tag{12}\] \[|Z_{1}/R_{1}|<|Z_{2}/R_{2}|<|Z_{3}/R_{3}|<\cdots \tag{13}\] We begin by proving that there is a sequence \(\{\mathbf{Y}_{m}:m\in\mathbb{N}\}\) of finite E-subspaces of \(\mathbb{X}_{n}\), each with an E-partition \(S_{m}\) such that \(\mathbf{Y}_{m}/S_{m}\) is \(n\)-colorable and \[|Y_{1}/S_{1}|<|Y_{2}/S_{2}|<|Y_{3}/S_{3}|<\cdots \tag{14}\] If there is an E-partition \(R\) on \(\mathbb{X}_{n}\) such that \(\mathbb{X}_{n}/R\) is infinite and \(n\)-colorable, then (because of the definition of \(\mathbb{X}_{n}\)) there is a sequence \[Y_{1}\subsetneq Y_{2}\subsetneq\cdots\subsetneq Y_{m}\subsetneq\cdots\] of finite upsets of \(\mathbb{X}_{n}\) such that \[|Y_{1}/R|<|Y_{2}/R|<|Y_{3}/R|<\cdots\] Each \(Y_{m}\) is finite and, therefore, closed. Therefore, \(Y_{m}\) is the universe of an E-subspace \(\mathbf{Y}_{m}\) of \(\mathbb{X}_{n}\). Furthermore, \(S_{m}\coloneqq R\cap(Y_{m}\times Y_{m})\) is an E-partition on \(\mathbf{Y}_{m}\) such that \(\mathbf{Y}_{m}/S_{m}\) is \(n\)-colorable and (14) holds. Then we consider the case where there is no E-partition \(R\) on \(\mathbb{X}_{n}\) such that \(\mathbb{X}_{n}/R\) is infinite and \(n\)-colorable. From the assumption that there is no natural bound on the size of \(\mathbb{X}_{n}/R\), provided that it is \(n\)-colorable and \(R\) is an E-partition, it follows that there is a sequence of E-partitions \(\{R_{m}:m\in\mathbb{N}\}\) on \(\mathbb{X}_{n}\) such that each \(\mathbb{X}_{n}/R_{m}\) is finite and \(n\)-colorable and \[|X_{n}/R_{1}|<|X_{n}/R_{2}|<|X_{n}/R_{3}|<\cdots\] Given \(m\in\mathbb{N}\), let \(f\colon\mathbb{X}_{n}\to\mathbb{X}_{n}/R_{m}\) be the natural Esakia surjection. Since \(\mathbb{X}_{n}/R_{m}\) is finite, we can enumerate its minimal elements as \(x_{1}/R_{m},\ldots,x_{k}/R_{m}\). Furthermore, as the topology of \(\mathbb{X}_{n}/R_{m}\) is discrete, the singletons \(\{x_{i}/R_{m}\}\) are open in \(\mathbb{X}_{n}/R_{m}\). Therefore, \(f^{-1}[\{x_{i}/R_{m}\}]\) is open in \(\mathbb{X}_{n}\). Because of the definition of the topology of \(\mathbb{X}_{n}\), this implies that \(f^{-1}[\{x_{i}/R_{m}\}]\) contains an element \(z_{i}\) other than \(\bot\). Since, in Esakia spaces, principal upsets are closed, \[\mathbf{Y}_{m}\coloneqq\uparrow z_{1}\cup\cdots\cup\uparrow z_{k}\] is an E-subspace of \(\mathbb{X}_{n}\). Moreover, \(Y_{m}\) is finite, because each \(z_{i}\) is different from \(\bot\). Lastly, the restriction of \(f\) to \(\mathbf{Y}_{m}\) is still a surjective Esakia morphism from \(\mathbf{Y}_{m}\) to \(\mathbb{X}_{n}/R_{m}\). Therefore, \(S_{m}\coloneqq R_{m}\cap(Y_{m}\times Y_{m})\) is an E-partition of \(\mathbf{Y}_{m}\) such that \(\mathbf{Y}_{m}/S_{m}\) is \(n\)-colorable and (14) holds. Recall from Lemma 4.6 that the size of antichains in the various \(\mathbf{Y}_{m}\) is bounded by \(2^{n+2}\). 
Thus, we can apply Lemma 3.5 to the family \(\{\mathbf{Y}_{m}/S_{m}:m\in\mathbb{N}\}\) of \(n\)-colorable spaces, obtaining a natural number \(k\) and a family \(\{\mathbf{W}_{m}:m\in\mathbb{N}\}\) of \(n\)-colorable E-subspaces of spaces in the family \(\{\mathbf{Y}_{m}/S_{m}:m\in\mathbb{N}\}\) such that \(|W_{m}\smallsetminus\vec{0}(\mathbf{W}_{m})|\leqslant k\), for all \(m\in\mathbb{N}\), and \[|W_{1}|<|W_{2}|<\cdots<|W_{m}|<\cdots\] For every \(m\in\mathbb{N}\), let \(Z_{m}\) be the union of the equivalence classes in \(W_{m}\). Notice that \(Z_{m}\) is a finite upset of \(\mathbb{X}_{n}\) and, therefore, the universe of an E-subspace \(\mathbf{Z}_{m}\) of \(\mathbb{X}_{n}\). Furthermore, \(R_{m}\coloneqq S_{m}\cap(Z_{m}\times Z_{m})\) is an E-partition on \(\mathbf{Z}_{m}\) such that \(\mathbf{Z}_{m}/R_{m}\cong\mathbf{W}_{m}\). This concludes the proof of the claim. Now, recall from the definition of \(\langle U_{n};\prec\rangle\) that \(T_{n}=\{s_{0},\ldots,s_{t}\}\). To conclude the proof, it suffices to show that for every \(m\in\mathbb{N}\), \[|Z_{m}/R_{m}|\leqslant k+1+(3+t)(2^{n+3}+2). \tag{15}\] This is because the above display is in contradiction with (13). To this end, fix an arbitrary \(m\in\mathbb{N}\) and an \(n\)-coloring on \(\mathbf{Z}_{m}/R_{m}\) which satisfies (12). For every \(p\in\mathbb{N}\), let \(V_{p}\) be the subset of \(U_{n}\) consisting of all elements of the form \(a_{p},b_{p},c_{p,i},d_{p,i},e^{a}_{p,i},e^{b}_{p,i}\), where \(i\) ranges over all natural numbers. Clearly, \[U_{n}=\bigcup_{p\in\mathbb{N}}V_{p}.\] Now, if there is no \(q\in\mathbb{N}\) such that \[\Big{(}(Z_{m}\cap V_{q})/R_{m}\Big{)}\cap\overline{0}(\mathbf{Z}_{m}/R_{m})\neq \emptyset, \tag{16}\] then all the elements of \(\mathbf{Z}_{m}/R_{m}\) are colored with colors other than \(\overline{0}\). Hence, by (12), \(|Z_{m}/R_{m}|\leqslant k\) and we are done. Then, suppose that (16) holds for some \(q\in\mathbb{N}\). We can further assume that \(q\) is the least natural number validating it. Bearing this in mind, it is clear that \[(\bigcup_{q-1\geqslant p\in\mathbb{N}}V_{p}\cap Z_{m})/R_{m}\cap\overline{0} (\mathbf{Z}_{m}/R_{m})=\emptyset,\] hence it now follows from (12) that \[|(\bigcup_{q-1\geqslant p\in\mathbb{N}}V_{p}\cap Z_{m})/R_{m}|\leqslant k. \tag{17}\] Since each \(V_{i}\) has cardinality \(2^{n+3}+2\), from (17) it follows \[|(\bigcup_{q+t+2\geqslant p\in\mathbb{N}}V_{p}\cap Z_{m})/R_{m}| \leqslant|(\bigcup_{q-1\geqslant p\in\mathbb{N}}V_{p}\cap Z_{m})/ R_{m}|+|V_{q}|+\cdots+|V_{q+t+2}|\] \[\leqslant k+|V_{q}|+\cdots+|V_{q+t+2}|\] \[=k+(3+t)(2^{n+3}+2).\] Therefore, in order to prove (15), it suffices to show that \[|(\bigcup_{q+t+3\leqslant p\in\mathbb{N}}V_{p}\cap Z_{m})/R_{m}|\leqslant 1. \tag{18}\] To this end, recall that at least one element of \((Z_{m}\cap V_{q})/R_{m}\) is colored by \(\overline{0}\). By the construction of \(\mathbb{X}_{n}\), every element of the form \(d_{q+1,i}\) is below every element of \(V_{q}\). Consequently, every element of the form \(d_{q+1,i}/R_{m}\) has color \(\overline{0}\) in \(\mathbf{Z}_{m}/R_{m}\). Furthermore, every element in \(V_{p}\), for \(p\geqslant q+2\), is below every element of the form \(d_{q+1,i}\) in \(\mathbb{X}_{n}\), whence \[(\bigcup_{q+2\leqslant p\in\mathbb{N}}V_{p}\cap Z_{m})/R_{m}\subseteq\overline {0}(\mathbf{Z}_{m}/R_{m}). \tag{19}\] We have two cases: either \(V_{q+t+2}\subseteq Z_{m}\) or \(V_{q+t+2}\nsubseteq Z_{m}\). First suppose that \(V_{q+t+2}\nsubseteq Z_{m}\). 
Since \(Z_{m}\) is an upset of \(\mathbb{X}_{n}\) and \[V_{q+t+2}\subseteq\uparrow\{c_{q+t+3,i},c_{q+t+3,j}\},\text{ for every }i<j,\] we conclude that \(Z_{m}\) contains at most one element of the form \(c_{q+t+3,i}\). As every element in \(V_{q+t+3}\) is either of the form \(c_{q+t+3,i}\) or is below at least two such elements, we conclude that \(Z_{m}\cap V_{q+t+3}\) is either empty or a singleton of the form \(\{c_{q+t+3,j}\}\). Together with the fact that \(Z_{m}\) is an upset of \(\mathbb{X}_{n}\) and the definition of \(\mathbb{X}_{n}\), this implies \[|\bigcup_{q+t+3\leqslant p\in\mathbb{N}}V_{p}\cap Z_{m}|\leqslant 1\] and, therefore, (18). Then we consider the case where \(V_{q+t+2}\subseteq Z_{m}\). Since \(Z_{m}\) is an upset of \(\mathbb{X}_{n}\), this implies \(V_{p}\subseteq Z_{m}\) for every \(q+t+2\geqslant p\in\mathbb{N}\). In particular, \(V_{q+2}\subseteq Z_{m}\). Since \(Z_{m}/R_{m}\) is \(n\)-colorable, we can apply Corollary 5.3 obtaining that there are \(i<j\) such that \(\langle c_{q+2,i},c_{q+2,j}\rangle\in R_{m}\). We shall prove that \[\langle c_{p,i},c_{p,j}\rangle\in R_{m},\text{ for every }q+2\leqslant p \leqslant q+t+2. \tag{20}\] The base case, where \(p=q+2\), was already established. Then suppose the above display holds for some \(q+2\leqslant p<q+2+t\). Recall from (19) that the element \(c_{p,i}/R_{m}=c_{p,j}/R_{m}\) is colored by \(\overline{0}\) in \(\mathbf{Z}_{m}/R_{m}\). Since \(\langle c_{p,i},c_{p,j}\rangle\in R_{m}\), the elements \(d_{p,i}/R_{m}\) and \(d_{p,j}/R_{m}\) have the same immediate successors in \(\mathbf{Z}_{m}/R_{m}\). Therefore, since \(\mathbf{Z}_{m}/R_{m}\) is finite, there is a \(\beta\)-reduction that identifies only \(d_{p,i}/R_{m}\) and \(d_{p,j}/R_{m}\). As these elements are colored by \(\overline{0}\), by (19), this \(\beta\)-reduction does not identify any pair of elements of distinct color. Together with the fact that \(\mathbf{Z}_{m}/R_{m}\) is \(n\)-colorable, this implies that \(\langle d_{p,i},d_{p,j}\rangle\in R_{m}\). Repeating this argument, one shows that also \(\langle e_{p,i}^{a},e_{p,j}^{a}\rangle\in R_{m}\) and, finally, \(\langle c_{p+1,i},c_{p+1,j}\rangle\in R_{m}\), thus establishing (20). Now, recall that \(T_{n}=\{s_{0},\ldots,s_{t}\}\). Choose an element \(s_{g}\in T_{n}\) of the form \(\langle h,i,j\rangle\) (this is possible, because \(i\neq j\)). Then let \(v\) be the unique integer in the interval \([q+2,q+2+t]\) such that \(v\equiv g\mod t+1\). Then 1. the immediate successors of \(a_{v}\) in \(\mathbb{X}_{n}\) are \(c_{v,h}\) and \(c_{v,i}\); 2. the immediate successors of \(b_{v}\) in \(\mathbb{X}_{n}\) are \(c_{v,h}\) and \(c_{v,j}\). Since \(\langle c_{v,i},c_{v,j}\rangle\in R_{m}\), the elements \(a_{v}/R_{m}\) and \(b_{v}/R_{m}\) have the same immediate successors in \(\mathbf{Z}_{m}/R_{m}\). Therefore, since \(\mathbf{Z}_{m}/R_{m}\) is finite, there is a \(\beta\)-reduction that identifies only \(a_{v}/R_{m}\) and \(b_{v}/R_{m}\). As these elements are colored by \(\overline{0}\), by (19), this \(\beta\)-reduction does not identify any pair of elements of distinct color. Together with the fact that \(\mathbf{Z}_{m}/R_{m}\) is \(n\)-colorable, this implies that \(\langle a_{v},b_{v}\rangle\in R_{m}\). Because of this, the elements of the form \(e_{v,p}^{a}/R_{m}\) and \(e_{v,p}^{b}/R_{m}\) have the same immediate successors in \(\mathbf{Z}_{m}/R_{m}\). Again, there is a \(\beta\)-reduction that identifies only \(e_{v,p}^{a}/R_{m}\) and \(e_{v,p}^{b}/R_{m}\). 
As these elements are colored by \(\overline{0}\), by (19), this \(\beta\)-reduction does not identify any pair of elements of distinct color. Together with the fact that \(\mathbf{Z}_{m}/R_{m}\) is \(n\)-colorable, this implies that \(\langle e_{v,p}^{a},e_{v,p}^{b}\rangle\in R_{m}\). Thus, \[\langle e_{v,p}^{a},e_{v,p}^{b}\rangle\in R_{m},\text{ for all }2^{n+1}-1\geqslant p\in\mathbb{N}.\] Since every element of the form \(c_{v+1,u}\) is below every element of the form \(e_{v,p}^{b}\) in \(\mathbb{X}_{n}\), the above display guarantees that every element of the form \(c_{v+1,u}/R_{m}\) is below every element of the form \(e_{v,p}^{a}/R_{m}=e_{v,p}^{b}/R_{m}\). As a consequence, the elements of the following sequence have the same immediate successors in \(\mathbf{Z}_{m}/R_{m}\): \[c_{v+1,0}/R_{m},\ldots,c_{v+1,2^{n+1}-1}/R_{m}.\] Notice, however, that the above display involves a minor abuse of notation: \(Z_{m}\) need not include the whole set \(\{c_{v+1,0},\ldots,c_{v+1,2^{n+1}-1}\}\). Repeating once more the argument involving \(\beta\)-reductions, we obtain that \[c_{v+1,0}/R_{m}=\cdots=c_{v+1,2^{n+1}-1}/R_{m}.\] We fix the notation \(x\coloneqq c_{v+1,0}/R_{m}\). In view of the above display, either \(a_{v+1}/R_{m}=x\), or \(x\) is the only immediate successor of \(a_{v+1}/R_{m}\). In the latter case, we could perform a proper \(\alpha\)-reduction on \(\mathbf{Z}_{m}/R_{m}\), identifying the elements \(a_{v+1}/R_{m}\) and \(x\) that, moreover, have the same color by (19). But this would contradict the fact that \(\mathbf{Z}_{m}/R_{m}\) is \(n\)-colorable. We therefore conclude that \(a_{v+1}/R_{m}=x\). A similar argument shows that \(b_{v+1}/R_{m}=x\) and \(d_{v+1,u}/R_{m}=x\) for all \(2^{n+1}-1\geqslant u\in\mathbb{N}\). Since \[a_{v+1}/R_{m}=b_{v+1}/R_{m}=d_{v+1,u}/R_{m}=x,\,\text{for all}\,2^{n+1}-1\geqslant u\in\mathbb{N},\] either every element of the form \(e_{v+1,u}^{a}/R_{m}\) is equal to \(x\), or \(x\) is the only immediate successor of \(e_{v+1,u}^{a}/R_{m}\). Repeating the same argument, involving \(\alpha\)-reductions, we obtain \(e_{v+1,u}^{a}/R_{m}=x\). Similarly, every element of the form \(e_{v+1,u}^{b}/R_{m}\) is equal to \(x\). Thus, we conclude that \((V_{v+1}\cap Z_{m})/R_{m}\subseteq\{x\}\). Iterating this argument, we obtain \[(\bigcup_{v+1\leqslant p\in\mathbb{N}}V_{p}\cap Z_{m})/R_{m}\subseteq\{x\}\] and, therefore, \[|(\bigcup_{v+1\leqslant p\in\mathbb{N}}V_{p}\cap Z_{m})/R_{m}|\leqslant 1.\] Since \(v\leqslant q+t+2\), this implies (18), thus concluding the proof. **Acknowledgements.** The first author was supported by the research grant PREDOCS-UB 2021 funded by the University of Barcelona. The second author was supported by the _Beatriz Galindo_ grant BEAGAL18/00040 funded by the Ministry of Science and Innovation of Spain. All the authors were supported by the MSCA-RISE-Marie Sklodowska-Curie Research and Innovation Staff Exchange (RISE) project MOSAIC 101007627 funded by Horizon 2020 of the European Union.
2307.13017
The fate of supersymmetry in quantum field theories
We analyze the significance of supersymmetry in two topological models and the standard model (SM). We conclude that the two topological field theory models favor hidden supersymmetry. The SM superpartners, instead, have not been found.
Risto Raitio
2023-07-24T17:05:21Z
http://arxiv.org/abs/2307.13017v2
# The fate of supersymmetry in topological quantum field theories ###### Abstract We analyze the role of supersymmetry in nature. We extend our previous model of particles and cosmology beyond its critical energy scale at about \(10^{16}\) GeV. We assume that there are three main phases in the evolving universe. The first is a topological gravity phase, the second a brief Chern-Simons phase, and the third the standard model (SM) gauge phase. In our scenario supersymmetry (SUSY) appears in all phases, but in the third phase it is confined inside topological preons, which form quarks and leptons. The confined SUSY (cSUSY) is supported by the lack of observation of squarks and sleptons. cSUSY also provides a natural mechanism for matter-antimatter asymmetry. The possible relationship of this tentative scenario to quantum gravity and the role of UV-completeness are discussed. _Keywords:_ Topological field theory, Supersymmetry, Chern-Simons model, Baryon asymmetry ###### Contents * 1 Introduction * 2 Three phases of the universe * 3 Topological models in phase I * 3.1 BRST formalism * 3.2 Witten's topological gravity * 3.3 Fang and Gu's topological gravity * 3.4 Flatness * 4 Topological early phases versus inflation * 5 Chern-Simons model in phase O * 6 Baryon asymmetry in phase II * 7 Conclusions * A Chernon-particle correspondence ## 1 Introduction String theory has been under active study for about 50 years. Its beauty has not so far been realized in phenomenological success. In spite of that, stringy features like dualities have been introduced with success in field theoretic model building, together with topological concepts. The alleged UV-completeness of string theory is another motivator for active research at present. In this note we use supersymmetry, T-duality and topological models, and aim towards UV-completeness. With these properties we extend our previous scenario of the universe beyond the critical scale \(\Lambda_{cr}\sim 10^{16-13}\) GeV up to the Planck scale. Matter in the present scenario goes through two phase transitions between the Planck-time universe and the present baryon-asymmetric one. The mechanisms of these phase transitions are defined. We propose our model of topological, supersymmetric matter as an attempt to look for the answer to the ontic question. The article is organized as follows. In section 2 we consider some general features, like the three different phases of the universe, the two phase transitions and the motivation for preons (called here chernons). To indicate the nature of the problem of phase I matter, two models of topological gravity are briefly reviewed in section 3, namely those of Witten, and Fang and Gu. A comparison of the present scenario with the standard inflationary scenario is made in section 4. In section 5 the transition from the topological phase is discussed, based on Chern-Simons (CS) matter creating mass, and a metric spacetime, via the Higgs mechanism. CS matter finally confines itself into ordinary visible and dark matter. A process for creating baryon asymmetry in the universe is recapped from our previous article in section 6. Conclusions and outlook are given in section 7, with a philosophical paragraph about the relevance of UV-completion. An appendix with table 3 of the CS particle - SM particle correspondence is provided. - The nature of this note is mainly to collect into a single coherent form physical ideas from different articles (including our own). 
## 2 Three phases of the universe The common view is that as we go far enough back in time in the contracting universe we will reach a point, defined here as time t = t\({}_{0}\), or just t = 0 (see figure 1), where the degrees of freedom that our universe is made of may disappear and get replaced by other light degrees of freedom [1]. This kind of idea appears also in [2, 3], where at energy scales \(\Lambda_{cr}>10^{16}\) GeV new degrees of freedom replace standard model particles, before phase I's own objects become dominant. Figure 1: Transition from phase I to II in our universe proceeds by the conversion of matter made up from the degrees of freedom of a dual frame (blue) to those of our T-dual frame (red). Time t = 0 corresponds to energy scale \(\Lambda_{cr}\). The small green area around \(t\sim 0\) is the new second topological phase O, to be discussed in section 5. Figures 1, 2, and 4 are from [1] with permission. We assume in this note that the form of matter we observe depends upon the environment, basically on the temperature, or energy scale. E.g. nuclei consist of nucleons in ordinary laboratory conditions. When bombarded with enough energy the nucleons get unbound and free. On the next level quarks are liberated inside nucleons. We go one step further by assuming that quarks, and leptons, consist of preons above \(\Lambda_{cr}\), before phase I enters. There may be some need to disclose arguments for preons in general. To implement our personal view of supersymmetry, the standard model [2] and baryon asymmetry [3], we split quarks and leptons into three pointlike constituents, called in this note chernons (a synonym for preon1 or superon). Of the many preon models in the literature, two are similar to ours. One of them was a gauge theory proposed by Harari [5], and simultaneously by Shupe [6]. The model of Finkelstein [7] was developed on a different basis, namely the quantum group SLq(2) and knot theory, in the form of plane projections of, e.g., trefoils as in figure 3, where the three outer loops "visualize" preons. This model turned out to agree with the model of Harari and Shupe [5, 6]. The major difference between the above models and our model [8, 2] is that ours has its basis in unbroken global supersymmetry, where the superpartners are in the model from the start, not as new sparticles to be found in the future. Footnote 1: The term was coined by Pati and Salam in 1974 [4]. The era \(t<0\), or phase I, is a topological phase between, say, \(\Lambda_{cr}\) and \(l_{\rm Pl}\). In the T-dual [9] second phase \(t>0\), or \(E<\Lambda_{cr}\), there exist the standard model matter, dark matter and dark energy [1]. We make a proposal about what may happen in the transition between the two phases I and II. What ends the topological phase I, and what is the destiny of supersymmetry in phase II? Briefly, we propose that between the two phases I and II there is a brief interpolating phase O for transferring supersymmetry to phase II, though in confined form, in close analogy with SU(3) color confinement. Our assumption, or rather prediction, is that no SM sparticles, like squarks and sleptons, exist in nature because SUSY is in a confined state. At present, there is no experimental evidence for SM sparticles, after a long search. We start with SUSY in phase I and want to keep it towards phase II, where it is a priori not guaranteed to exist. 
Then a SUSY conserving intermediate process in phase O is needed at \(t\sim 0\), when both derivatives of \(\rho_{m}^{I}\) and \(\rho_{m}^{II}\) are non-zero; see the green area in figure 1. In this process topological objects (called chernons in section 5) take over the supersymmetry of phase I. In the next transition, phase O to phase II, chernons form composite states, i.e. quarks and leptons, by a Chern-Simons interaction. SUSY now suffers confinement inside quarks and leptons, in the form of supersymmetric chernons. This resembles QCD color being hidden inside hadrons, or a plasma cooling down to atomic matter. Physics in phase II after reheating is well described by a thermal distribution of SM matter (and the dark components). The notion of time is common to both phases of the universe. This leads to energy being common to both phases. In addition there are weak long range correlations that originate from phase I modes that are non-local in phase II. This yields proper initial conditions for Friedmann-Lemaitre-Robertson-Walker (FLRW) metric cosmology. The horizon problem is solved simply because locality, relevant in our universe in phase II, is not natural in phase I. The light modes of phase I are non-local as viewed from phase II. Known examples are the winding modes of string gas cosmology [10]. Fluctuations visible in phase II are not part of the degrees of freedom of phase I. How does phase I look from the perspective of phase II [1]? In phase I there should not be any position dependent observables. Let us assume the state in phase I is given by a state \(|I\rangle\). We would expect the \(n\)-point correlations of physical observables in this state, \[\langle I|\mathcal{O}^{i_{1}}(x_{1})\ldots\mathcal{O}^{i_{n}}(x_{n})|I\rangle=A^{i_{1},\ldots,i_{n}},\] to be position independent, so that all \(\partial_{j}A^{i_{1},\ldots,i_{n}}=0\). This is a key feature of a topological quantum field theory. We are thus led to view phase I as a topological phase from the perspective of frame II. It is curious that the reverse is also true: phase II can be viewed from the perspective of frame I as a topological theory [1]. This is illustrated in Figure 2. Figure 2: The degrees of freedom making up phase I are absent in a low energy description of phase II. Therefore the former appears topological from the point of view of the latter. This relation is also true with the roles of phase I and II interchanged. ## 3 Topological models in phase I ### 3.1 BRST formalism In topological field theories observables must be a measure of global features. Consequently, there are no propagating signals. This property is achieved in the Becchi-Rouet-Stora-Tyutin (BRST) [11, 12] formalism by the presence of a Grassmann odd charge operator \(Q\). This operator \(Q\) is nilpotent, hermitian, and it commutes with the Hamiltonian, \([H,Q]=0\). The action of the charge operator on fields \(\Phi\) is given by \[\delta\Phi=i\epsilon[Q,\Phi] \tag{3.1}\] where \(\epsilon\) is a Grassmann parameter, a supernumber that anticommutes with all other Grassmann variables. \(Q\) is also the Noether charge for the BRST symmetry. The action (see (3.4)) combines bosonic and fermionic fields in a way similar to the pairing in supersymmetric theories. Physical states in the Hilbert space are \(Q\)-cohomology classes: these states are \(Q\)-closed (i.e. \(|\psi\rangle\) satisfying \(Q|\psi\rangle=0\)) modulo \(Q\)-exact (i.e. \(|\psi\rangle\) such that \(|\psi\rangle=Q|\chi\rangle\) for some \(|\chi\rangle\)). 
This latter requirement implies that the fermionic partners of bosonic fields are in fact ghosts, so that all degrees of freedom cancel in the BRST sense. If we assume that the vacuum is \(Q\)-invariant, then \(Q\)-exact operators have a vanishing expectation value \(\langle[Q,{\cal O}]\rangle=0\). In topological field theories, the energy-momentum tensor (given by the variation of the action with respect to the metric) is \(Q\)-exact, i.e. \(T_{\alpha\beta}=\{Q,\lambda_{\alpha\beta}\}\) for some \(\lambda\). This implies that the partition function is invariant under metric variations, \[\delta Z=\int{\cal D}\Phi\,e^{-S}\left(-\delta S\right)=-\int{\cal D}\Phi\,e^{-S}\{Q,\int\sqrt{g}\,\delta g^{\alpha\beta}\lambda_{\alpha\beta}\}=-\langle\{Q,\int\sqrt{g}\,\delta g^{\alpha\beta}\lambda_{\alpha\beta}\}\rangle=0,\] provided the integration measure is BRST invariant. Another way to illuminate background independence in a topological theory in general is based on calculating Wilson loops in 3D Chern-Simons (CS) theory [13].2 Wilson loops give a natural class of gauge invariant observables that do not require a choice of metric. Let C be an oriented closed curve in M. Intrinsically C is simply a circle, but the topological classification of embeddings of a circle in M may be complicated, as we can imagine in figure 3. Let R be an irreducible representation of G. One then defines the Wilson loop \(W_{R}(C)\) to be the following functional of the connection \(A_{i}\). One computes the holonomy of \(A_{i}\) around C, getting an element of G that is well-defined up to conjugacy, and then one takes the trace of this element in the representation R. Thus, the definition is \[W_{R}(C)=\mathrm{Tr}_{R}\,\mathrm{P}\exp\int_{C}A_{i}\,dx^{i} \tag{3.2}\] Footnote 2: CS theory is discussed later in section 5. The crucial property of this definition is that there is no need to introduce a metric, so general covariance is maintained. Consider the partition function \(Z\), defined as \[Z=\int{\cal D}{\cal A}\exp(i{\cal L})\prod_{i}W_{R_{i}}(C_{i}) \tag{3.3}\] where \({\cal D}{\cal A}\) represents the Feynman integral over all gauge orbits, the \(C_{i}\) are non-intersecting knots, and \(R_{i}\) is the representation assigned to \(C_{i}\). Figure 3: A trefoil knot in 3D space. The curve has orientation, clockwise or anticlockwise. The partition function Z is thus automatically independent of any background metric. However, there is still a question of whether the theory contains local excitations. ### 3.2 Witten's topological gravity Witten's theory [14] is defined as follows.3 If one ignores the ghosts, the dynamics of gravity is governed by a self-dual Weyl action \[S_{g}=\int d^{4}x\sqrt{g}\frac{1}{2}(W+\star W)^{2} \tag{3.4}\] where \(\star\) is the Hodge dual. Footnote 3: It can be obtained by applying the Batalin-Vilkovisky (BV) formalism [15, 16, 17] to the topological action \(W\wedge W\) where \(W\) is the Weyl tensor [18]. This action is scale invariant classically but would generally have a conformal anomaly at the quantum level. In addition, conformal symmetry is broken in Witten's topological gravity by a vev of a scalar field (denoted as \(\Phi\)) that is required for the action to be non-degenerate. Despite the fact that the usual Weyl-tensor-squared gravity has ghosts and is non-unitary, this topological theory is unitary [14], as the non-unitary correlations are not allowed observables of the topological theory. 
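To make the metric independence of the Wilson loop observables (3.2)-(3.3) more concrete, the following is a minimal numerical sketch (not taken from [13] or [14]): it approximates the path-ordered exponential around a closed curve C by an ordered product of group elements attached to the segments of C and takes the trace in the fundamental representation. The choice of gauge group SU(2), the discretization into eight segments, and the random link variables are illustrative assumptions; the only point is that no metric enters the computation anywhere.

```python
import numpy as np

# Pauli matrices, used to build SU(2) group elements.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)

def random_su2():
    """A random SU(2) element U = cos(a) 1 + i sin(a) n.sigma, standing in for the
    holonomy of the connection A along one segment of the curve C."""
    a = rng.uniform(0.0, np.pi)
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)
    return np.cos(a) * np.eye(2) + 1j * np.sin(a) * (n[0] * sx + n[1] * sy + n[2] * sz)

# Holonomies attached to the segments of a closed curve C (8 segments here).
links = [random_su2() for _ in range(8)]

# Path-ordered product around C, then the trace in the fundamental representation:
# a discretized analogue of W_R(C) = Tr_R P exp \oint_C A_i dx^i.
holonomy = np.eye(2, dtype=complex)
for U in links:
    holonomy = U @ holonomy
print("W(C) =", np.trace(holonomy).real)
```

Only the ordering of the segments along C enters the result, which is the discrete counterpart of the statement that (3.2) depends on the curve and the connection but not on any background metric.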
We also note that the Einstein-Hilbert term \({\cal R}\) is not generated in Witten's topological gravity because it is forbidden by the BRST symmetry. To see this, one would have to review in more detail the field content and the BRST transformations. Witten's topological gravity includes the metric, or tetrad, bosonic fields \(C_{A\dot{A}}\) and \(B_{A\dot{A}}\) and fermionic fields \(\lambda_{A\dot{A}}\), \(\psi_{AB\dot{A}\dot{B}}\) and \(\chi_{ABCD}\), where the dotted and undotted indices are the \(SU(2)_{L}\times SU(2)_{R}\) spinor indices in four dimensions. The transformations of these fields are summarized in table 1. They \begin{table} \begin{tabular}{|c||c|c|} \hline field & ght & \([Q,\text{field}\}\) \\ \hline \hline \(C_{A\dot{A}}\) & 2 & \(\psi_{AB,\dot{A}\dot{B}}C^{BB}\) \\ \hline \(\psi_{AB,\dot{A}\dot{B}}\) & 1 & \(-\frac{i}{4}\big{(}e^{\alpha}_{A\dot{A}}D_{\alpha}C_{B\dot{B}}+e^{\alpha}_{B \dot{A}}D_{\alpha}C_{A\dot{B}}+e^{\alpha}_{A\dot{B}}D_{\alpha}C_{B\dot{A}}+e^{ \alpha}_{B\dot{B}}D_{\alpha}C_{A\dot{A}}\big{)}\) \\ \hline \(e_{\alpha A\dot{A}}\) & 0 & \(e^{BB}_{\alpha}\psi_{AB,\dot{A}\dot{B}}\) \\ \hline \(W_{ABCD}\) & 0 & \(\frac{1}{6}(\psi_{AB,}^{AB}R_{CD,A\dot{B}}-e^{\alpha}_{C\dot{C}}e^{\beta}_{D \dot{D}}D_{\alpha}D_{\beta}\psi_{AB,}^{CD})\) \\ & & \(+\) 5 permutations of \(A,B,C,D\) \\ \hline \(\chi_{ABCD}\) & \(-1\) & \(-iW_{ABCD}\) \\ \hline \(\lambda_{A\dot{A}}\) & \(-1\) & \(-iC^{\alpha}D_{\alpha}B_{A\dot{A}}+``(\psi\psi+eDC)B"-\frac{i}{4}B_{A\dot{A}}e ^{\beta}_{X\dot{X}}D_{\beta}C^{XX}\) \\ \hline \(B_{A\dot{A}}\) & \(-2\) & \(\lambda_{A\dot{A}}\) \\ \hline \end{tabular} \end{table} Table 1: Field content of Witten’s topological gravity [14]. Second column is ghost number. determine the conditions the bosonic backgrounds must satisfy in order to have a BRST-invariant vacuum. As in supersymmetry, these conditions are obtained by requiring that the variations of the fermionic fields vanish. Included in the variations is the condition \(\delta\chi_{ABCD}=W_{ABCD}=0\), which implies that the universe in phase I must be conformally half-flat. Using the variation of the fermionic field \(\chi_{ABCD}\) one can show that the Einstein-Hilbert term \(\mathcal{R}\) does not appear among the manifestly \(Q\)-exact terms. In order to give the fields a conventional kinetic term it was proposed in [14] that topological gravity is coupled, in addition to the fields discussed above, to topological matter and a topological invariant field \(\Phi\) couples to some of the fields in the topological theory whose vev \(\langle\Phi\rangle=v_{0}^{2}\) will give rise to the desired kinetic term. This term breaks scale invariance, but we will be assuming that the vev \(v_{0}\) is sufficiently small, as not to break the scaling symmetry of the topological theory. Gravitational theories of such a topological nature have an intriguing physical interpretation [19]. They are believed to be confined phases of gravity where general covariance is unbroken. Once the metric acquires an expectation value (i.e. there is a background spacetime) then this symmetry is spontaneously broken and local gravitational excitations, gravitons, emerge. Here an analogy can again be made to QCD with an unbroken local symmetry and no massless gauge bosons. Finally, there is the question of observables in topological gravity. As in all topological theories, these would be position independent expectation values of operators in \(Q\)-cohomology. 
In addition, the absence of spin-2 excitations implies the absence of tensor modes in cosmological observables. ### 3.3 Fang and Gu's topological gravity We consider another topological theory, by Fang and Gu [20, 21]. The TQFT approach cannot be easily generalized to 3+1D because consistency with Einstein's gravity in 3+1D requires a propagating mode, the graviton; therefore it is obviously not a case for a TQFT in the usual sense. Secondly, there is no Chern-Simons like action in 3+1D. Fang and Gu have shown that Einstein gravity might emerge by adding a topological mass term of the 2-form gauge field. Physically, such a phenomenological theory might describe a loop condensing phase, i.e. flux lines in the context of gauge theory. Due to the recent developments in the classification of topological phases of quantum matter in higher dimensions [22, 23, 24, 25], new types of TQFT have been discovered in 3+1D to describe the so-called three-loop-braiding statistics. It is argued that such types of TQFT are closely related to Einstein gravity and that the gravitational field will disappear at extremely high energy scales. 3+1D quantum gravity (QG) would be controlled by a TQFT renormalization group fixed point. At intermediate energy scales, Einstein gravity and classical space time would emerge via loop (flux lines) condensation of the underlying TQFT.4 Footnote 4: The uncondensed loop-like excitations are a natural candidate for dark matter. Such dark matter will not contribute to scalar curvature but will be a direct source of torsion. Normal matter, like Dirac fermions, will not contribute to torsion. Let us begin with the topological gravity theory in 3+1D [26]. Consider the following topological invariant action: \[S_{top}=\frac{k_{1}}{4\pi}\int\varepsilon_{abcd}R^{ab}\wedge e^{c}\wedge e^{d}+\frac{k_{2}}{2\pi}\int B_{ab}\wedge R^{ab}+\frac{k_{3}}{2\pi}\int\widetilde{B}_{a}\wedge T^{a}, \tag{3.5}\] where \(e\) is the tetrad field, \(R\) is the curvature tensor, \(T\) is the torsion tensor and \(B,\widetilde{B}\) are 2-form gauge fields. Like in the CS theory, the values of \(k_{i}\) are quantized. Without loss of generality, the values \(k_{1}=k_{2}=2\) and \(k_{3}=1\) can be chosen for convenience. The above action is invariant under the following (twisted) 1-form and 2-form gauge transformations, respectively: \[e^{a}\rightarrow e^{a}+Df^{a}\] \[B_{ab}\rightarrow B_{ab}-\frac{k_{3}}{2k_{2}}\left(\widetilde{B}_{a}f_{b}-\widetilde{B}_{b}f_{a}\right)\] \[\widetilde{B}_{a}\rightarrow\widetilde{B}_{a}-\frac{k_{1}}{k_{3}}\varepsilon_{abcd}f^{b}R^{cd}, \tag{3.6}\] and \[B_{ab}\rightarrow B_{ab}+D\xi_{ab}, \tag{3.7}\] \[\widetilde{B}_{a}\rightarrow\widetilde{B}_{a}+D\tilde{\xi}_{a}\] \[B_{ab}\rightarrow B_{ab}-\frac{k_{3}}{2k_{2}}\left(\tilde{\xi}_{a}\wedge e_{b}-\tilde{\xi}_{b}\wedge e_{a}\right). \tag{3.8}\] Such an action can be regarded as the non-Abelian generalization of \(AAdA+BF\) type TQFT [27, 28, 29] of the Poincare gauge group. Physically, it has been shown that this kind of TQFT describes the three-loop-braiding statistics [30, 31]. As a TQFT, the action Eq. (3.5) is a super-renormalizable theory. The coefficient quantization and canonical quantization of such a theory are discussed in [21]. The SUSY generalization of 3+1D topological gravity is discussed in [20]. One needs to introduce the gauge connection of the super Poincare group and write the action as \(\int sTr[A\wedge A\wedge(dA+A\wedge A)]+\int sTr(B\wedge F)\). 
For the \(N=1\) case, one can express \(A\), \(B\) and \(F\) as follows \[A_{\mu}\equiv\frac{1}{2}\omega_{\mu}^{ab}M_{ab}+e_{\mu}^{a}P_{a} +\bar{\psi}_{\mu\alpha}Q^{\alpha}\] \[B_{\mu\nu}\equiv\frac{1}{2}B_{\mu\nu}{}^{ab}M_{ab}+\tilde{B}_{ \mu\nu}^{a}P_{a}+\mathfrak{B}_{\mu\nu\alpha}Q^{\alpha}\] \[F_{\mu\nu}\equiv\frac{1}{2}R_{\mu\nu}{}^{ab}M_{ab}+T_{\mu\nu}^{a }P_{a}+\bar{R}_{\mu\nu\alpha}Q^{\alpha} \tag{3.9}\] Here \(R_{\mu\nu\alpha}\) is the super curvature tensor defined as \(R_{\mu\nu\alpha}=D_{\mu}\bar{\psi}_{\nu\alpha}-D_{\nu}\bar{\psi}_{\mu\alpha}\) where \(D_{\mu}\) is the covariant derivative for spinor fields. Fermionic loops (flux lines) cannot be condensed. Therefore supersymmetry breaking happens at very high energy scale when bosonic loops condense and classical space-time emerges. Although the total action S is super-renormalizable, it does not imply a UV-complete quantum gravity theory due to explicit breaking of 2-form gauge symmetries by the \(S_{\theta}=-\frac{\theta}{2\pi}\int B_{ab}\wedge B^{ab}\) term. The algebraic tensor 2-category theory [32, 32] may provide an equivalent UV-complete description for a topological quantum gravity theory in 3+1D. ### Flatness We are assuming no observables of frame I will distinguish positions, so the metric should be homogeneous, i.e. a constant curvature metric. We note that the time direction is picked out as an invariant concept in both phases. We would like to determine the consequences of this for the geometry in phase I as viewed from the frame II perspective. The most general metric with these symmetries is \[ds^{2}=-dt^{2}+a^{2}(t)\left[\frac{dr^{2}}{(1-kr^{2})}+r^{2}d\Omega^{2}\right] \tag{3.10}\] where \(k=+1,0,-1\) for positive, flat or negative curvature spaces. However, as discussed above, the solutions to BRST [11, 12] invariant configurations in 4D topological gravity are conformally flat self-dual geometries, which have \[W_{ABCD}=0. \tag{3.11}\] This condition by itself allows all three possibilities above. We will view time as a continuous element between phase I and phase II. Thus, a natural assumption is that the metric can be expressed as a flat metric up to a conformal factor that is only dependent on time, which is the only duality invariant coordinate. This is equivalent to having an FLRW metric (3.10) with \(k=0\) \[ds^{2}=a^{2}(\eta)(-d\eta^{2}+dx^{i}dx^{i}) \tag{3.12}\] Moreover in phase II, since the metric should smoothly connect, we learn that at the beginning of the FLRW cosmology, the universe is spatially flat, which is proper for phase O. ## 4 Topological early phases versus inflation In this section we compare and contrast the three phase topological scenario with the inflationary scenario. There are a number of common features in the two approaches as can be seen in Fig. 4. The end result for both is the FLRW scenario. Both of them involve a kind of phase transition. In the case of inflation the transition is marked by the end of inflation and reheating as the inflaton settles to the minimum of the potential. In the case of the topological scenario the phase transition takes place by a topology and symmetry change process [34, 35] followed immediately by confinement of the new topological objects, discussed in the next section 5. In both scenarios we have a nearly homogeneous thermal initial condition for FLRW in phase II. 
In both scenarios the homogeneity of space is described by a novel phenomenon: in the inflationary scenario by the exponential expansion of space, and in the topological phase by the fact that gravity is described by a topological theory. In the inflationary scenario the fluctuations of the inflaton field lead to scalar fluctuations, whereas the topological phase involves only global/zero modes, and only through scale anomalies do we get fluctuations in the otherwise thermal background. Detailed properties and predictions of the topological inflation are presented in [1]. Briefly said, the relevant processes take place just as in other successful models. After reheating everything goes as in the standard model of cosmology. Figure 4: Comparison between the inflationary and topological paradigms for the early universe. The topological scenario replaces the period of accelerated expansion by a topological phase to explain homogeneity, isotropy, flatness and near scale invariance. In both paradigms, the universe for \(t>0\) is well described by the standard Big Bang cosmology. ## 5 Chern-Simons model in phase O The initial 4D topological universe in phase I transforms first to the Chern-Simons topological phase O and finally, dynamically through attractive chernon interactions, to the present universe in phase II. Chernon interactions are 2+1 dimensional inside a 3+1D world. The topological space of phase I makes a phase transition into a metric spacetime generated by the newly formed chernon masses (see (5.3)). The boson and fermion states correspond to each other in phases I and O. A summary of the three phases and their properties is given in table 2. Chern-Simons-Maxwell (CSM) models have been studied in condensed matter physics, e.g. [36, 37, 38]. In this note we apply the CSM model to particle physics phenomenology at high energy in the early universe. We construct the visible matter of two fermionic chernons: (i) one charged \(m^{-}\), (ii) one neutral \(m_{V}^{0}\), V = R, G, B, carrying QCD color, and the photon. The Wess-Zumino [39] type action [2] is supersymmetric as well as C symmetric. The chernons have zero (or very small) mass. Weak interactions operate below \(\Lambda_{cr}\) between quarks and leptons, just as in the SM. The chernon baryon (B) and lepton (L) numbers are zero. Given these quantum numbers, quarks consist of three chernons, as indicated in table 3.5 Footnote 5: There are more combinations of states, like those containing an \(m^{+}m^{-}\) pair. This state annihilates immediately into other chernons, which later form leptons and quarks. In [38] a 2+1 dimensional Chern-Simons (CS) action [40, 13] was used to derive the chernon-chernon interaction, which turns out to trigger the second phase transition between O and II. In 2+1 dimensions, a fermionic field has its spin polarization fixed by the sign of its mass [41]. The model includes two positive-energy spinors (two spinor families) and a complex scalar \(\varphi\). The fermions obey the Dirac equation, each one with one polarization state according to the sign of the mass parameter. The vacuum expectation value \(v\) of the scalar field \(\varphi\) is given by: \[\langle\varphi^{*}\varphi\rangle=v^{2}=-\zeta/\left(2\lambda\right)+\left[\left(\zeta/\left(2\lambda\right)\right)^{2}-\mu^{2}/\lambda\right]^{1/2} \tag{5.1}\] The condition for its minimum is \(\mu^{2}+\frac{\zeta}{2}v^{2}+\lambda v^{4}=0\). 
After the spontaneous symmetry breaking, the scalar complex field can be parametrized by \(\varphi=v+H+i\theta\), where \(H\) represents the Higgs scalar field and \(\theta\) the would-be Goldstone boson. For manifest renormalizability one adopts the 't Hooft gauge by adding the gauge fixing term \(S_{R_{\xi}}^{gt}=\int d^{3}x[-\frac{1}{2\xi}(\partial^{\mu}A_{\mu}-\sqrt{2}\xi M_{A}\theta)^{2}]\) to the broken action. Keeping only the bilinear and the Yukawa interaction terms one has the following action \[S^{\rm SSB}_{\rm CS-QED}=\int d^{3}x\biggl\{-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+\frac{1}{2}M_{A}^{2}A^{\mu}A_{\mu}-\frac{1}{2\xi}(\partial^{\mu}A_{\mu})^{2}+\overline{\psi}_{+}(i\not{\partial}-m_{eff})\psi_{+}+\overline{\psi}_{-}(i\not{\partial}+m_{eff})\psi_{-}+\frac{1}{2}\theta\epsilon^{\mu\nu\alpha}A_{\mu}\partial_{\nu}A_{\alpha}+\partial^{\mu}H\partial_{\mu}H-M_{H}^{2}H^{2}+\partial^{\mu}\theta\partial_{\mu}\theta-M_{\theta}^{2}\theta^{2}-2yv(\overline{\psi}_{+}\psi_{+}-\overline{\psi}_{-}\psi_{-})H-e_{3}\left(\overline{\psi}_{+}\not{A}\psi_{+}+\overline{\psi}_{-}\not{A}\psi_{-}\right)\biggr\} \tag{5.2}\] where the mass parameters \[M_{A}^{2}=2v^{2}e_{3}^{2},\ \ m_{eff}=m_{ch}+yv^{2},\ \ M_{H}^{2}=2v^{2}(\zeta+2\lambda v^{2}),\ \ M_{\theta}^{2}=\xi M_{A}^{2} \tag{5.3}\] depend on the SSB mechanism. \begin{table} \begin{tabular}{|l||l|l|l|} \hline Ph. & Particles & Dimension & Symmetry \\ \hline I & Witten theory & 4D topol. & \(SU(2)_{L}\times SU(2)_{R}\); SUSY \\ O & chernons & 3D top. \(\in\) 4D & \(SU(3)[\times SU(2)]\times U(1)\); SUSY \\ II & SM particles & metric space & \(SU(3)\times SU(2)\times U(1)\); SUSY \\ \hline \end{tabular} \end{table} Table 2: Development of the universe from phase I to phase O and finally to phase II. Particles of Witten's theory are in table 1. Phase O's role is to retain supersymmetry and to create SM matter, the spacetime metric and the baryon asymmetry of the universe. The term [\(\times SU(2)\)] indicates the appearance of the weak interaction "automatically" between u- and d-quarks as well as between e and \(\nu\). The Proca mass \(M_{A}^{2}\) represents the mass acquired by the photon through the Higgs mechanism. The Higgs mass, \(M_{H}^{2}\), is associated with the real scalar field. The Higgs mechanism also contributes to the chernon mass \(m_{ch}\), resulting in an effective mass \(m_{eff}\). There are two photon mass-terms in (5.2), the Proca and the topological one. The chernon-chernon scattering amplitude in the non-relativistic approximation is obtained by calculating the t-channel exchange diagrams of the Higgs scalar and the massive gauge field. The propagators of the two exchanged particles and the vertex factors are calculated from the action (5.2) [38]. The gauge invariant effective potential for the scattering considered is obtained in [42, 43] \[V_{\rm MCS}(r)=\frac{e^{2}}{2\pi}\left[1-\frac{\theta}{m_{ch}}\right]K_{0}(\theta r)+\frac{1}{m_{ch}r^{2}}\left\{l-\frac{e^{2}}{2\pi\theta}[1-\theta rK_{1}(\theta r)]\right\}^{2} \tag{5.4}\] where \(K_{0}(x)\) and \(K_{1}(x)\) are the modified Bessel functions and \(l\) is the angular momentum (\(l=0\) in this note). In (5.4) the first term [ ] corresponds to the electromagnetic potential, the second one \(\{\ \}^{2}\) contains the centrifugal barrier \(\left(l/mr^{2}\right)\), the Aharonov-Bohm term and the two photon exchange term. One sees from (5.4) that the first term may be positive or negative while the second term is always positive. 
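As an aid to reading (5.4), here is a minimal numerical sketch that simply evaluates the potential with SciPy's modified Bessel functions. The parameter values are purely illustrative assumptions (the note quotes no numerical values for them); they are chosen only to display how the sign of the first term is controlled by the ratio \(\theta/m_{ch}\).

```python
import numpy as np
from scipy.special import k0, k1  # modified Bessel functions K0 and K1

def v_mcs(r, e=1.0, theta=10.0, m_ch=1.0, l=0):
    """Effective chernon-chernon potential of Eq. (5.4), with l = 0 as in the text.
    All parameter values are illustrative assumptions, not fitted numbers."""
    em = (e**2 / (2 * np.pi)) * (1 - theta / m_ch) * k0(theta * r)
    ab = (l - (e**2 / (2 * np.pi * theta)) * (1 - theta * r * k1(theta * r)))**2 / (m_ch * r**2)
    return em + ab

r = np.linspace(0.05, 2.0, 5)
print(v_mcs(r, theta=10.0, m_ch=1.0))  # theta > m_ch: the first term is negative
print(v_mcs(r, theta=0.1, m_ch=1.0))   # theta < m_ch: the first term is positive
```

The attractive (negative) regime of the first term for \(\theta\) much larger than \(m_{ch}\) is the one singled out below by condition (5.5).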
The function \(K_{0}(x)\) diverges as \(x\to 0\) and approaches zero for \(x\rightarrow\infty\), and \(K_{1}(x)\) has qualitatively similar behavior. For our scenario we need a negative potential between equal-charge chernons. Since we have no data points for several of the parameters in (5.4), we can give only one relation between these parameter values for a binding potential. We must require the condition6 \[\theta\gg m_{ch} \tag{5.5}\] Footnote 6: For applications to condensed matter physics, one must require \(\theta\ll m_{e}\), and the scattering potential given by (5.4) then comes out positive [38]. The potential (5.4) also depends on \(v^{2}\), the vacuum expectation value, and on \(y\), the parameter that measures the coupling between the fermions and the Higgs scalar. Being a free parameter, \(v^{2}\) indicates the energy scale of the spontaneous breakdown of the \(U(1)\) local symmetry. ## 6 Baryon asymmetry in phase II We now examine the potential (5.4) in the early universe. Consider a large number of groups of twelve chernons, each group consisting of four \(m^{+}\), four \(m^{-}\) and four \(m^{0}\) particles [3]. Any such group may form only an electron and a proton (a hydrogen atom H), only a positron and an antiproton (\(\bar{\rm H}\)), or some combination of both H and \(\bar{\rm H}\) atoms. This is achieved by arranging the chernons appropriately (mod 3) using table 3. This way the transition from a matter-antimatter symmetric universe to a matter-antimatter asymmetric one happens straightforwardly. Because the Yukawa force (5.4) is the strongest force, the light \(e^{-}\), \(e^{+}\) and the neutrinos combine first, each from three chernons, at the very onset of inflation. To obey the condition \(B-L=0\) of baryon-lepton balance and to sustain charge conservation, for one electron made of three chernons, nine other chernons have to be created simultaneously; these form a proton.7 Correspondingly for positrons. One neutrino requires a neutron to be created. The \(m^{0}\) in addition carries color, enhancing neutrino formation. This makes neutrinos different from the other leptons and the quarks. Footnote 7: Note that instead of particle-antiparticle charge symmetry we form effectively \(e^{-}p^{+}\) charge symmetry to get baryon asymmetry. Later, when the protons were formed, because the chernons had the freedom to end up as constituents of either H or \(\bar{\rm H}\), there are regions of space of various sizes dominated by H or \(\bar{\rm H}\) atoms. Since the universe is the largest statistical system, it is expected that only a very slight excess of H atoms (or \(\bar{\rm H}\) atoms, which only means a charge sign redefinition) remains after the equal amounts of H and \(\bar{\rm H}\) atoms have annihilated. The ratio \(n_{B}/n_{\gamma}\) is thus predicted to be \(\ll 1\). ## 7 Conclusions The treatment of the topological phase O, the SUSY transfer to it, the birth of metric spacetime, and SUSY confinement in phase II of the universe are the main points in this note. In order to explore aspects of the early universe in more detail we need a more precise description of phases I and O. Here we have extended our previous preon/chernon model to scales above \(\Lambda_{cr}\) up to \(M_{\rm Pl}\). For that purpose we have considered two models of 4D topological gravity in phase I, one proposed some time ago by Witten [14] and the other more recently by Fang and Gu [20, 21]. The latter seems to show more potential in the three phase evolutionary scenario, including QG tentatively as an effective field theory. 
There are three possibilities for the fate of supersymmetry: no SUSY at all, highly broken SM SUSY, and confined SUSY (in chernons or in some other way). We consider the first case unlikely. The second case has been studied thoroughly, with some success, but the sparticles are still missing. The third case, described above, agrees with the standard model particle spectrum (1st generation) and provides an answer to the matter-antimatter asymmetry by the mechanism presented in [3] and recapped in section 6. We conclude it is premature to consider supersymmetry nonexistent. Finally, a word of philosophical caution from the article of Karen Crowther and Niels Linnemann [44]. We cite them: "There is no requirement that QG be valid to arbitrarily high-energy scales (or to the shortest length scales), and thus, UV-completion cannot be taken as a criterion of theory acceptance. Instead, the necessary requirement is more modest: that the theory be _UV-better_ (than what we have now)--i.e., that it be valid at the Planck scale. UV-completion only makes sense as criterion within approaches whose goal is a ToE--yet, most approaches to QG do not have this aim." The problem with "Everything" is that we do not know what surprises future experiments will reveal about the universe. Our goal is an "all-inclusive" model of the known universe rather than a ToE, preferably UV-complete. ## Appendix A Chernon-particle correspondence Table 3 gives the chernon content of SM matter and a proposal for dark matter.
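Table 3 itself is not reproduced in this text, but the charge bookkeeping used in section 6 can be illustrated with a short sketch. The compositions below are an assumption of a Harari-Shupe-type assignment (electric charges +1/3, -1/3 and 0 for \(m^{+}\), \(m^{-}\) and \(m^{0}\)); they are a hypothetical reading of table 3, used only to check that a group of four \(m^{+}\), four \(m^{-}\) and four \(m^{0}\) can be assembled either into a hydrogen atom (p plus \(e^{-}\)) or into an anti-hydrogen atom (\(\bar{p}\) plus \(e^{+}\)).

```python
from collections import Counter

# Assumed (Harari-Shupe-like) chernon compositions; table 3 of the note is the
# authoritative source, and these entries are only an illustrative guess consistent
# with charges m+ = +1/3, m- = -1/3, m0 = 0.
COMPOSITION = {
    "e-": ["m-"] * 3,
    "e+": ["m+"] * 3,
    "u": ["m+", "m+", "m0"],
    "d": ["m-", "m0", "m0"],
    "ubar": ["m-", "m-", "m0"],
    "dbar": ["m+", "m0", "m0"],
}

def chernons(*particles):
    """Total chernon content of a collection of composite particles."""
    total = Counter()
    for p in particles:
        total.update(COMPOSITION[p])
    return total

hydrogen = chernons("u", "u", "d", "e-")                # p = uud, plus an electron
anti_hydrogen = chernons("ubar", "ubar", "dbar", "e+")  # pbar = ubar ubar dbar, plus a positron

# Both print a multiset with four m+, four m- and four m0.
print(hydrogen)
print(anti_hydrogen)
```

Under these assumed compositions the same twelve-chernon multiset can end up either as H or as \(\bar{\rm H}\), which is the freedom that section 6 uses to generate a small statistical excess of one over the other.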
2305.02763
VendorLink: An NLP approach for Identifying & Linking Vendor Migrants & Potential Aliases on Darknet Markets
The anonymity on the Darknet allows vendors to stay undetected by using multiple vendor aliases or frequently migrating between markets. Consequently, illegal markets and their connections are challenging to uncover on the Darknet. To identify relationships between illegal markets and their vendors, we propose VendorLink, an NLP-based approach that examines writing patterns to verify, identify, and link unique vendor accounts across text advertisements (ads) on seven public Darknet markets. In contrast to existing literature, VendorLink utilizes the strength of supervised pre-training to perform closed-set vendor verification, open-set vendor identification, and low-resource market adaption tasks. Through VendorLink, we uncover (i) 15 migrants and 71 potential aliases in the Alphabay-Dreams-Silk dataset, (ii) 17 migrants and 3 potential aliases in the Valhalla-Berlusconi dataset, and (iii) 75 migrants and 10 potential aliases in the Traderoute-Agora dataset. Altogether, our approach can help Law Enforcement Agencies (LEA) make more informed decisions by verifying and identifying migrating vendors and their potential aliases on existing and Low-Resource (LR) emerging Darknet markets.
Vageesh Saxena, Nils Rethmeier, Gijs Van Dijck, Gerasimos Spanakis
2023-05-04T12:04:33Z
http://arxiv.org/abs/2305.02763v1
VendorLink: An NLP approach for Identifying & Linking Vendor Migrants & Potential Aliases on Darknet Markets ###### Abstract The anonymity on the Darknet allows vendors to stay undetected by using multiple vendor aliases or frequently migrating between markets. Consequently, illegal markets and their connections are challenging to uncover on the Darknet. To identify relationships between illegal markets and their vendors, we propose VendorLink, an NLP-based approach that examines writing patterns to verify, identify, and link unique vendor accounts across text advertisements (ads) on seven public Darknet markets. In contrast to existing literature, VendorLink utilizes the strength of supervised pre-training to perform closed-set vendor verification, open-set vendor identification, and low-resource market adaption tasks. Through VendorLink, we uncover (i) 15 migrants and 71 potential aliases in the Alphabay-Dreams-Silk dataset, (ii) 17 migrants and 3 potential aliases in the Valhalla-Berlusconi dataset, and (iii) 75 migrants and 10 potential aliases in the Traderoute-Agora dataset. Altogether, our approach can help Law Enforcement Agencies (LEA) make more informed decisions by verifying and identifying migrating vendors and their potential aliases on existing and Low-Resource (LR) emerging Darknet markets. 1 Footnote 1: Our code implementation is publicly available at [https://github.com/maastrichtlawtech/VendorLink.git](https://github.com/maastrichtlawtech/VendorLink.git) ## 1 Introduction Conventional search engines index surface-web websites that only constitute 4% of the entire internet (Georgiev, 2021). The remainder comprises 90% Deep Web (not indexed) and 6% Darknet, which uses advanced anonymity-enhancing protocols (Georgiev, 2021). While the former serves legitimate purposes requiring anonymity, the latter is also used for illegal activities such as financial fraud (ENISA, 2018), child exploitation (Bruggen and Blokland, 2021), and trading of illicit weapons (Weimann, 2016; Persi Paoli et al., 2017), prohibited drugs, and chemicals (Kruithof et al., 2016). Given the Darknet's scope, size, and anonymity, it is difficult for LEA to uncover connections between illegal marketplaces (Vogt, 2017). While manual detection of such connections is a time-consuming and resource-intensive process, the recent success of online scrapers (Fu et al., 2010; Hayes et al., 2018) and monitoring systems (Schafer et al., 2019; Godawatte et al., 2019) has enabled researchers and LEA to analyze (Easttom, 2018; Faizan and Khan, 2019; Goodison et al., 2019; Davies, 2020) and automatically identify (Al Nabki et al., 2017; Ghosh et al., 2017; Ubbink et al., 2019; He et al., 2019) Darknet contents. This research proposes a vendor verification and identification approach to help LEA make better decisions by linking vendors, offloading manual labor, and generating similarity-based analyses. In contrast to the existing Darknet literature (He et al., 2015; Ekambaranathan, 2018; Tai et al., 2019; Kumar et al., 2020; Manolache et al., 2022), VendorLink, as illustrated in Figure 1, emphasizes the following contributions to the problem of verifying and identifying vendors on Darknet markets: (i) Closed-Set Vendor Verification Task: Due to limited resources, LEA prioritizes investigating Darknet vendors based on the size and nature of their trade. Thus, Darknet vendors often distribute their business across multiple markets to stay undetected. 
Likewise, some vendors relocate and resume their business in other markets after a market is seized (Booij et al., 2021). We refer to these vendors as migrants.
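To make the closed-set verification idea concrete, the sketch below frames it as supervised text classification over a fixed set of known vendors. The ads and vendor handles are invented placeholders, and the TF-IDF plus logistic-regression pipeline is only a simple stylometric baseline under those assumptions, not the supervised pre-trained transformer that VendorLink actually uses.

```python
# Minimal sketch of closed-set vendor verification as text classification.
# All ads and vendor handles below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_ads = [
    "top quality product, ships worldwide, pgp contact",
    "fast stealth shipping, reship guaranteed, message for bulk",
    "lab tested, escrow only, tracking on request",
    "bulk discounts, same day dispatch, pgp in profile",
]
train_vendors = ["vendorA", "vendorB", "vendorA", "vendorB"]  # hypothetical handles

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams capture writing style
    LogisticRegression(max_iter=1000),
)
clf.fit(train_ads, train_vendors)

# Closed-set setting: every new ad is attributed to one of the known vendors.
print(clf.predict(["worldwide shipping, top quality, contact via pgp"]))
```

In the same spirit, potential aliases across markets can be flagged when ads posted under different handles are repeatedly assigned to the same stylistic class, which is the kind of similarity-based analysis the approach is meant to support.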
2307.03204
A Scalable Approach to Performing Multiplication and Matrix Dot-Products in Unary
Stochastic computing is a paradigm in which logical operations are performed on randomly generated bit streams. Complex arithmetic operations can be executed by simple logic circuits, resulting in a much smaller area footprint compared to conventional binary counterparts. However, the random or pseudorandom sources required for generating the bit streams are costly in terms of area and offset the advantages. Additionally, due to the inherent randomness, the computation lacks precision, limiting the applicability of this paradigm. Importantly, achieving reasonable accuracy in stochastic computing involves high latency. Recently, deterministic approaches to stochastic computing have been proposed, demonstrating that randomness is not a requirement. By structuring the computation deterministically, exact results can be obtained, and the latency greatly reduced. The bit stream generated adheres to a "unary" encoding, retaining the non-positional nature of the bits while discarding the random bit generation of traditional stochastic computing. This deterministic approach overcomes many drawbacks of stochastic computing, although the latency increases quadratically with each level of logic, becoming unmanageable beyond a few levels. In this paper, we present a method for approximating the results of the deterministic method while maintaining low latency at each level. This improvement comes at the cost of additional logic, but we demonstrate that the increase in area scales with the square root of n, where n represents the equivalent number of binary bits of precision. Our new approach is general, efficient, composable, and applicable to all arithmetic operations performed with stochastic logic. We show that this approach outperforms other stochastic designs for matrix multiplication (dot-product), which is an integral step in nearly all machine learning algorithms.
Yadu Kiran, Marc Riedel
2023-07-05T23:20:28Z
http://arxiv.org/abs/2307.03204v1
# A Scalable Approach to Performing Multiplication and Matrix Dot-Products in Unary ###### Abstract Stochastic computing is a paradigm in which logical operations are performed on randomly generated bit streams. Complex arithmetic operations can be executed by simple logic circuits, resulting in a much smaller area footprint compared to conventional binary counterparts. However, the random or pseudorandom sources required for generating the bit streams are costly in terms of area and offset the advantages. Additionally, due to the inherent randomness, the computation lacks precision, limiting the applicability of this paradigm. Importantly, achieving reasonable accuracy in stochastic computing involves high latency. Recently, deterministic approaches to stochastic computing have been proposed, demonstrating that randomness is _not_ a requirement. By structuring the computation deterministically, exact results can be obtained, and the latency greatly reduced. The bit stream generated adheres to a "unary" encoding, retaining the non-positional nature of the bits while discarding the random bit generation of traditional stochastic computing. This deterministic approach overcomes many drawbacks of stochastic computing, although the latency increases quadratically with each level of logic, becoming unmanageable beyond a few levels. In this paper, we present a method for _approximating_ the results of the deterministic method while maintaining low latency at each level. This improvement comes at the cost of additional logic, but we demonstrate that the increase in area scales with \(\sqrt{n}\), where \(n\) represents the equivalent number of binary bits of precision. Our new approach is general, efficient, composable, and applicable to all arithmetic operations performed with stochastic logic. We show that this approach outperforms other stochastic designs for matrix multiplication (dot-product), which is an integral step in nearly all machine learning algorithms. ## 1 Introduction In stochastic computing, randomly generated streams of 0's and 1's are used to represent fractional numbers. The number represented by a bit stream corresponds to the probability of observing a 1 in the bit-stream at any given point in time. The advantage of this representation is that complex operations can be performed with simple logic, owing to the non-positional nature of the bits. For instance, multiplication can be performed with a single AND gate, and scaled addition can be performed with a single multiplexer. The simplicity and scalability of these operations make computing in this domain very appealing for applications that handle large amounts of data, especially in the wake of Moore's Law slowing down. Machine learning models are one such application that ticks all the boxes. The drawbacks of the conventional stochastic model are as follows: 1) the latency is high, and 2) due to randomness, the accuracy is low. Latency and accuracy are related parameters: to achieve acceptable accuracy, high latency is required (1). Recently, a "deterministic" approach to stochastic computing has been proposed (2) that uses all the same structures as stochastic logic but on deterministically generated bit streams. Deterministic approaches incur lower area costs since they generate bit streams with counters instead of expensive pseudo-random sources such as linear feedback shift registers (LFSRs). 
Most importantly, the latency is reduced by a factor of approximately \(\frac{1}{2^{n}}\), where \(n\) is the equivalent number of bits of precision. However, the latency is still an issue as it increases quadratically for each level of logic. Any operation involving two \(2^{n}\)-bit input bit streams will produce a resulting bit stream of length \(2^{2n}\) bits. This is a mathematical requirement: for an operation such as multiplication, the range of values of the product scales with the range of values of the operands. However, most computing systems operate on constant precision operands and products. Since this is not sufficient to represent the \(2^{2n}\) output in full precision, we will have approximation errors. Our primary goal is to minimize this error. Recent papers have discussed techniques for approximating the deterministic computation with quasirandom bit streams, such as Sobol sequences (3, 4, 5, 6). Unfortunately, the area cost of these implementations is high: the logic to generate the quasirandom bit streams is complex and grows quickly as the number of bit streams increases, in most cases completely offsetting the benefits. In this paper, we present a scalable deterministic approach that maintains constant bit stream lengths and approximates the results. This approach has much lower area cost than the quasirandom sequence approach. We structure the computation by _directly_ pairing up corresponding bits from the input bit streams using only simple structures such as counters. Not only does our approach achieve a high degree of accuracy for the given bits of precision, but it also maintains the length of the bit streams. This property lends _composability_ to our technique, allowing multiple operations to be chained together. Maintaining a constant bit stream length comes at the cost of additional logic, but we demonstrate that the increase in area scales with \(\sqrt{n}\), where \(n\) is the number of binary bits of precision. The new approach is general, efficient, and applicable to all arithmetic operations performed with stochastic logic. It outperforms other state-of-the-art stochastic techniques in both accuracy and circuit complexity. We also evaluate our approach with matrix dot-product, an integral set in machine learning algorithms. We demonstrate that our approach is a good fit for machine learning, as it allows one to increase the precision of the inputs while preserving the bit-length/latency at the output. As the bit streams are no longer random, the term "stochastic" would be an oxymoron. The bit streams generated for any particular operand follow a "unary" encoding, where all the 1's are clustered together, followed by all the 0's (or vice versa). Hence, we shall refer to this approach as "unary" computing in this paper. This paper is structured as follows: Section 2 provides a brief overview and background of stochastic computing. Section 3 presents our new approach. Section 4 provides the mathematical reasoning behind our design. Section 5 details the gate-level implementation. Section 6 evaluates our method and compares and contrasts it with prior stochastic approaches. Finally, Section 7 outlines the implications of this work. ## 2 Background Information ### Introduction to Stochastic Computation The paradigm of stochastic logic (sometimes called stochastic "computing") operates on non-positional representations of numbers (7). 
Bit streams represent fractional numbers: a real number \(x\) in the unit interval (i.e., \(0\leq x\leq 1\)) corresponds to a bit stream \(X(t)\) of length \(L\), where \(t=1,2,...,L\). If the bit stream is randomized, then for precision equivalent to conventional binary with precision \(n\), the length of the bit stream \(L\) must be \(2^{2n}\) (8). The probability that each bit in the stream is 1 is denoted by \(P(X=1)=x\). Below is an illustration of how the value \(\frac{5}{8}\) can be represented with bit streams. Note that the representation is not unique, as demonstrated by the four possibilities in the figure. There also exists a bipolar format which can be used to natively represent negative numbers, but for the sake of simplicity, we shall restrict our discussions to the unipolar format. The concepts we discuss can also be applied to the bipolar format. In general, with a stochastic representation, the positions of the 1's and 0's do not matter. [Figure: four different example bit streams, each representing the value \(\frac{5}{8}\).] Common arithmetic operations that operate on probabilities can be mapped efficiently to logical operations on unary bit-streams. \(\bullet\)**Multiplication**. Consider a two-input AND gate whose inputs are two independent bit streams \(X_{1}(t)\) and \(X_{2}(t)\), as shown in Fig. 1(a). The output bit stream \(Y\) is given by \[y =P(Y=1)=P(X_{1}=1\text{ and }X_{2}=1)\] \[=P(X_{1}=1)P(X_{2}=1)=x_{1}x_{2}.\] \(\bullet\)**Scaled Addition**. Consider a two-input multiplexer whose inputs are two independent stochastic bit streams \(X_{1}\) and \(X_{2}\), and its selecting input is a stochastic bit stream \(S\), as shown in Fig. 1(b). The output bit stream \(Y\) is given by \[y =P(Y=1)\] \[=P(S=1)P(X_{1}=1)+P(S=0)P(X_{2}=1)\] \[=sx_{1}+(1-s)x_{2}.\] Figure 1: Complex functions such as exponentiation, absolute value, square roots, and hyperbolic tangent can each be computed with a small number of gates [(9, 10)]. ### The Deterministic Approach to Stochastic Computing In conventional stochastic logic, the bit streams are generated from a random source such as a linear feedback shift register (LFSR). The computations performed on these randomly generated bit streams are not always accurate. The figure below demonstrates a worst-case scenario where multiplying two input bit-streams corresponding to probabilities \(\frac{3}{5}\) and \(\frac{2}{5}\) results in an output of probability \(\frac{0}{5}\). [Figure: two random bit streams representing \(\frac{3}{5}\) and \(\frac{2}{5}\) whose bitwise AND contains no 1's, i.e., represents \(\frac{0}{5}\).] Consider instead a _unary encoding_, one in which all the 1's appear consecutively at the start, followed by all the 0's (or vice-versa), as shown below. This is also referred to by some as "thermometer encoding". \[\frac{3}{4}\!\Rightarrow\!1110\quad\frac{5}{8}\!\Rightarrow\!11111000\] This encoding is not a requirement, but rather a consequence of the circuit used to generate deterministic bit streams, shown in Fig. 2. For a computation involving \(n\)-bit precision operands, the setup involves an \(n\)-bit register, counter, and comparator. The register stores the corresponding binary value of the input operand. The bit stream is generated by comparing the value of the counter to the value stored in the register. The counter runs from 0 to \(2^{n}-1\) sequentially, so the resulting bit-stream inherits a thermometer encoding. 
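As a concrete illustration of the register/counter/comparator generator just described, the minimal Python sketch below (ours, not from the paper) emulates it in software: the counter sweeps 0 to \(2^{n}-1\) and the output bit is 1 whenever the counter value is below the stored operand, which yields the thermometer encoding.

```python
# Software emulation of the deterministic bit-stream generator:
# register value compared against a counter sweeping 0 .. 2^n - 1.

def unary_stream(value, n_bits):
    """Generate a 2^n-bit unary (thermometer) stream encoding value / 2^n."""
    length = 2 ** n_bits
    return [1 if counter < value else 0 for counter in range(length)]

print(unary_stream(5, 3))   # 5/8 -> [1, 1, 1, 1, 1, 0, 0, 0]
print(unary_stream(3, 2))   # 3/4 -> [1, 1, 1, 0]
```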
A "deterministic" approach to stochastic computation was proposed, where the computation is performed on bit-streams which are generated deterministically, resulting in a unary encoding (2). By deterministically generating bit streams, all stochastic operations can be implemented efficiently by maintaining the following property: _every bit of one operand must be matched up against every bit of the other operand(s) exactly once_. Performing a multiply operation on unary bit-streams using the deterministic approach involves matching every bit of the first operand, with every bit of the second operand once. This is analogous to a Convolution operation, as illustrated below. Holding a bit of one input operand constant, the operation is repeated for each of the bits of the other input operand. The particular approach is known as _clock-division_, due to the division of the clock signal in the circuit for generating the input bit streams. Fig. 3 illustrates the Multiply operation on two operands (\(\frac{3}{4}\) and \(\frac{1}{4}\)) performed stochastically and deterministically. It is evident that the deterministic method achieves perfect accuracy. However, for each level of logic, the bit stream lengths increase. For a multiply operation involving two streams of \(2^{n}\) bits each, the output bit stream is \(2^{2n}\) bits. This is a mathematical requirement in order to represent the full range of values. However, for large values of \(n\), the bit stream lengths become prohibitive. For most applications, one has to maintain a constant bit stream length across all the levels of logic, and hence, an approximation is inevitable (11). We discuss how to do this in Section 3. For an operation such as multiplication, two copies of the circuit in Fig. 2 are used for generating the bit streams of the input operands. As shown in Fig. 4, the counter of the second input operand counts up only when the counter of the first input operand rolls over \(2^{n}-1\). This can be achieved by connecting the AND of all the output lines of the first counter to the clock input of the second counter.
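The following short sketch (again ours, not the authors' implementation) emulates the clock-division pairing in software: each bit of the first operand's stream is held while the second operand's full stream is replayed, so every pair of bits is ANDed exactly once, and the product is recovered exactly from the resulting \(2^{2n}\)-bit output stream.

```python
# Software emulation of the clock-division deterministic multiply on unary streams.

def unary_stream(value, n_bits):
    return [1 if counter < value else 0 for counter in range(2 ** n_bits)]

def clock_division_multiply(a, b, n_bits):
    sa, sb = unary_stream(a, n_bits), unary_stream(b, n_bits)
    out = [bit_a & bit_b for bit_a in sa for bit_b in sb]   # every bit of sa meets every bit of sb once
    return sum(out) / len(out)                              # value encoded by the 2^(2n)-bit output stream

# 3/4 * 1/4 = 3/16, recovered exactly (cf. the operands used in Fig. 3).
print(clock_division_multiply(3, 1, 2))   # 0.1875
```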
2308.13692
Enhanced Spin Hall Ratio in Two-Dimensional Semiconductors
The conversion efficiency from charge current to spin current via spin Hall effect is evaluated by the spin Hall ratio (SHR). Through state-of-the-art $ab~initio$ calculations involving both charge conductivity and spin Hall conductivity, we report the SHRs of the III-V monolayer family, revealing an ultrahigh ratio of 0.58 in the hole-doped GaAs monolayer. In order to find more promising 2D materials, a descriptor for high SHR is proposed and applied to a high-throughput database, which provides the fully-relativistic band structures and Wannier Hamiltonians of 216 exfoliable monolayer semiconductors and has been released to the community. Among potential candidates for high SHR, the MXene monolayer Sc$_2$CCl$_2$ is identified with the proposed descriptor and confirmed by computation, demonstrating the descriptor validity for high SHR materials discovery.
Jiaqi Zhou, Samuel Poncé, Jean-Christophe Charlier
2023-08-25T22:21:27Z
http://arxiv.org/abs/2308.13692v2
# Enhanced Spin Hall Ratio in Two-Dimensional III-V Semiconductors ###### Abstract Spin Hall effect plays a critical role in spintronics since it can convert charge current to spin current. Using state-of-the-art _ab initio_ calculations including quadrupole and spin-orbit coupling, the charge and spin transports have been investigated in pristine and doped two-dimensional III-V semiconductors. Valence bands induce a strong scattering which limits charge conductivity in the hole-doped system, where spin Hall conductivity is enhanced by the spin-orbit splitting, yielding an ultrahigh spin Hall ratio \(\xi\approx\) 0.9 in GaAs monolayer at room temperature. Introduction -The strength of Hall effect can be denoted by \(\beta\) = tan(\(\theta_{\text{H}}\)) = \(E_{\text{H}}\)/\(E\) where \(\theta_{\text{H}}\) is the Hall angle, \(E_{\text{H}}\) is the transverse Hall field, and \(E\) is the longitudinal electric field [1]. Correspondingly, the strength of spin Hall effect (SHE) can be given by the spin Hall ratio (SHR) as \(\xi\) = tan(\(\theta_{\text{SH}}\)) = \(\frac{2e}{\hbar}\left|\frac{J_{\text{s}}}{J_{\text{c}}}\right|\) where \(\theta_{\text{SH}}\) is the spin Hall angle, \(J_{\text{s}}\) is the transverse spin Hall current density, and \(J_{\text{c}}\) is the longitudinal charge current density. SHR is often used as a proxy to indicate the charge-to-spin conversion efficiency which is crucial for low-power-consumption spintronic applications [2; 3]. Indeed, when \(\theta_{\text{SH}}\) is small, the first-order Taylor polynomial gives \(\xi\approx\theta_{\text{SH}}\), which is a good approximation for the bulk semiconductors and metals where \(\xi\sim\) 0.01 [4; 5; 6]. Recently, enhanced SHR has been revealed in various two-dimensional (2D) van der Waals materials with strong spin-orbit coupling (SOC). Huge SHRs over 10 are reported in topological insulators [7; 8] while large SHR \(\sim\) 0.5 in MoTe\({}_{2}\) and WTe\({}_{2}\) Weyl semimetals have also been theoretically and experimentally identified [9; 10; 11; 12]. Besides, the MoS\({}_{2}\) monolayer can exhibit \(\xi=0.14\) induced by the Rashba-Edelstein effect [13]. Noted that large \(\xi\) will break the approximation \(\xi\approx\theta_{\text{SH}}\) and therefore the spin Hall ratio rather than the spin Hall angle should be used to denote the ratio of spin current to charge current. In addition to the SHR enhancement, 2D materials also provide a platform for unconventional properties which are intertwined with their underlying structural symmetry [14]. 2D semiconductors composed of heavy atoms are promising for SHE: Strong SOC can induce a large spin Hall conductivity (SHC), and the broken-symmetry structure enables the unconventional spin Hall current [15; 16]. In the case of semiconductors, doping is another degree of freedom to effectively control the transport behavior. Although charge transport and SHC have been separately investigated in 2D materials [17; 18; 19; 20; 21], the study of SHR remains elusive due to the entanglement of electron-phonon interaction [22; 23] and SOC [2; 3]. In this Letter, we report the spin Hall ratio in III-V monolayers (MX, M=Ga, In, and X = P, As, Sb) using density functional theory [24], density functional perturbation theory [25], and Wannier functions [26]. The electron-phonon coupling, quadrupole correction [27], Berry connection [28], and SOC are considered in the room-temperature calculations. 
The broken-inversion symmetry and the strong SOC induce a Rashba splitting in the conduction bands, making the band edge analogous to a single valley at the center of Brillouin zone. The weak intravalley scattering results in exceptional electron mobilities over 1000 cm\({}^{2}\)/Vs along with high conductivities over 50 \(e^{2}\)/\(h\) in the electron-doped systems, where a universal SHC of \(-0.5\) (\(\hbar/2e\))\(e^{2}\)/\(h\) has been identified as a hallmark of Rashba system. In contrast, the hole-doped mobilities are significantly suppressed by the strong intervalley scattering, while high SHCs over 2 (\(\hbar/2e\))\(e^{2}\)/\(h\) occur due to the strong spin-orbit splitting, yielding an ultrahigh SHR of \(\xi\approx\) 0.9 in GaAs monolayer. Methods and models -We compute the charge transport properties by solving the iterative Boltzmann transport equation [29] and the spin Hall conductivity using the Kubo formula [30] as implemented in the EPW [31; 32], Wannier90 [33], Quantum ESPRESSO [34], and Abinit[35] codes considering SOC, 2D Coulomb truncation [36], and gauge-covariant quadrupolar contributions [37]. Additional details are provided in Section S1 of the supplementary information (SI) [38]. Pristine III-V monolayers are semiconductors which crystallize in a low-buckled honeycomb structure [39]. Details of the relaxed atomic structures, effective masses, Rashba constants, electrostatic properties, densities of states, doping levels, and electron and phonon dispersions are given in Section S2 of SI [38]. For reproducibility, all information including input and output files, software, pseudopotentials, and additional details are provided on Materials Cloud Archive [40]. Charge transport -The phonon-limited charge conductivity in doped 2D semiconductor is calculated as [29] \[\sigma_{\alpha\beta}=\frac{-e}{S^{\text{nc}}}\sum_{n}\int\,\frac{\mathrm{d}^{2} \mathbf{k}}{\Omega^{\text{BZ}}}v_{n\mathbf{k}\alpha}\partial_{E_{S}}f_{n \mathbf{k}}, \tag{1}\] where \(\alpha\) and \(\beta\) are Cartesian directions, \(S^{\text{nc}}\) is the unit cell area, \(\Omega^{\text{BZ}}\) is the first Brillouin zone area, and \(v_{n\mathbf{k}\alpha}=\hbar^{-1}\partial\varepsilon_{n\mathbf{k}}/\partial k\alpha\) is the band velocity, \(n\) is the band index. The linear variation of the electronic occupation function \(f_{n\mathbf{k}}\) in response to \(\mathbf{E}\), \(\partial_{E_{S}}f_{n\mathbf{k}}\), can be obtained by solving the Boltzmann transport equation with the scattering lifetime, see details in Eq. (S1) of SI [38]. The scattering rate, which is the inverse of scattering lifetime, is given as \[\tau_{n\mathbf{k}}^{-1} =\frac{2\pi}{\hbar}\sum_{m\nu}\int\frac{\mathrm{d}^{2}\mathbf{q} }{\Omega^{\text{BZ}}}|g_{mn\nu}(\mathbf{k},\mathbf{q})|^{2}\] \[\times\big{[}(n_{\mathbf{q}\nu}+1-f_{m\mathbf{k}+\mathbf{q}}^{0}) \delta(\varepsilon_{n\mathbf{k}}-\varepsilon_{m\mathbf{k}+\mathbf{q}}-\hbar \omega_{\mathbf{q}\nu})\] \[\qquad\quad+(n_{\mathbf{q}\nu}+f_{m\mathbf{k}+\mathbf{q}}^{0}) \delta(\varepsilon_{n\mathbf{k}}-\varepsilon_{m\mathbf{k}+\mathbf{q}}+\hbar \omega_{\mathbf{q}\nu})\big{]}, \tag{2}\] where \(g_{mn\nu}(\mathbf{k},\mathbf{q})\) is the electron-phonon matrix element with phonon \(\omega_{\mathbf{q}\nu}\), \(\varepsilon_{n\mathbf{k}}\) and \(\varepsilon_{m\mathbf{k}+\mathbf{q}}\) are eigenvalues, \(n_{\mathbf{q}\nu}\) is the Bose-Einstein distribution. 
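As a rough illustration of how the two terms in Eq. (2) are assembled (phonon emission weighted by \(n_{\mathbf{q}\nu}+1-f\) and absorption by \(n_{\mathbf{q}\nu}+f\)), the toy sketch below evaluates the contribution of a single scattering channel. It assumes \(\hbar=k_{B}=1\), replaces the delta functions by narrow Gaussians, and uses purely illustrative numbers in place of actual electron-phonon matrix elements or band energies from the paper.

```python
import numpy as np

# Schematic single-channel evaluation of the scattering rate of Eq. (2).
# Energies in eV; coupling, energies and broadening are illustrative placeholders.

def gaussian(x, sigma=0.005):
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def bose(w, T):
    return 1.0 / np.expm1(w / T)

def fermi(e, mu, T):
    return 1.0 / (np.exp((e - mu) / T) + 1.0)

def rate_single_channel(e_nk, e_mkq, w_ph, g, mu, T):
    """Contribution of one (m, q, nu) channel to 1/tau_nk in Eq. (2)."""
    n_q = bose(w_ph, T)
    f_m = fermi(e_mkq, mu, T)
    emission   = (n_q + 1 - f_m) * gaussian(e_nk - e_mkq - w_ph)   # phonon emission
    absorption = (n_q + f_m)     * gaussian(e_nk - e_mkq + w_ph)   # phonon absorption
    return 2 * np.pi * abs(g)**2 * (emission + absorption)

# Example: a carrier 30 meV above the band edge scattering off a 20 meV phonon at ~300 K.
print(rate_single_channel(e_nk=0.030, e_mkq=0.010, w_ph=0.020, g=0.005, mu=0.0, T=0.025))
```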
The drift mobility of a pristine semiconductor is related to the charge conductivity as \(\mu_{\alpha\beta}=\sigma_{\alpha\beta}/(en^{c})\) when the carrier density \(n^{\text{c}}\) is very small such that ionized impurity scattering can be neglected. The carrier mobilities are calculated considering quadrupole correction and Berry connection [37; 28]. Due to crystal symmetry, \(\mu=\mu_{xx}=\mu_{yy}\), \(\sigma=\sigma_{xx}=\sigma_{yy}\) in all the III-V monolayers. Note that \(\mu\) and \(\sigma\) are separately calculated since the applied heavy dopings in this work break the linear relation between them [41]. The room-temperature mobilities of the pristine monolayers are presented in Fig. 1. All the materials exhibit high electron mobilities which are inversely proportional to their effective masses. It should be noted that \(\mu^{\text{e}}\) of GaSb can reach up to 1470 cm\({}^{2}\)/Vs, an exceptional value for a 2D semiconductor [21]. A much larger variation in the hole mobility is observed, with values ranging from 10 cm\({}^{2}\)/Vs in phosphides to 953 cm\({}^{2}\)/Vs in antimonides. Interestingly, the two arsenides present quite different mobilities, while similar values are observed in the two phosphides and antimonides, respectively. The transport behaviors can be intelligibly interpreted within the self-energy relaxation time approximation [29], where the mobility is inversely proportional to the scattering rate \(\tau^{-1}\) and directly proportional to the carrier velocity \(v\). The electronic structures, \(\mathbf{k}\)-resolved scattering rates, and \(\mathbf{k}\)-resolved velocities of the GaSb monolayer are presented as an example of the III-V monolayers. Figures 2(a)-(c) show that in the Fermi surface window of 0.3 eV for the electron mobility, the conduction band minimum (CBM) presents a Rashba splitting which is analogous to a single valley in the Brillouin zone. Hence, only intravalley scattering close to the \(\Gamma\) point is allowed, leading to low scattering rates under 100 THz. Besides, the sharp contour of bands produces high electron velocities around \(\Gamma\). Both factors enable GaSb to present a high \(\mu^{\text{e}}\), which is also found in other III-V monolayers where the CBM is always located at \(\Gamma\). In contrast, Figs. 2(d)-(f) show that the hole motion enables not only intravalley scatterings but also strong intervalley scatterings with a rate reaching up to 500 THz around K points.

Figure 1: Drift mobilities of pristine semiconductors and charge conductivities of doped systems for all the monolayers at 300 K. \(\mu^{\text{e}}\) and \(\mu^{\text{h}}\) denote (a) electron and (b) hole mobilities of pristine semiconductors with square markers (left axis), \(\sigma^{\text{e}}\) and \(\sigma^{\text{h}}\) indicate the conductivities of (a) electron-doped and (b) hole-doped systems with circle markers (right axis).

Figure 2: Electronic structures, \(\mathbf{k}\)-resolved scattering rates \(\sum_{n}\tau_{n\mathbf{k}}^{-1}\), and \(\mathbf{k}\)-resolved velocities \(\sum_{n}v_{n\mathbf{k}}\) for (a)-(c) electron mobility and (d)-(f) hole mobility of the pristine GaSb semiconductor, as well as for charge conductivities of (g)-(i) electron-doped and (j)-(l) hole-doped GaSb systems. Relevant Fermi surface windows are denoted by vertical arrows.

Moreover, 
the positions of valence band maximum (VBM) change with materials between \(\Gamma\) and K points: VBMs of antimonides are found at \(\Gamma\), thus a relatively high \(\mu\)h can be achieved due to the suppressed intervalley scattering. The transition happens in arsenides where the VBMs of InAs and GaAs are located at \(\Gamma\) and K points, respectively. Therefore, the intervalley scattering around K reduces the hole mobility of GaAs, similarly in phosphides. The mode and spectral decompositions of scattering rates show that most of the scatterings are induced by the low-frequency out-of-plane acoustic mode in GaAs, while InAs is a Frohlich-activated material where the dominant scattering originates from the high-frequency longitudinal optical mode. The \(\mathbf{k}\)-resolved scattering rates and velocities as well as spectral decompositions of all the materials are presented in Section S3 of SI [38]. Overall, despite belonging to the same III-V family, the hole mobility of these materials can vary significantly based on their electronic and vibrational properties. Doping is a practical approach to tuning the transport properties of semiconductors [42]. Efficient screening is induced by heavy doping which turns semiconductors into metallic systems where SHE can occur. Considering the density of states, an electron doping of 1 \(\times\) 10\({}^{13}\) cm\({}^{-2}\) and a hole doping of 2 \(\times\) 10\({}^{13}\) cm\({}^{-2}\) are respectively applied to all the materials. The main impact of such doping is the shift of Fermi energy (E\({}_{\text{F}}\)) by a few hundred meV, leaving the crystal structure and electronic bands nearly unaffected. Instead, the heavy doping yields a small phonon softening of the optical modes close to the zone center, suppressing the finite 2D slope in the long-wavelength limit. The charge conductivities of doped systems are shown in Fig. 1. The trend in doped conductivities is the same as that in pristine mobilities, except for the electron conduction in GaSb. The anomalous behavior of GaSb can be understood through the scattering and velocity mechanisms. Figures 2(g)-(i) show that the electron doping induces a shift of 0.38 eV of E\({}_{\text{F}}\), making eigenstates around M and K points closer to the Fermi surface, thus enabling a stronger scattering reaching up to 1400 THz. In addition, velocities around M and K points are limited by the large effective masses. Both factors limit the electron motion and reduce \(\sigma^{\text{e}}\). The analyses above demonstrate that heavy doping can fundamentally alter the transport mechanisms, illustrating the necessity to reconsider linear charge transport relation in these cases. 
Spin Hall conductivity -SHC of a 2D material, with spin current along \(x\), electric field along \(y\), and spin orientation along the \(\alpha\) direction (\(\alpha=y\) or \(z\)), is calculated using the Kubo formula [30]: \[\sigma_{\alpha}=\frac{\hbar}{2e}\frac{e^{2}}{\hbar}\int_{\text{BZ}}\frac{\text{d}^{2}\mathbf{k}}{(2\pi)^{2}}\Omega_{\alpha}(\mathbf{k}), \tag{3}\] where \(\Omega_{\alpha}(\mathbf{k})=\sum_{n}f_{n\mathbf{k}}\Omega_{\alpha,n}(\mathbf{k})\) is the spin Berry curvature, with the band-resolved spin Berry curvature given by \[\Omega_{\alpha,n}(\mathbf{k})=\hbar^{2}\sum_{m\neq n}\frac{-2\,\text{Im}[\langle n\mathbf{k}|\hat{j}_{\alpha}|m\mathbf{k}\rangle\langle m\mathbf{k}|\hat{v}_{y}|n\mathbf{k}\rangle]}{(\varepsilon_{n\mathbf{k}}-\varepsilon_{m\mathbf{k}})^{2}}, \tag{4}\] where \(\hat{j}_{\alpha}=\frac{1}{2}\{\hat{\sigma}_{\alpha}\hat{v}_{x}+\hat{v}_{x}\hat{\sigma}_{\alpha}\}\) is the spin current operator, \(\hat{\sigma}_{\alpha}\) is the Pauli operator, and \(\hat{v}_{x}\) and \(\hat{v}_{y}\) are velocity operators. Neumann's principle states that the symmetries of a physical property must include all the symmetries of the crystal [14]. Namely, the broken symmetries can remove the restrictions on the SHC tensor [43; 44]. Apart from the conventional SHC \(\sigma_{z}\), the lifted mirror symmetry \(\mathcal{M}_{z}\) enables another unconventional tensor element \(\sigma_{y}\), while \(\sigma_{x}\) is prohibited by the preserved \(\mathcal{M}_{x}\). The room-temperature \(\sigma_{y}\) and \(\sigma_{z}\) for all the pristine materials are presented in Fig. 3(a), and SHCs of doped systems are presented in Section S4 of SI [38], demonstrating that, different from the case of charge conductivity, the major effect of doping on SHC is the shift of E\({}_{\text{F}}\). Due to this similarity, SHCs in electron- and hole-doped systems are respectively marked by e-E\({}_{\text{F}}\) and h-E\({}_{\text{F}}\) in Fig. 3(a). The electron-doped InP presents \(\sigma_{y}^{\text{e}}\approx\sigma_{z}^{\text{e}}\), resulting in a canted spin orientation in the \(yz\)-plane with a canted angle of 41\({}^{\circ}\), as shown in Fig. 3(b). In arsenides and antimonides, \(\sigma_{z}^{\text{e}}\) is much larger than \(\sigma_{y}^{\text{e}}\). The conduction bands of arsenides and antimonides produce a universal \(\sigma_{z}^{\text{e}}=\) -0.5 (\(\hbar/2e\))e\({}^{2}\)/\(h\) around e-E\({}_{\text{F}}\), consistent with the literature which reports a universal SHC of \(e\)/8\(\pi\) in the Rashba model [45]. Note that the difference between universal SHC values is caused by the different constants used in the Kubo formula.

Figure 3: (a) Energy-dependent spin Hall conductivities of all the pristine monolayers at 300 K. CBM and VBM of semiconductors are denoted by horizontal dashed lines, and the VBM is set as Fermi energy. Fermi energies of electron-doped and hole-doped systems are marked as e-E\({}_{\text{F}}\) and h-E\({}_{\text{F}}\) by horizontal solid lines. (b) Diagram of the spin Hall current with canted spin in the \(yz\)-plane. (c) Spin Berry curvature of GaSb at h-E\({}_{\text{F}}\).

GaAs and InAs have the most robust \(\sigma_{z}^{\rm e}\) over a large energy range since their conduction bands present a deep valley at the \(\Gamma\) point, while the interference of non-Rashba bands will break the universal SHC as observed in phosphides. We use a Rashba Hamiltonian to show that the universal SHC is robust against temperature once both Rashba bands are fully occupied, see details in Section S5 of SI [38]. 
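For readers who want to see Eq. (4) in action, the sketch below evaluates the band-resolved spin Berry curvature numerically for a generic two-band Rashba Hamiltonian \(H(\mathbf{k})=k^{2}/2m+\alpha_{R}(\sigma_{x}k_{y}-\sigma_{y}k_{x})\) with \(\hbar=1\). The mass and Rashba parameter are arbitrary illustrative values, not fitted to any of the III-V monolayers studied here.

```python
import numpy as np

# Band-resolved spin Berry curvature Omega_{z,n}(k) of Eq. (4) for a toy Rashba model (hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

m, alpha_R = 1.0, 0.5   # illustrative parameters

def H(kx, ky):
    return (kx**2 + ky**2) / (2 * m) * I2 + alpha_R * (sx * ky - sy * kx)

def velocities(kx, ky):
    vx = kx / m * I2 - alpha_R * sy   # v_x = dH/dk_x
    vy = ky / m * I2 + alpha_R * sx   # v_y = dH/dk_y
    return vx, vy

def spin_berry_curvature_z(kx, ky):
    """Return band energies and Omega_{z,n}(k) for both Rashba bands."""
    e, U = np.linalg.eigh(H(kx, ky))
    vx, vy = velocities(kx, ky)
    jz = 0.5 * (sz @ vx + vx @ sz)    # spin current operator with spin along z
    omega = np.zeros(2)
    for n in range(2):
        for mm in range(2):
            if mm == n:
                continue
            num = (U[:, n].conj() @ jz @ U[:, mm]) * (U[:, mm].conj() @ vy @ U[:, n])
            omega[n] += -2 * num.imag / (e[n] - e[mm]) ** 2
    return e, omega

# Example: curvature of the two bands at a k-point away from Gamma (avoiding the degeneracy at k = 0).
print(spin_berry_curvature_z(0.3, 0.1))
```

Summing \(f_{n\mathbf{k}}\Omega_{z,n}(\mathbf{k})\) over bands and integrating over the Brillouin zone as in Eq. (3) then gives the SHC for this toy model; the snippet is only meant to show the structure of the calculation, not to reproduce the ab initio values quoted in the text.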
Spin canted angles for all the materials can be found in Table S6 of SI [38]. In hole-doped systems, the strong spin-orbit splitting enhances both \(\sigma_{y}^{\rm h}\) and \(\sigma_{z}^{\rm h}\). In most cases, \(\sigma_{z}^{\rm h}\) still dominates the SHC, and GaSb presents the largest \(\sigma_{z}^{\rm h}=2.2\) (\(\hbar/2e\))\(e^{2}\)/\(h\). For comparison, the MoS\({}_{2}\) monolayer exhibits \(\sigma_{z}^{\rm h}\approx 0.2\) (\(\hbar/2e\))\(e^{2}\)/\(h\), which is the only allowed SHC tensor component due to the constraint of high symmetry [17]. Figure 3(c) illustrates that in GaSb the spin Berry curvatures \(\Omega_{z}\) predominantly originate from the region around \(\Gamma\) where spin-orbit splitting occurs and h-E\({}_{\rm F}\) lies inside the spin-orbit-splitting gap, see Fig. 2(j). In addition, the K point also makes a contribution since h-E\({}_{\rm F}\) lies inside the splitting gap at K. The sign-invariant \(\Omega_{z}\) results in a large SHC, which is an integral of \(\Omega_{z}\) over all the \(\mathbf{k}\)-points. Spin textures and \(\Omega_{\alpha}\) decompositions of all the materials are presented in Section S6 of SI [38]. The discussions above demonstrate that the broken symmetry leads to the unconventional SHE, and that doping can produce robust and large SHCs in III-V monolayers. Spin Hall ratio -After obtaining the charge conductivities \(\sigma\) and spin Hall conductivities \(\sigma_{\alpha}\), the spin Hall ratio \(\xi_{\alpha}=\frac{2e}{\hbar}\left|\frac{\sigma_{\alpha}}{\sigma}\right|\) can be discussed in the doped systems. Figure 4(a) shows that in the electron-doped systems, both \(\xi_{y}^{\rm e}\) and \(\xi_{z}^{\rm e}\) are less than 0.01 due to the high charge conductivity. In the case of hole doping, a few promising candidates are identified in Fig. 4(b). For the spin-\(y\) component, we find \(\xi_{y}^{\rm h}\approx 0.1\) in most materials, while the out-of-plane spin-\(z\) component varies much more between materials. Large \(\sigma_{z}^{\rm h}\) values are found in antimonides, which also possess fairly large \(\sigma^{\rm h}\), thus limiting \(\xi_{z}^{\rm h}\approx 0.2\). In contrast, hole-doped arsenides are perfect candidates with high SHCs and low charge conductivities, yielding exceptional \(\xi_{z}^{\rm h}\approx 0.9\) and 0.5 in GaAs and InAs, respectively. Compared with heavy metals, where \(\xi\approx 0.01\) and where only the conventional SHE is allowed, hole-doped GaAs and the other III-V monolayers exhibit great potential for low-power-consumption spintronic applications, since both a large SHR and canted spin are crucial to realize field-free magnetization switching [16]. Experimental feasibility -Charge-to-spin conversion has been realized in MoS\({}_{2}\) and WSe\({}_{2}\) monolayers grown by chemical vapor deposition [13]. Since the first synthesis of 2D AlN layers by metal organic deposition [46], many efforts have been devoted to the synthesis of other 2D III-V materials [47; 48]. For example, GaSb films can be grown via a seeded lateral epitaxy, and free-standing crystalline GaSb can be exfoliated from these films [49]. Moreover, 2D InAs flakes with high crystalline quality have been synthesized through van der Waals epitaxy with a thickness down to 4.8 nm [50]. Due to the chemical similarity within the family, we expect similar techniques can be applied to the other III-V monolayers. Finally, the doping levels proposed in this work can be achieved via an advanced electron-beam technique, at densities in excess of \(\pm 1\times 10^{13}\) cm\({}^{-2}\) [51]. 
The doped state persists even after removing the electron beam and back-gate voltage, and the process is reversible and repeatable [51]. In conclusion, we compute the drift mobility, charge conductivity, spin Hall conductivity, and spin Hall ratio in III-V monolayers. Rashba splitting explains the exceptional electron mobilities in pristine semiconductors, ranging from 461 to 1470 cm\({}^{2}\)/Vs, along with the high conductivities in the electron-doped regimes. The hole mobilities are much lower due to the intervalley scatterings, causing negligible charge conductivities in the hole-doped cases. For spin transport, the electron-doped systems exhibit a universal SHC, and the hole-doped systems present large values over 2 (\(\hbar/2e\))\(e^{2}\)/\(h\) thanks to the strong spin-orbit splitting. Consequently, efficient charge-to-spin conversions can be realized in hole-doped systems, and an ultrahigh SHR \(\xi_{z}^{\rm h}\approx\) 0.9 has been found in the GaAs monolayer. Moreover, the broken symmetry of III-V monolayers allows for an unconventional SHE where the spins are canted in the \(yz\)-plane. This work highlights the fascinating charge and spin transport characteristics of III-V monolayers and demonstrates their applicability in electronic and spintronic devices. The interplay between electronic and vibrational properties presented in this study could be used as a surrogate model to predict transport properties with enhanced spin Hall ratio via high-throughput calculations and machine learning.

Figure 4: Spin Hall ratios of all the monolayers doped by (a) electron and (b) hole. The subscripts indicate the results of two SHC tensor elements, \(\sigma_{y}\) and \(\sigma_{z}\).

Acknowledgment -The authors would like to thank Xi Dai, Matteo Giantomassi, Junfeng Qiao and Matthieu J. Verstraete for fruitful discussions. S. P. acknowledges the support from the Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS). We acknowledge financial support from the European Union's Horizon 2020 Research Project and Innovation Program-Graphene Flagship Core3 (No. 881603), from the Federation Wallonie-Bruxelles through the ARC Grant (No. 21/26-116) and the EOS project "CONNECT" (No. 40007563), and from the Belgian F.R.S.-FNRS through the research project (No. T.029.22F). Computational resources have been provided by the PRACE award granting access to MareNostrum4 at Barcelona Supercomputing Center (BSC), Spain and Discoverer in SofiaTech, Bulgaria (OptoSpin project ID. 2020225411), and by the Consortium des Equipements de Calcul Intensif (CECI), funded by the F.R.S.-FNRS under Grant No. 2.5020.11 and by the Walloon Region, as well as computational resources awarded on the Belgian share of the EuroHPC LUMI supercomputer.
2303.11491
Completely Positive Map for Noisy Driven Quantum Systems Derived by Keldysh Expansion
Accurate modeling of decoherence errors in quantum processors is crucial for analyzing and improving gate fidelities. To increase the accuracy beyond that of the Lindblad dynamical map, several generalizations have been proposed, and the exploration of simpler and more systematic frameworks is still ongoing. In this paper, we introduce a decoherence model based on the Keldysh formalism. This formalism allows us to include non-periodic drives and correlated quantum noise in our model. In addition to its wide range of applications, our method is also numerically simple, and yields a CPTP map. These features allow us to integrate the Keldysh map with quantum-optimal-control techniques. We demonstrate that this strategy generates pulses that mitigate correlated quantum noise in qubit state-transfer and gate operations.
Ziwen Huang, Yunwei Lu, Anna Grassellino, Alexander Romanenko, Jens Koch, Shaojiang Zhu
2023-03-20T23:05:24Z
http://arxiv.org/abs/2303.11491v4
# Completely Positive Map for Noisy Driven Quantum Systems Derived by Keldysh Expansion ###### Abstract Accurate modeling of decoherence errors in quantum processors is crucial for analyzing and improving gate fidelities. To increase the accuracy beyond that of the Lindblad dynamical map, several generalizations have been proposed, and the exploration of simpler and more systematic frameworks is still ongoing. In this paper, we introduce a decoherence model based on the Keldysh formalism. This formalism allows us to include non-periodic drives and correlated quantum noise in our model. In addition to its wide range of applications, our method is also numerically simple, and yields a CPTP map. These features allow us to integrate the Keldysh map with quantum-optimal-control techniques. We demonstrate that this strategy generates pulses that mitigate correlated quantum noise in qubit state-transfer and gate operations. ## I Introduction The ubiquity of decoherence errors in current quantum computing platforms poses a bottleneck for performing error-correctable quantum computation [1]. Further reducing these errors relies on modeling them accurately, which is challenging in the presence of complicated drive and noise backgrounds. For example, if the quantum system is strongly driven, or the noise (quantum and classical) is correlated, the widely-used Lindblad master equation is generally not applicable [2; 3; 4; 5; 6; 7; 8]. To obtain more accurate predictions, recent research is exploring generalizations of the Lindblad master equation, where the constant damping operators and rates in the original form are replaced by time-dependent ones [2; 3; 4; 9; 10; 11; 12]. Impressively, after a more careful treatment of the bath degrees of freedom than that in the Lindblad formalism, the master equation not only becomes compatible with drives and correlated noise, but also maintains the property of generating completely positive and trace-preserving (CPTP) maps [2; 3; 12]. In addition to this route, a formalism based on filter functions has been developed to model errors caused by correlated noise. This formalism can predict the sensitivity of the driven system to noise at different frequencies [6; 7; 13; 14; 15; 16; 17; 18]. As a comparison, the filter-function method usually requires fewer integrals, and has a clearer physical picture of how noise at different frequencies contributes differently. However, this method mostly focuses on classical (or dephasing) noise, and does not always guarantee the CPTP character of the map. In this paper, we present a decoherence model which combines the advantages of the two routes mentioned above, and is tailored for optimizing gate operations. Our method belongs to the filter-function category, while the Keldysh technique [9; 19] used here extends the scope of the formalism in Refs. [6; 14] to quantum noise. Furthermore, the map derived by this method is guaranteed to be CPTP, owing to a special secular approximation to the filter functions. Such an approximation also significantly simplifies the calculation, which further allows us to explore error-mitigation strategies by integrating our method with the technique of quantum optimal control [7; 20]. Using a few examples, we show that such a combination can generate pulses that suppress decoherence errors induced by correlated quantum noise. The paper is structured as follows. In Sec. II, we outline the derivation of the Keldysh map. The main results are summarized in Eqs. (18) and (19). In Sec. 
III, we apply our method to a variety of quantum systems, which not only reproduces some familiar results, but also extends the prediction of decoherence errors to several less familiar situations. In Sec. IV, we integrate the Keldysh method with the quantum-optimal-control technique, and demonstrate improvement of gate and state-transfer fidelities via the optimization of drive pulses. ## II Deriving Keldysh maps ### Formal Keldysh expansion We start by deriving the formal map for the qubit density matrix using the Keldysh expansion. The Hamiltonian of the full system is \[\hat{H}(t)=\hat{H}_{s}(t)+\hat{H}_{B}+\epsilon\hat{H}_{I}, \tag{1}\] where \(\hat{H}_{s}(t),\hat{H}_{B},\hat{H}_{I}\) denote the Hamiltonians for a driven quantum system, bath, and interaction. The system Hamiltonian \(\hat{H}_{s}(t)=\hat{H}_{s0}+\hat{H}_{d}(t)\) comprises the static Hamiltonian \(\hat{H}_{s0}\) and the drive operator \(\hat{H}_{d}(t)\). For simplicity, we specify the system-bath interaction by \(\hat{H}_{I}=\hat{x}\hat{\eta}\), where \(\hat{x}\) and \(\hat{\eta}\) are the system and bath operators, respectively [21]. The small dimensionless parameter \(\epsilon\) is used to keep track of the order. In Eq. (1), we assume that the interaction is weak and can be treated perturbatively. To conveniently perform the perturbative calculation, we first move to the interaction picture with the unperturbed propagator \(\hat{U}_{0}(t)=\hat{U}_{s}(t)\otimes\hat{U}_{B}(t)\), where the partial propagator for the system is \(\hat{U}_{s}(t)=\mathcal{T}\exp[-i\int_{0}^{t}dt^{\prime}\hat{H}_{s}(t^{\prime })]\) and bath \(\hat{U}_{B}(t)=\exp[-i\hat{H}_{B}t]\). In this rotating frame, the reduced qubit density matrix at time \(\tau\) is: \[\tilde{\rho}_{s}(\tau)=\text{Tr}_{B}\big{\{}\tilde{U}_{I}(\tau)\,\tilde{\rho} _{s}(0)\otimes\tilde{\rho}_{B}(0)\,\tilde{U}_{I}^{\dagger}(\tau)\big{\}}. \tag{2}\] Here, the interaction-picture propagator is given by \(\tilde{U}_{I}(\tau)=\mathcal{T}\exp[-i\int_{0}^{\tau}dte\tilde{H}_{I}(t)]\) and \(\tilde{H}_{I}(t)=\tilde{U}_{0}^{\dagger}(t)\hat{H}_{I}\tilde{U}_{0}(t)\) is the system-bath coupling term in the interaction picture. We use \(\tilde{\rho}_{s}(0)\) and \(\tilde{\rho}_{B}(0)\) to denote the initial partial density matrices for the system and bath. Note that in this work, we assume that there is no entanglement between the system and bath initially, and the bath is prepared in its thermal equilibrium \(\hat{\rho}_{B,\text{eq}}\). To evaluate this formal expression, we expand \(\tilde{U}_{I}(\tau)=\sum_{\nu}\tilde{U}_{I}^{(\nu)}(\tau)\) as a Dyson series, where the \(\nu\)th term \(\tilde{U}_{I}^{(\nu)}(\tau)\) is given by \[\tilde{U}_{I}^{(\nu)}(\tau) = (-i)^{\nu}\epsilon^{\nu}\int_{0}^{\tau}dt_{1}\tilde{H}_{I}(t_{1}) \int_{0}^{t_{1}}dt_{2}\tilde{H}_{I}(t_{2}) \tag{3}\] \[\cdots\times\int_{0}^{t_{\nu-1}}dt_{\nu}\tilde{H}_{I}(t_{\nu}).\] Inserting this into Eq. (2), we further expand the qubit density matrix as \[\tilde{\rho}_{s}(\tau)=\sum_{\nu^{\prime},\nu^{\prime\prime}}\text{Tr}_{B}\left\{ \tilde{U}_{I}^{(\nu^{\prime})}(\tau)\,\tilde{\rho}_{s}(0)\otimes\tilde{\rho}_{ B}(0)\,\tilde{U}_{I}^{(\nu^{\prime\prime})\dagger}(\tau)\right\}. 
\tag{4}\] To simplify this expression, we define the \(\nu\)th-order map and the sum map as \[\mathbf{\Pi}^{(\nu)}(\tau)\mathbf{\cdot}\equiv\sum_{\nu^{\prime}+\nu^{ \prime\prime}=\nu}\text{Tr}_{B}\left\{\tilde{U}_{I}^{(\nu^{\prime})}(\tau) [\mathbf{\cdot}\otimes\tilde{\rho}_{B}(0)]\,\tilde{U}_{I}^{(\nu^{\prime\prime}) \dagger}(\tau)\right\},\] \[\mathbf{\Pi}(\tau)=\sum_{\nu\in\mathbb{N}}\mathbf{\Pi}^{(\nu)}(\tau) \tag{5}\] which casts Eq. (4) into \[\tilde{\rho}_{s}(\tau)=\mathbf{\Pi}(\tau)\tilde{\rho}_{s}(0). \tag{6}\] Above, \(\mathbf{\Pi}^{(\nu)}(\tau)\) only contains terms of order \(\epsilon^{\nu}\). For \(\nu=0\), we have \(\mathbf{\Pi}^{(0)}(t)=\mathbf{I}_{s}\) (i.e., the superoperator-identity acting on density matrices \(\mathbf{I}_{s}\tilde{\rho}_{s}=\tilde{\rho}_{s}\)), while higher-order terms describe the decoherence effects due to the system-bath coupling. Although we can in principle use Eq. (5) to calculate the map \(\mathbf{\Pi}(\tau)\) to arbitrary order, it is usually not the most convenient quantity to extract physical measurables from, according to the discussion in Refs. [9; 22; 23]. Instead, we follow the Keldysh theory and define the self-energy \[\mathbf{\Sigma}(\tau)\equiv\ln[\mathbf{\Pi}(\tau)], \tag{7}\] where redundant terms in higher-order expansions can be conveniently identified, and the derivation of quantities such as relaxation rates is easier [9; 22; 23]. Below, we will focus on \(\mathbf{\Sigma}(\tau)\). Similar to \(\mathbf{\Pi}(\tau)\), the self-energy \(\mathbf{\Sigma}(\tau)\) can be expanded in powers of \(\mathbf{\epsilon}\) by \(\mathbf{\Sigma}(\tau)=\sum_{\nu}\mathbf{\Sigma}^{(\nu)}(\tau)\), where \(\mathbf{\Sigma}^{(\nu)}(\tau)\) can be derived from \(\mathbf{\Pi}^{(\nu)}(\tau)\) by a Taylor expansion. For example, the lowest two orders are related by \[\mathbf{\Sigma}^{(1)}(\tau)=\mathbf{\Pi}^{(1)}(\tau),\quad\mathbf{\Sigma}^{(2)}(\tau)=\bm {\Pi}^{(2)}(\tau)-\frac{1}{2}\Big{[}\mathbf{\Pi}^{(1)}(\tau)\Big{]}^{2}.\] These relations can be further simplified, if we assume that the noise has a zero mean, \(\text{Tr}_{B}\{\tilde{\eta}(t)\tilde{\rho}_{B}(0)\}=0\). In that case, the first-order map \(\mathbf{\Pi}^{(1)}(\tau)\) vanishes, resulting in the following simplified relations \[\mathbf{\Sigma}^{(1)}(\tau)=0,\quad\mathbf{\Sigma}^{(2)}(\tau)=\mathbf{\Pi}^{(2)}(\tau). \tag{8}\] ### Second-order truncation For most experiments involving gate operations and state transfer, it is usually sufficient to estimate the decoherence error up to leading order. In the following, we focus on the leading-order self-energy \(\mathbf{\Sigma}^{(2)}(\tau)\). Explicitly, the second-order self-energy takes the form \[\mathbf{\Sigma}^{(2)}(\tau)\tilde{\rho}_{s}(0)=\text{Tr}_{B}\Big{\{} \tilde{U}_{I}^{(2)}(\tau)\tilde{\rho}_{s}(0)\otimes\tilde{\rho}_{B}(0)\\ +\tilde{\rho}_{s}(0)\otimes\tilde{\rho}_{B}(0)\tilde{U}_{I}^{(2) \dagger}(\tau)\\ +\tilde{U}_{I}^{(1)}(\tau)\tilde{\rho}_{s}(0)\otimes\tilde{\rho}_{ B}(0)\tilde{U}_{I}^{(1)\dagger}(\tau)\Big{\}}. \tag{9}\] With the knowledge of the noise spectrum, we can further expand the right-hand side of Eq. (9). 
For example, the first term can be expressed as \[\text{Tr}_{B}\Big{\{}\tilde{U}_{I}^{(2)}(\tau)\tilde{\rho}_{s}(0) \Big{\}} =(-i)^{2}\!\!\int_{0}^{\tau}\!\!dt_{1}\!\int_{0}^{t_{1}}\!\!dt_{2}\, \tilde{x}(t_{1})\tilde{x}(t_{2})\tilde{\rho}_{s}(0)\\ \times\epsilon^{2}\text{Tr}_{B}\{\tilde{\eta}(t_{1})\tilde{\eta}(t_ {2})\tilde{\rho}_{B,\text{eq}}\},\] \[=-\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}S_{B}(\omega)\int_{0}^ {\tau}\!\!dt_{1}\!\int_{0}^{t_{1}}\!\!dt_{2}\\ \times\tilde{x}(t_{1})\tilde{x}(t_{2})\tilde{\rho}_{s}(0)e^{-i \omega(t_{1}-t_{2})}, \tag{10}\] and the other two can be derived similarly [see full expansion in Appendix A]. In the equation above, \(S_{B}(\omega)\equiv\epsilon^{2}\int_{-\infty}^{\infty}dt\text{Tr}_{B}\{\hat{\rho} _{B,\text{eq}}\tilde{\eta}(t)\tilde{\eta}(0)\}\exp(i\omega t)\) is the noise spectrum, and the interaction-picture operators are derived as \(\tilde{x}(t)=\tilde{U}_{S}^{\dagger}(t)\tilde{x}\tilde{U}_{S}(t)\) and \(\tilde{\eta}(t)=\tilde{U}_{B}^{\dagger}(t)\tilde{\eta}\tilde{U}_{B}(t)\). Given the information of the noise spectrum \(S_{B}(\omega)\) and system propagator \(\tilde{U}_{s}(t)\), we can use Eqs. (9) and (10) to calculate the approximated dynamical map, i.e., \[\mathbf{\Pi}(\tau)\approx\exp\Big{[}\mathbf{\Sigma}^{(2)}(\tau)\Big{]}. \tag{11}\] This calculation is reminiscent of the filter-function method shown in Refs. [6; 7; 13; 14; 15]. For example, the double integral \[\int_{0}^{t}\int_{0}^{t_{1}}dt_{1}dt_{2}\tilde{x}(t_{1})\tilde{x}(t_{2})\exp[-i \omega(t_{1}-t_{2})]\] is closely related to the filter functions studied there. (See Appendix B for a detailed discussion of the connection to these theories.) For comparison, the time ordering of the two coupling operators \(\tilde{\eta}(t_{1})\) and \(\tilde{\eta}(t_{2})\) in the Keldysh expansion resolves the asymmetric noise spectrum for a non-classical noise source. Therefore, Eq. (11) extends the filter-function formalism beyond classical noise. However, there are two problems associated with Eq. (11). First, this map is not necessarily CPTP; second, evaluating the triple integral in Eq. (10) is usually not numerically efficient. In the next section, we show that appropriate approximations can solve both problems. ### Fourier expansion and secular approximation Our strategy for solving the aforementioned problems is based on the Fourier expansion of \[\tilde{x}(t)=\sum_{k}\tilde{x}_{k}\exp(-ik\omega_{p}t), \tag{12}\] where we define the fundamental frequency \(\omega_{p}=2\pi/\tau\). Inserting it into Eqs. (10) and (9), we find \[\mathbf{\Sigma}^{(2)}(\tau)\tilde{\rho}_{s}(0) = -\sum_{kk^{\prime}}\tilde{x}_{k}\tilde{x}_{k^{\prime}}\tilde{\rho }_{s}(0)\!\!\int_{-\infty}^{\infty}\!\!\!\frac{d\omega}{2\pi}S_{B}(\omega)I_{-k,k^{\prime}}(\omega) \tag{13}\] \[-\sum_{kk^{\prime}}\tilde{\rho}_{s}(0)\tilde{x}_{k^{\prime}} \tilde{x}_{k}\!\!\int_{-\infty}^{\infty}\!\!\!\frac{d\omega}{2\pi}S_{B}(\omega )I_{k,-k^{\prime}}^{*}(\omega)\] \[+\!\sum_{kk^{\prime}}\tilde{x}_{k}\tilde{\rho}_{s}(0)\tilde{x}_{ k^{\prime}}\] \[\times\int_{-\infty}^{\infty}\!\!\!\frac{d\omega}{2\pi}S_{B}( \omega)\big{[}I_{k,-k^{\prime}}^{*}(\omega)+I_{-k^{\prime},k}(\omega)\big{]}.\] Here, \(I_{k,k^{\prime}}(\omega)\) is the filter function defined by \[I_{k,k^{\prime}}(\omega)\equiv\int_{0}^{\tau}\!\!dt_{1}\!\!\int_{0}^{t_{1}}\! \!\!dt_{2}\,e^{i(k\omega_{p}-\omega)t_{1}-i(k^{\prime}\omega_{p}-\omega)t_{2}}. \tag{14}\] (The analytical evaluation of this integral is discussed in Appendix A.) 
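As a sanity check on Eq. (14), the short sketch below evaluates \(I_{k,k^{\prime}}(\omega)\) by straightforward numerical integration (the inner \(t_{2}\) integral is accumulated with a cumulative trapezoid rule); the values of \(\tau\), \(\omega\), \(k\) and \(k^{\prime}\) are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Numerical evaluation of the filter function of Eq. (14):
#   I_{k,k'}(w) = \int_0^tau dt1 \int_0^{t1} dt2 exp[i(k w_p - w) t1 - i(k' w_p - w) t2],
# with w_p = 2*pi/tau.  Parameters below are illustrative, not taken from the paper.

def filter_function(omega, k, kp, tau, n_steps=4000):
    t = np.linspace(0.0, tau, n_steps)
    wp = 2.0 * np.pi / tau
    inner = np.exp(-1j * (kp * wp - omega) * t)                  # integrand in t2
    inner_cum = cumulative_trapezoid(inner, t, initial=0.0)      # \int_0^{t1} dt2 ...
    outer = np.exp(1j * (k * wp - omega) * t) * inner_cum        # integrand in t1
    return np.trapz(outer, t)

tau = 1.0
for k, kp in [(0, 0), (1, 0), (2, 1)]:
    print(f"I_{k},{kp}(omega=3.0) =", filter_function(3.0, k, kp, tau))
```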
For \(k=k^{\prime}\) and \(k\neq k^{\prime}\), the filter functions \(I_{k,k^{\prime}}(\omega)\) behave differently, which is worth careful inspection. _Diagonal filter functions.-_ For \(k=k^{\prime}\), the filter functions \(I_{k,k^{\prime}}(\omega)\) can be cast into the following form \[I_{k,k}(\omega)=K^{R}(\omega-k\omega_{p})+iK^{I}(\omega-k\omega_{p}), \tag{15}\] where the real and imaginary parts are given by: \[K^{R}(\omega)=\frac{\tau^{2}}{2}\text{sinc}^{2}\left(\frac{\omega\tau}{2} \right),\ \ K^{I}(\omega)=-\frac{\tau}{\omega}[1-\text{sinc}(\omega\tau)]. \tag{16}\] We illustrate the behavior of \(K^{R}(\omega)\) (solid line) and \(K^{I}(\omega)\) (dashed line) in Fig. 1. Panel (a) shows the real and imaginary parts of \(I_{0,0}(\omega)\), respectively. Visibly, the function \(K^{R}(\omega)\) has a predominant peak located at \(\omega=0\). According to Eq. (16), the width of this peak is \(\sim 2\pi/\tau\). Different from the real part, \(\text{Im}\{I_{0,0}(\omega)\}=K^{I}(u)\) flips its sign at \(\omega=0\), showing both a peak and a valley. Compared to \(K^{R}(\omega)\lesssim 2|\omega|^{-2}\) in the limit \(|\omega|\gg\omega_{p}\), \(K^{I}(\omega)\) decays more slowly as \(|K^{I}(\omega)|\sim\tau|\omega|^{-1}\). _Off-diagonal filter functions.-_ The off-diagonal elements \(I_{k,k^{\prime}}(\omega)\) (\(k\neq k^{\prime}\)) have three distinctive behaviors: (1) their amplitudes are smaller, and decrease as \(|I_{k,k^{\prime}}(\omega)|\lesssim\tau^{2}/(2\pi|k-k^{\prime}|)\) for larger \(|k-k^{\prime}|\) (see Appendix A). In the limit of \(|\omega-k\omega_{p}|\), \(|\omega-k^{\prime}\omega_{p}|\gg|k-k^{\prime}|\omega_{p}\), they have a fast \(|\omega-k\omega_{p}|^{-2}\) decay. (2) The peaks (valleys) are spread over a wider frequency range; the width of this frequency range is approximately \(|k-k^{\prime}|\omega_{p}\). (3) The off-diagonal filter functions elements have net-zero integrals, namely \[\int_{-\infty}^{\infty}\!\!\frac{d\omega}{2\pi}I_{k,k^{\prime}}( \omega) =\int_{0}^{\tau}\!\!dt_{1}\!\!\int_{0}^{t_{1}}\!\!\!dt_{2}\,e^{ik \omega_{p}t_{1}-ik^{\prime}\omega_{p}t_{2}}\delta(t_{1}-t_{2})\] \[=\frac{1}{2}\int_{0}^{\tau}dt_{1}e^{-i(-k+k^{\prime})\,\omega_{p} t_{1}}=\frac{1}{2}\tau\delta_{k,k^{\prime}}. \tag{17}\] All these properties can be observed in Fig. 1 (a)-(c) for different values of \(|k-k^{\prime}|\). Based on these three features, we arrive at the following conclusion: if variations of \(S_{B}(\omega)\) are insignificant over the frequency scale of a few \(\omega_{p}\), the off-diagonal elements of \(\phi_{k,k^{\prime}}\equiv\int_{-\infty}^{\infty}(d\omega/2\pi)I_{k,k^{\prime}}( \omega)S_{B}(\omega)\) (\(k-k\neq 0\)) have negligible amplitude. We justify this claim in three steps. First, for large \(|k-k^{\prime}|\), the amplitudes of \(I_{k,k^{\prime}}(\omega)\) are small, rendering a negligible \(\phi_{k,k^{\prime}}\). Second, for terms with small but nonzero \(|k-k^{\prime}|\), the slow-varying \(S_{B}(\omega)\) allows us to treat it as quasi-constant. Third, using the property of net-zero area in Eq. (17), the integral \(\phi_{k,k^{\prime}}\) vanishes for small but non-zero \(|k-k^{\prime}|\). Since \(\phi_{k,k^{\prime}}\) are the coefficients of terms shown in Eq. (13), we conclude that all off-diagonal terms are less important than the diagonal ones in that expansion, if the spectrum is sufficiently smooth at the resolution determined by \(\omega_{p}\). After neglecting the terms with off-diagonal filter functions, we simplify Eq. 
(13) to Figure 1: The filter functions \(I_{k,k^{\prime}}(\omega)\). The real and imaginary parts are shown as solid and dashed curves, respectively. From (a) to (c), \(|k-k^{\prime}|\) is chosen as 0, 1 and 4, respectively. \[\Sigma^{(2)}(\tau)\tilde{\rho}_{s}(0)\approx\Sigma^{(2)}_{\text{CP}} (\tau)\tilde{\rho}_{s}(0) = \sum_{k\in\mathbb{Z}}\left[\tilde{x}_{k}\tilde{\rho}_{s}(0)\tilde{x }_{k}^{\dagger}-\frac{1}{2}\tilde{x}_{k}^{\dagger}\tilde{x}_{k}\tilde{\rho}_{s }(0)-\frac{1}{2}\tilde{\rho}_{s}(0)\tilde{x}_{k}^{\dagger}\tilde{x}_{k}\right] \left[\,\int_{-\infty}^{\infty}\!\!\frac{d\omega}{2\pi}S_{B}(\omega)2K^{R}( \omega-k\omega_{p})\,\right] \tag{18}\] \[-i\sum_{k\in\mathbb{Z}}\left[\tilde{x}_{k}^{\dagger}\tilde{x}_{k} \tilde{\rho}_{s}(0)-\tilde{\rho}_{s}(0)\tilde{x}_{k}^{\dagger}\tilde{x}_{k} \right]\left[\,\int_{-\infty}^{\infty}\!\!\frac{d\omega}{2\pi}S_{B}(\omega)K^{ I}(\omega-k\omega_{p})\,\right]\] where \(\Sigma^{(2)}_{\text{CP}}(\tau)\) is the simplified second-order self-energy. This step resembles the secular approximation performed in the derivation of the Lindblad master equation - in both cases, small off-diagonal terms are neglected. Since the coefficient \(\int_{-\infty}^{\infty}(d\omega/2\pi)S_{B}(\omega)2K^{R}(\omega-k\omega_{p})\) is strictly positive, the self-energy \(\Sigma^{(2)}_{\text{CP}}(\tau)\) has the form of a Lindbladian (up to an extra time dimension) [24]. Then, according to Ref. [24], the exponential of \(\Sigma^{(2)}_{\text{CP}}\) yields a CPTP map \[\Pi(\tau)\approx\exp\left[\Sigma^{(2)}_{\text{CP}}(\tau)\,\right]. \tag{19}\] In the following, we refer to \(\tilde{x}_{k}=\left[\int_{0}^{\tau}dt\tilde{x}(t)\exp(ik\omega_{p}t)\right]/\tau\) and \(\omega_{k}=k\omega_{p}\) as the _filter operator_ and its corresponding _filter frequency_, respectively. According to the justification of the secular approximation, the map Eq. (19) tends to be more accurate for smoother noise spectrum spectra. However, we observe that even for an \(S_{B}(\omega)\) that exhibits strong peaks, the magnitude of the terms in Eq. (13) with diagonal filter functions can still dominant those with the off-diagonal ones. As a result, the CPTP map (19) is found to qualitatively agree with the full map for many common noise spectra, including those showing strong peaks. This is illustrated in Sec. III.C for the example of \(1/f\) noise. Therefore, although we find it challenging to quantify the magnitude of the approximation error for arbitrary noise spectra, we still adopt Eq. (19) for an estimation if an extreme spectrum is considered; then, the agreement between the CPTP and full maps can be checked for validation. We append two remarks to compare our method with several existing ones. First, although our method is not a differential equation, the derived map is reminiscent of the dynamical map generated by the Lindblad equation [24; 25]. Specifically, the first and second lines in Eq. (18) resemble the damping terms and Lamb shift in the master equation, respectively. A difference is that, our map considers noise contributions from the frequency set \(\{k\omega_{p}|k\in\mathbb{Z}\}\), while the Lindblad master equation only includes noise at system transition frequencies. For a more intuitive comparison, we illustrate the decoherence channels for an undriven and driven qubit in Fig. 2 (a) and (b), respectively (see a detailed description of the qubit in the caption). 
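Before moving on, a minimal numerical sketch may help indicate how Eqs. (18)-(19) can be turned into an explicit map: assemble the secular self-energy as a superoperator (column-stacking convention) from a set of filter operators \(\tilde{x}_k\) and spectral weights \(\gamma_k=\int(d\omega/2\pi)S_B(\omega)2K^R(\omega-k\omega_p)\), then exponentiate. The single \(\hat{\sigma}^-\) channel, the flat toy spectrum, and all numbers below are assumptions made purely for illustration, and the Lamb-shift part of Eq. (18) is omitted.

```python
import numpy as np
from scipy.linalg import expm

def dissipator(L):
    """Column-stacking superoperator of D[L]rho = L rho L^+ - {L^+ L, rho}/2."""
    d = L.shape[0]
    Id, LdL = np.eye(d), L.conj().T @ L
    return np.kron(L.conj(), L) - 0.5 * (np.kron(Id, LdL) + np.kron(LdL.T, Id))

def spectral_weight(S_B, k, tau, wmax=200.0, npts=200001):
    """gamma_k = int dw/(2 pi) S_B(w) 2 K^R(w - k w_p), by simple quadrature."""
    omega_p = 2.0 * np.pi / tau
    w = np.linspace(k * omega_p - wmax, k * omega_p + wmax, npts)
    KR = 0.5 * tau**2 * np.sinc((w - k * omega_p) * tau / (2.0 * np.pi)) ** 2
    return np.sum(S_B(w) * 2.0 * KR) * (w[1] - w[0]) / (2.0 * np.pi)

def cptp_map(filter_ops, weights):
    """Pi(tau) = exp[Sigma_CP^(2)(tau)], Eq. (19), damping part only."""
    d = filter_ops[0].shape[0]
    sigma = np.zeros((d * d, d * d), dtype=complex)
    for xk, gk in zip(filter_ops, weights):
        sigma += gk * dissipator(xk)
    return expm(sigma)

# Toy usage: one decay channel x_k = sigma_- with a flat (white) toy spectrum.
tau = 1.0
sm = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)        # sigma_- in the ordering (|g>, |e>)
gamma = spectral_weight(lambda w: 0.05 * np.ones_like(w), k=0, tau=tau)   # ~ 0.05 * tau
Pi = cptp_map([sm], [gamma])

rho0 = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)      # start in |e><e|
rho_tau = (Pi @ rho0.reshape(-1, order="F")).reshape(2, 2, order="F")
print(rho_tau.real)                                           # ~1 - exp(-gamma) of population moved to |g>
```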
While three damping operators fully describe the decoherence processes in an undriven qubit according to the Lindblad master equation, more damping terms are relevant for a driven one according to Eq. (18). In fact, we show in Sec. III.A that the Lindblad map is a special case of the dynamical maps derived by our method. Second, the form of Eq. (18) is also reminiscent of the coarse-grained master equation [2; 3]. We understand this similarity as follows: the second-order Keldysh expansion used here is comparable to the coarse-graining step detailed in Refs. [2; 3]. Differently, our framework focuses on a single map of the reduced density matrix from \(t=0\) to \(\tau\), rather than its full evolution during \(t\in[0,\tau]\). Therefore, our method tends to require less computational resources due to fewer computational steps, if the map is only needed for one final time \(\tau\). Figure 2: A cartoon comparing the decoherence channels of an undriven and driven qubit. The qubit Hamiltonian is given by \(\hat{H}_{s}(t)=\omega_{q}\hat{\sigma}_{z}/2+\hat{H}_{d}(t)\), where \(\hat{H}_{d}(t)\) describes an arbitrary drive on the qubit. This qubit is coupled to the bath via the operator \(\hat{x}=v_{z}\hat{\sigma}_{z}+v_{x}\hat{\sigma}_{x}\), and the bath spectrum is denoted by \(S_{B}(\omega)\). Panels (a) and (b) illustrate the qubit decoherence channels if the qubit is undriven or driven, respectively. In (a), the three terms in the decomposition \(\tilde{x}(t)=\sum_{k}\upsilon_{x}\hat{\sigma}^{\pm}\exp(\pm i\omega_{q}t)+ \upsilon_{x}\hat{\sigma}_{z}\) result in the damping operators \(\mathbb{D}[\hat{\sigma}^{\pm}]\) and \(\mathbb{D}[\hat{\sigma}_{z}]\), which describe the excitation, decay, and pure-dephasing channels of the qubit, respectively [25]. In (b), more damping channels are relevant, since the decomposition (12) contains more frequency components. For such a general case, the decoherence channels are summarized in the first line of the self-energy (18). ### Total decoherence error Using the map (19), we can conveniently derive the gate error for a noisy quantum processor. Following Ref. [26], this error is expressed as \[E_{\text{gate}}=1-\frac{1}{N_{s}^{2}}\operatorname{Tr}\Big{\{}\mathcal{V}_{\text {tg}}^{\dagger}\mathcal{V}_{s}(\tau)\boldsymbol{\Pi}(\tau)\Big{\}}. \tag{20}\] Above, \(N_{s}\) is the dimension of the system Hilbert space, \(\mathcal{V}_{\text{tg}}\equiv\hat{U}_{\text{tg}}\otimes\hat{U}_{\text{tg}}^{\dagger}\) denotes the target superoperator, where \(\hat{U}_{\text{tg}}\) is the target unitary, and \(\mathcal{V}_{s}(\tau)\) is the closed-system map defined by \(\mathcal{V}_{s}(\tau)\equiv\hat{U}_{s}(\tau)\otimes\hat{U}_{s}^{\dagger}(\tau)\). If we only focus on the decoherence contribution to \(E_{\text{gate}}\), we can neglect the possible coherent errors by setting \(U_{\text{tg}}=U_{s}(\tau)\), and reduce Eq. (20) to \[E_{\text{dh}}= 1-\frac{1}{N_{s}^{2}}\operatorname{Tr}\{\boldsymbol{\Pi}(\tau)\}\] \[\approx \frac{1}{N_{s}}\sum_{k}\Big{(}\operatorname{Tr}_{q}\{\tilde{x}_{ k}^{\dagger}\tilde{x}_{k}\}-\frac{1}{N_{s}}\big{|}\operatorname{Tr}_{q}\{ \tilde{x}_{k}\}\big{|}^{2}\Big{)}\operatorname{Re}\{2\phi_{k,k}\}. \tag{21}\] Note that the second line is a leading-order approximation of \(E_{\text{dh}}\) of order \(\epsilon^{2}\). Up to this order, only the real part of \(\phi_{k,k}\) contributes. We interpret the sum in the second line of Eq. (21) as follows. 
The total decoherence error \(E_{\text{dh}}\) is a sum of contributions by noise from frequency bands indexed by \(k\). The \(k\)th band has the approximate bandwidth \(\sim\omega_{p}\) and is centered at \(\omega_{k}\) and [see the filter function in Fig. 1 (a)]. The total noise amplitude over this bandwidth is given by the integral \(2\operatorname{Re}\{\phi_{k,k}\}\). The driven qubit, however, is not equally sensitive to noise from all these frequency bands - according to Eq. (21), we can quantify the sensitivity by the filter strength \[M_{k}\equiv\operatorname{Tr}_{q}\big{\{}\tilde{x}_{k}^{\dagger}\tilde{x}_{k} \big{\}}-\frac{1}{N_{s}}\big{|}\operatorname{Tr}_{q}\{\tilde{x}_{k}\}\big{|}^ {2}, \tag{22}\] which satisfies the conservation rule \(\sum_{k}M_{k}=\operatorname{Tr}_{q}\big{\{}\tilde{x}^{2}\big{\}}-\big{|} \operatorname{Tr}_{q}\{\tilde{x}\}\big{|}^{2}/N_{s}\) for a time-independent coupling operator \(\tilde{x}\). The conservation rule implies that, if only white noise is present, i.e., the noise spectrum \(S_{B}(\omega)=\gamma\) is a constant over frequency, the decoherence error is \[E_{\text{dh}}\approx\frac{1}{N_{s}}\Big{(}\operatorname{Tr}_{q}\big{\{}\tilde {x}^{2}\big{\}}-\frac{1}{N_{s}}\big{|}\operatorname{Tr}_{q}\{\tilde{x}\}\big{|} ^{2}\Big{)}\gamma\tau, \tag{23}\] which increases with \(\tau\) but is independent of the shape of drive applied during \(t\in[0,\tau]\) up to order \(\epsilon^{2}\). For non-Markovian noise [\(S_{B}(\omega)\) is not a constant], however, Eq. (23) does not hold. In this case, different drives generally result in different magnitudes of \(E_{\text{dh}}\). To reduce decoherence errors, one should design pulses such that \(M_{k}\) is suppressed where the integrated noise amplitude \(\operatorname{Re}\{2\phi_{k,k}\}\) is large. In the following sections, most strategies discussed for reducing decoherence are centered around this strategy. ## III Applications In this section, we demonstrate the power of our framework by a few examples from a wide range of applications. Our framework not only reproduces some of the established conclusions, but also extends the prediction to situations which have not be carefully studied by previous theories. ### Static quantum systems We first apply our method to an undriven system, with the main purpose of reproducing the dynamical map derived by the Lindblad master equation. In this case, we set the drive \(\hat{H}_{d}(t)=0\) in Eq. (1), which makes the system Hamiltonian \(\hat{H}_{s}(t)=\hat{H}_{s0}\) time-independent. Following the procedure described in Sec. II, we first derive the interaction-picture coupling operator \(\tilde{x}(t)\), and use it to find the filter operators \(\tilde{x}_{k}\) needed in the derivation of the self-energy (18). For the undriven system, the propagator is given by \[\hat{U}_{s0}(t)=\exp[-iH_{s0}t], \tag{24}\] which yields the coupling operator in the interaction picture \[\tilde{x}(t)=\sum_{\omega_{L},\in\mathbb{F}}\tilde{x}(\omega_{L})e^{-i\omega_{ L}t}. \tag{25}\] Above, the frequencies \(\omega_{L}\) associated with different terms are contained in the frequency set \(\mathbb{F}=\{E_{j}-E_{j^{\prime}}\,|\,0\leq j,j\leq N_{s}\}\). For this expression, \(N_{s}\) is the dimension of the system Hilbert space, and \(E_{j}\) is the eigenenergy of the \(j\)th eigenstate for \(\hat{H}_{s}\). We refer to the elements in \(\mathbb{F}\) as the _transition frequencies_, and the corresponding \(\tilde{x}(\omega_{L})\) as the _damping operator_. 
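One way to generate the expansion (25) numerically for an arbitrary static Hamiltonian is to group the matrix elements of the coupling operator by eigenenergy differences. The sketch below does exactly that; for a qubit with \(\hat{H}=\omega_q\hat{\sigma}_z/2\) transversely coupled through \(\hat{\sigma}_x\) it returns \(\hat{\sigma}^{\mp}\) at \(\omega_L=\pm\omega_q\), in line with the qubit example given further down. The tolerance and numerical values are illustrative.

```python
import numpy as np

def damping_operators(H, x, tol=1e-9):
    """Decompose x~(t) = sum_L x(omega_L) exp(-i omega_L t) for a static H, cf. Eq. (25).
    Returns {omega_L: x(omega_L)} with omega_L = E_j' - E_j running over all level pairs."""
    E, V = np.linalg.eigh(H)
    x_eig = V.conj().T @ x @ V                       # coupling operator in the energy eigenbasis
    n, ops = len(E), {}
    for j in range(n):
        for jp in range(n):
            wL = E[jp] - E[j]
            key = next((w for w in ops if abs(w - wL) < tol), wL)   # group (near-)degenerate frequencies
            block = np.zeros((n, n), dtype=complex)
            block[j, jp] = x_eig[j, jp]
            ops[key] = ops.get(key, 0) + block
    return {w: V @ op @ V.conj().T for w, op in ops.items()}        # back to the original basis

# Qubit check: H = (wq/2) sigma_z, x = sigma_x  ->  x(+wq) = sigma_-, x(-wq) = sigma_+
wq = 1.0
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
for wL, op in sorted(damping_operators(0.5 * wq * sz, sx).items()):
    print(f"omega_L = {wL:+.3f}\n", op.real)
```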
(Note that \(\omega_{L}=0\) is also included in this transition-frequency set \(\mathbb{F}\).) Using Eq. (12), we obtain the \(k\)th filter operator \[\tilde{x}_{k}=\frac{1}{\tau}\int_{0}^{\tau}dt^{\prime}\tilde{x}(t^{\prime})e^{ ik\omega_{p}t^{\prime}}=\sum_{\omega_{L},\in\mathbb{F}}Q(\omega_{L},k\omega_{p}) \tilde{x}(\omega_{L}), \tag{26}\] where we define \(Q(\omega_{L},\omega)\equiv[e^{i(\omega-\omega_{L})\tau}-1]/[i(\omega-\omega_{L})\tau]\). Inserting the expansion (26) into Eq. (18), we obtain the self-energy for the undriven system as \[\Sigma^{(2)}_{\text{CP}}(\tau)= \sum_{k}\operatorname{Re}\{2\phi_{k,k}\}\mathbb{D}\Big{[}\sum_{ \omega_{L},\in\mathbb{F}}Q(\omega_{L},k\omega_{p})\tilde{x}(\omega_{L})\Big{]}\] \[+\text{ Lamb shifts}. \tag{27}\] Inserting it into Eq. (19), we obtain the dynamical map for the undriven system. In deriving this map, we only specify a time-independent Hamiltonian, but do not make further assumptions such as those usually required by the Lindblad master equation. If we do enforce these assumptions, then our framework reproduces the Lindblad dynamical map. In detail, these conditions are: 1. the difference between the transition frequencies is much larger than \(\omega_{p}\), i.e., \(|\omega_{L}-\omega_{L}^{\prime}|\gg\omega_{p}\) for \(\omega_{L},\omega_{L}^{\prime}\in\mathbb{F}\), \(\omega_{L}\neq\omega_{L}^{\prime}\). 2. the system evolution time \(\tau\) is sufficiently long such that the spectral variation in \(S_{B}(\omega)\) is negligible over the small frequency scale \(\omega_{p}=2\pi/\tau\); These two conditions can be translated to the following two more familiar statements: 1 the system's characteristic time \(\tau_{S}\sim 1/\text{min}\{\omega_{L}-\omega_{L}^{\prime}|\omega_{L},\omega_{L}^{ \prime}\in\mathbb{F},\omega_{L}\neq\omega_{L}^{\prime}\}\) is much shorter than the system evolution time \(\tau\) of interest, which usually has a similar timescale as the system relaxation time \(\tau_{R}\); 2 the bath correlation time \(\tau_{B}\) is also much shorter than the evolution time \(\tau\sim\tau_{R}\) (see Appendix C for a more detailed explanation). Under 1 and 2, the fundamental frequency \(\omega_{p}\) is by far the smallest frequency scale. This allows us to consider the limit \(\omega_{p}\to 0\), and perform the following three approximations. First, we approximate the function \(K^{R}(\omega)\approx\pi\tau\delta(\omega)\) and \(K^{I}(\omega)\approx-\tau\mathcal{P}(1/\omega)\), where \(\delta(x)\) is the Dirac delta function and \(\mathcal{P}\) denotes the Cauchy principal value. Using the approximated \(K^{R/I}(\omega)\), we simplify the integral \(\phi_{k,k}\) and obtain \[\text{Re}\{2\phi_{k,k}\}\approx\tau S_{B}(k\omega_{p}),\quad\text{Im}\{\phi_{ k,k}\}\approx\tau\bar{S}_{B}(k\omega_{p}), \tag{28}\] where we define \[\bar{S}_{B}(\omega)\equiv-\mathcal{P}\int_{-\infty}^{\infty}\frac{d\omega^{ \prime}}{2\pi}\frac{S_{B}(\omega^{\prime})}{\omega^{\prime}-\omega}. \tag{29}\] Second, the infinitesimal \(\omega_{p}\) also justifies the replacement of the summation over \(k\) by an integral over \(\omega\). This step transforms the self-energy in Eq. 
(18) to an integral \[\Sigma_{\text{CP}}^{(2)}\bar{\rho}_{s}(0)=\int_{-\infty}^{\infty }d\omega\Big{\{} S_{B}(\omega)\mathbb{D}[\bar{x}_{\omega}]\bar{\rho}_{s}(0)\] \[-i\bar{S}_{B}(\omega)\big{[}\bar{x}_{\omega}^{\dagger}\bar{x}_{ \omega},\bar{\rho}_{s}(0)\big{]}\Big{\}}, \tag{30}\] where we define the damping operator \(\mathbb{D}[\hat{L}]\tilde{\rho}\equiv\hat{L}\tilde{\rho}\hat{L}^{\dagger}-[ \hat{L}^{\dagger}\hat{L}\tilde{\rho}+\hat{\rho}\hat{L}^{\dagger}\hat{L}]/2\), and \(\bar{x}_{\omega}\equiv\tilde{x}_{\lfloor\omega/\omega_{p}\rfloor}/\sqrt{ \omega_{p}}\). Third, using the expansion (25) for the undriven system and the definition of \(\bar{x}_{\omega}\), we find the approximation \[\bar{x}_{\omega}^{\dagger}\bar{x}_{\omega} \approx\frac{\tau}{2\pi}\sum_{\omega_{L}}\sum_{\omega_{L}^{ \prime}}Q^{*}(\omega_{L},\omega)Q(\omega_{L}^{\prime},\omega)\bar{x}^{\dagger} (\omega_{L})\bar{x}(\omega_{L}^{\prime})\] \[\approx\sum_{\omega_{L}}\tilde{x}^{\dagger}(\omega_{L})\bar{x}( \omega_{L})\delta(\omega-\omega_{L}). \tag{31}\] A similar delta-function approximation holds for \(\bar{x}_{\omega}^{\dagger}\otimes\bar{x}_{\omega}\) in \(\mathbb{D}[\bar{x}_{\omega}]\). Inserting these approximations into the integral Eq. (30) and carrying out the integral over the delta functions, we finally arrive at the self-energy \[\Sigma_{\text{CP}}^{(2)}(\tau)\tilde{\rho}_{s}(0)=\tau\sum_{\omega _{L}\in\mathbb{F}}\Big{\{} S_{B}(\omega_{L})\mathbb{D}[\bar{x}(\omega_{L})]\tilde{\rho}_{s}(0)\] \[-i\bar{S}_{B}(\omega_{L})\Big{[}\bar{x}^{\dagger}(\omega_{L})\bar {x}(\omega_{L}),\tilde{\rho}_{s}(0)\Big{]}\Big{\}}. \tag{32}\] The CPTP map (19) generated by the self-energy above is identical to that predicted by the Lindblad master equation. As a minimal example, we apply the map above to a static qubit that is described by the Hamiltonian \[\hat{H}_{s0}=\frac{1}{2}\omega_{q}\hat{\sigma}_{z}, \tag{33}\] where \(\omega_{q}\) is the qubit frequency. The transition-frequency set for this qubit is \(\mathbb{F}=\{0,\pm\omega_{q}\}\). If it is transversely coupled to a bath through the operator \(\bar{x}=\hat{\sigma}_{x}\), the expansion (25) is given by \[\bar{x}(t)=\hat{\sigma}^{-}e^{-i\omega_{q}t}+\hat{\sigma}^{+}e^{i\omega_{q}t}. \tag{34}\] Then, following the steps described above, we find the self-energy under conditions 1 and 2 approximated as \[\Sigma_{\text{CP}}^{(2)}(\tau)\tilde{\rho}_{s}(0)\approx\tau\sum_{ \pm}S_{B}(\pm\omega_{q})\mathbb{D}[\hat{\sigma}^{\mp}]\tilde{\rho}_{s}(0)+ \text{Lamb shifts}. \tag{35}\] If the coupling operator is \(\hat{x}=\hat{\sigma}_{z}\), we instead have \(\bar{x}(t)=\hat{\sigma}_{z}\), which yields the approximated self-energy \[\Sigma_{\text{CP}}^{(2)}(\tau)\tilde{\rho}_{s}(0)\approx\tau S_{B}(0)\mathbb{D }[\hat{\sigma}_{z}]\tilde{\rho}_{s}(0)+\text{Lamb shifts}. \tag{36}\] ### Weakly driven systems Although the derivation of the Lindblad dynamical map [25] assumes no drive on the system, in the literature, weak drives are sometimes naively added to the master equation with the damping rates and operators unaffected. Different from the Lindblad method, our framework rigorously includes the drive in the derivation. Below, we use the Keldysh framework to investigate the change of the decoherence map (19) if a weak drive is added. Because the specific noise spectrum may vary in different experiments, here we choose to focus on the filter operators \(\bar{x}_{k}\), which determine the map (19) up to the specific noise spectrum. 
We start by considering a general system, which is described by the Hamiltonian \(\hat{H}_{s}(t)=\hat{H}_{s0}+\lambda\hat{H}_{d}(t)\). This Hamiltonian consists of the static part \(\hat{H}_{s0}\) and a sufficiently weak driving term \(\lambda\hat{H}_{d}(t)\) (\(\lambda\) is a small dimensionless parameter). Due to the small amplitude of the latter term, we can perturbatively calculate \(\hat{U}_{s}(t)\) using the Magnus expansion [27]: \[\hat{U}_{s}(t)=\hat{U}_{s0}(t)\hat{U}_{d}(t),\quad\hat{U}_{d}(t)=\exp[-i\hat{ \Omega}(t)], \tag{37}\] where \(\hat{U}_{s0}(t)\) is the propagator for the undriven system given in Eq. (24), and \(\hat{\Omega}(t)\) is the Magnus exponent. To leading order of \(\lambda\), this exponent is approximately \[\hat{\Omega}(t)\approx\lambda\int_{0}^{t}dt^{\prime}\hat{U}_{s0}^{\dagger}(t^{ \prime})\hat{H}_{d}(t^{\prime})\hat{U}_{s0}(t^{\prime}). \tag{38}\] By inspecting Eq. (37) and the definition of \(\tilde{x}(t)\), we note that: if \(\hat{\Omega}(t)\) is small, \(\tilde{x}(t)\) can be approximated as \[\tilde{x}(t)\approx\sum_{\omega_{L}\in\mathbb{F}}\Big{\{}\tilde{x}(\omega_{L})-i [\tilde{x}(\omega_{L}),\hat{\Omega}(t)]\Big{\}}e^{-i\omega_{L}t}, \tag{39}\] which is only slightly modified from Eq. (25). In the limit \(|\hat{\Omega}(t)|\to 0\), the filter operator \(\tilde{x}_{k}\) can still be approximated by the undriven expansion (26). Therefore, the noise channels and the resulting dynamical map (19) should also approach those obtained in the undriven case. By contrast, if the condition of negligible \(\hat{\Omega}(t)\) is not satisfied, such approximation may be invalid. In the following, we will use a concrete example to concretely demonstrate both scenarios. We consider a qubit driven by a sinusoidal tone. The Hamiltonian of this driven system is given by \[\hat{H}_{s0}=\frac{\omega_{q}}{2}\hat{\sigma}_{z},\quad\hat{H}_{d}(t)=\frac{d} {2}(\hat{\sigma}^{+}e^{-i\omega_{d}t}+\hat{\sigma}^{-}e^{i\omega_{d}t}). \tag{40}\] The coupling operator for this qubit is taken to be \(\hat{x}=\hat{\sigma}_{x}\), which corresponds to transverse coupling between the qubit and the noise bath. The drive strength \(d\) is assumed to be weak, i.e., \(d\ll\omega_{q}\). In that case, the Magnus expansion of the qubit propagator is applicable, with the exponent given by \[\hat{\Omega}(t)=\frac{d}{2\delta_{q}}\sin(\delta_{q}t)\hat{\sigma}_{x}+\frac{ d}{\delta_{q}}\sin^{2}\left(\frac{\delta_{q}t}{2}\right)\hat{\sigma}_{y}+O(d^{2}), \tag{41}\] where the detuning is defined by \(\delta_{q}\equiv\omega_{q}-\omega_{d}\). From Eq. (41), we find that the Magnus exponent has a negligible magnitude, i.e., \(|\hat{\Omega}(t)|\lesssim 4|d/\delta_{q}|\), if the drive is off-resonant (\(d\ll|\delta_{q}|\)). For such off-resonant drive, Eq. (39) predicts that the expansion of \(\tilde{x}_{k}\) can be approximated by Eq. (34). To verify this, we numerically [28] calculate \(\tilde{x}_{k}\) and the resulting filter strength \(M_{k}\) for both undriven (red coloring) and off-resonantly driven (blue coloring) qubits. The resulting filter strengths \(M_{k}\) versus filter frequencies \(\omega=k\omega_{p}\) are shown in Fig. 3 (a). [We only focus on the frequency range \(\omega\approx\omega_{q}\) as an example, where noise induces energy decay in the qubit. For qubit excitation, the discussion is analogous.] As shown in the plot, the filter strengths for the driven qubit only insignificantly differ from those for the undriven qubit. 
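A numerically minded reader can reproduce the qualitative content of Fig. 3 along the following lines: build the driven propagator by piecewise-constant time stepping, form \(\tilde{x}(t)=\hat{U}_s^\dagger(t)\hat{x}\hat{U}_s(t)\), project onto the Fourier components of Eq. (12), and evaluate the filter strengths \(M_k\) of Eq. (22). The drive parameters and grid sizes below are illustrative, and the conservation rule \(\sum_k M_k=\mathrm{Tr}\{\hat{x}^2\}-|\mathrm{Tr}\,\hat{x}|^2/N_s\) serves as a convergence check.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

wq, d, wd = 1.0, 0.02, 0.9 * 1.0        # qubit frequency, drive amplitude, drive frequency (illustrative)
tau = 200 * 2 * np.pi / wq              # evolution time comparable to Fig. 3
nt = 20000
omega_p = 2 * np.pi / tau
dt = tau / nt
times = (np.arange(nt) + 0.5) * dt      # step midpoints used to sample H(t)

def H(t):                               # Eq. (40): H_d = (d/2)(sx cos(wd t) + sy sin(wd t))
    return 0.5 * wq * sz + 0.5 * d * (np.cos(wd * t) * sx + np.sin(wd * t) * sy)

U = np.eye(2, dtype=complex)
x_t = np.empty((nt, 2, 2), dtype=complex)
for n, t in enumerate(times):           # piecewise-constant propagator and x~(t) at the step ends
    U = expm(-1j * H(t) * dt) @ U
    x_t[n] = U.conj().T @ sx @ U
t_end = times + dt / 2

def M_k(k):
    xk = np.tensordot(np.exp(1j * k * omega_p * t_end), x_t, axes=(0, 0)) * dt / tau   # Eq. (12)
    return float(np.real(np.trace(xk.conj().T @ xk) - abs(np.trace(xk)) ** 2 / 2))     # Eq. (22), N_s = 2

ks = np.arange(150, 251)                # window around k ~ wq/omega_p = 200
M = np.array([M_k(k) for k in ks])
print("dominant filter frequencies / wq:", ks[M > 0.05 * M.max()] * omega_p / wq)
target = np.real(np.trace(sx @ sx)) - abs(np.trace(sx)) ** 2 / 2
print("sum rule:", sum(M_k(k) for k in range(-250, 251)), "vs", target)
# setting wd = wq reproduces the additional side peaks near wq +/- d discussed next
```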
The filter operator associated with the most prominent filter strength is approximately \(\hat{\sigma}^{-}\), which is the decay operator for the undriven qubit (red peak). For the resonantly-driven qubit, however, the exponent \(\hat{\Omega}(t)\) can grow significantly, even in the limit \(d\ll\omega_{q}\). Choosing \(\delta_{q}=0\) as an example, we find that the exponent \[\hat{\Omega}(t)=\frac{d}{2}t\hat{\sigma}_{x}, \tag{42}\] grows linearly with time. This exponent leads to the expression for the coupling operator \[\tilde{x}(t) =\hat{O}_{s}^{\dagger}(t)\left[\hat{\sigma}^{+}e^{i\omega_{q}t}+ \hat{\sigma}^{-}e^{-i\omega_{q}t}\right]\hat{O}_{s}(t) \tag{43}\] \[=\hat{\sigma}_{x}\cos(\omega_{q}t)-\left[\hat{\sigma}_{y}\cos(dt) -\hat{\sigma}_{z}\sin(dt)\right]\sin(\omega_{q}t).\] Certainly, Eq. (43) cannot be approximated by Eq. (25), rendering the previous approximation of \(\tilde{x}_{k}\) by Eq. (26) invalid. The difference in \(\tilde{x}_{k}\) between the driven and undriven cases causes distinctive behaviors of \(M_{k}\)'s, as shown in Fig. 3 (b). Compared to the plot of \(M_{k}\) for the undriven qubit (red coloring), the plot for the driven qubit (blue coloring) exhibits two additional peaks located at frequencies \(\omega=\omega_{q}\pm d\). These extra peaks imply additional decoherence channels, rendering the qubit sensitive to noise at frequencies \(\omega_{q}\pm d\) in addition to its transition frequency \(\omega_{q}\). These additional damping channels are missed by the standard Lindblad method [Eq. (35)], but can be crosschecked by a Golden-rule type of calculation in the rotating frame [29], or the Floquet theory [5]. (In Appendix D, we explain the appearance of the side peaks in the framework of Floquet master equation.) In summary, the filter operators \(\tilde{x}_{k}\) for the driven system in general differ from those in the undriven case. This further results in different dynamical maps (19). The approximation of one set of \(\tilde{x}_{k}\) by the other is possible if the drive is off-resonant such that \(|\hat{\Omega}(t)|\) is sufficiently small [30]. Finally, we use a third example to demonstrate the predictive power of our method in more complicated situations involving non-periodic drives. For example, we calculate \(\tilde{x}_{k}\) and \(M_{k}\) for a qubit driven by a pulse with a hyperbolic envelope [see inset of Fig. 3 (c)]. Compared to the plot of filter strengths in (b), Figure 3: Filter strength \(M_{k}\) for a driven qubit described by Hamiltonian Eq. (40). In (a)-(c), the horizontal axis shows the filter frequency \(\omega_{k}=k\omega_{p}\), and the vertical shows the filter strength \(M_{k}\) [Eq. (22)]. The widths of the columns are given by the fundamental frequency \(\omega_{p}=2\pi/\tau\). The parameters are chosen as follows. The drive amplitudes for all three simulations are chosen as \(d/\omega_{q}=0.02\), and the frequency \(\omega_{d}\) used for each plot is given in each figure. The duration is set as \(\tau=200\cdot 2\pi/\omega_{q}\) for all three simulations. For (a) and (b), we choose sinusoidal drives with a constant amplitude; the results of \(M_{k}\) for the driven qubit are shown in blue, while those for an undriven qubit are shown in red for reference. For (c), the sinusoidal drive used for (b) is multiplied by a hyperbolic envelope, which is shown in the inset. the ramping up and down of the drive result in two wider side peaks that are not centered at the maximal driving strength, as shown in (c). 
Such distinctive feature implies the difference in the decoherence processes between systems with periodic and non-periodic drives. The latter case is thought to go beyond the description by the rotating-frame analysis or the Floquet theory, but is conveniently captured by the Keldysh method. ### Ramsey, echo and 1/\(f\) noise Along with the introduction of Eq. (18) in Sec. II.C, we claim that the secular approximation is applicable even for spectra that show strong variation within the frequency scale characterized by \(\omega_{p}\). Here, as a supporting example, we study the state evolution of a qubit which is coupled to a \(1/f\) noise source, whose spectrum is strongly peaked at \(\omega\approx 0\). Particularly, we compare the prediction by the CPTP map (19) and the full-wave version (13) for this example. We consider a qubit longitudinally coupled to a noise bath and subject to a transverse drive. The Hamiltonian for this qubit is \(\hat{H}_{s}(t)=\omega_{q}\hat{\sigma}_{z}/2+d(t)\hat{\sigma}_{x}\), and the coupling operator is \(\hat{x}=\hat{\sigma}_{z}\). The \(1/f\) noise spectrum is given by \(S_{B}(\omega)=2\pi\mathcal{A}_{f}^{2}/|\omega|\), where we set an infrared cutoff frequency \(\omega_{ir}\) to regularize the singularity at \(\omega=0\). We first calculate the map (19) for the simple case of an undriven qubit [\(d(t)=0\)], which is relevant for a Ramsey experiment [31]. Different from the discussion in Sec. III.A, the presence of the strong peak in the noise spectrum violates condition 2, rendering the Lindblad prediction (36) invalid. This difficulty, instead, can be overcome by our Keldysh framework, which takes advantage of the filter functions [6]. To perform the Keldysh calculation, we first derive the filter operator for the undriven qubit \(\tilde{x}_{k}=\delta_{k,0}\hat{\sigma}_{z}\), meaning that \(\tilde{x}_{k=0}\) is the only non-vanishing filter operator. Such decomposition enables the analytical evaluation of both Eqs. (18) and (13), which predicts the same self-energy exponent \[\mathbf{\Sigma}^{(2)}(\tau) =\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}S_{B}(\omega)2K^{R}( \omega)\mathbb{D}[\hat{\sigma}_{z}]\] \[\approx 2\mathcal{A}_{f}^{2}\tau^{2}\ln\left(\frac{1}{2\pi\omega_{ir} \tau}\right)\mathbb{D}[\hat{\sigma}_{z}]. \tag{44}\] The log-quadratic scaling of the self-energy implies a well-known sub-Gaussian dephasing profile [31; 32], which differs from the exponential one predicted by the Lindblad map (36). As the CPTP and full-wave maps agree well for the undriven qubit, we next check whether that agreement extends to the driven case. We focus on a qubit undergoing a spin echo. This protocol uses a \(\pi\)-pulse to help refocus the phase of the qubit and, as a result, mitigate the qubit dephasing due to \(1/f\) noise. For this simulation, we choose the drive such that a finite-width \(\pi\) pulse is applied at the middle of the whole echo duration \(T\) (gray curve in Fig. 4). Due to the application of the pulse, the filter strengths for different \(\tau\in[0,T]\) differ characteristically. For example, we calculate \(\tilde{x}_{k}\) for \(\tau=T/2\) and \(\tau=T\), and show the corresponding filter strength \(M_{k}\) in the insets of Fig. 4. For \(\tau=T\), the filter strength \(M_{k=0}\) vanishes, and the most prominent peaks of \(M_{k}\) are located at \(k=\pm 1\). By contrast, for \(\tau=T/2\), the only prominent filter strength is \(M_{k=0}\), indicating strong sensitivity to noise from \(\omega\approx 0\). 
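The vanishing of \(M_{k=0}\) at the end of the echo can be reproduced with an idealized, instantaneous \(\pi\) pulse: in the toggling frame the longitudinal coupling simply flips sign at \(t=T/2\), and the projection of Eq. (12) then gives \(\tilde{x}_0=0\) at \(\tau=T\) while \(\tilde{x}_0=\hat{\sigma}_z\) at \(\tau=T/2\). The sketch below checks this; the instantaneous-pulse idealization and the numbers are assumptions (the simulation in the text uses a finite-width pulse).

```python
import numpy as np

sz = np.diag([1.0, -1.0]).astype(complex)
T, nt = 100.0, 20000                          # total echo duration (illustrative units)
t = (np.arange(nt) + 0.5) * (T / nt)
sign = np.where(t < T / 2, 1.0, -1.0)         # toggling-frame sign flip at the ideal pi pulse

def filter_strength(k, tau):
    """M_k of Eq. (22) for the window [0, tau], with x~(t) = sign(t) * sigma_z."""
    omega_p = 2 * np.pi / tau
    mask = t < tau
    dt = T / nt
    ck = np.sum(sign[mask] * np.exp(1j * k * omega_p * t[mask])) * dt / tau   # scalar Fourier coefficient
    xk = ck * sz
    return np.real(np.trace(xk.conj().T @ xk) - abs(np.trace(xk)) ** 2 / 2)

for tau in (T / 2, T):
    print(f"tau = {tau:5.1f}:",
          {k: round(float(filter_strength(k, tau)), 3) for k in (-1, 0, 1)})
# tau = T/2: M_0 dominates (free induction); tau = T: M_0 = 0 and the weight moves to k = +/-1
```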
Using the filter operators and the \(1/f\) noise spectrum, we further calculate the decoherence maps predicted by both the full-wave [Eq. (11)] and CPTP [Eq. (19)] methods, with \(\tau\) varied over \(\tau\in[0,T]\). Note that since this framework is not based on a differential equation, each \(\tilde{\rho}_{s}(\tau)\) with \(\tau\in[0,T]\) is calculated separately rather than recursively. Then, we evaluate the off-diagonal matrix element \(\tilde{\rho}_{eg}(\tau)\), which is plotted in Fig. 4. The magnitude of this matrix element indicates the phase coherence of the qubit [33], if the qubit is initially prepared in an equal superposition state. Visibly, the two calculations show qualitative agreement, while small deviation exists as the consequence of neglecting the off-diagonal filter functions in Eq. (18). The comparison suggests that, even for a non-trivial decomposition \(\tilde{x}_{k}\) and a highly structured noise spectrum, the secular approximation can still be applicable [34]. Beside the agreement between the two sets of simulations, we also observe the interesting rebound of the matrix element \(|\tilde{\rho}_{eg}(\tau)|\). Specifically, \(|\tilde{\rho}_{eg}(\tau)|\) decreases during the first half of the echo period and then increases after the pulse is applied. [For the first half period, the evolution of \(\tilde{\rho}_{eg}(\tau)\) indeed shows the sub-Gaussian dephasing behavior predicted by Eq. (44).] Such "inverse dephasing" of the qubit implies a negative decoherence rate, which is described in more detail by a time-local master equation introduced in Ref. [4]. This behavior also sends another useful message: although the map \(\mathbf{\Pi}(\tau)\approx\exp[\mathbf{\Sigma}_{\text{CP}}^{(2)}(\tau)]\) is guaranteed to be CPTP, the intermediate map \(\mathbf{\Pi}(\tau)[\mathbf{\Pi}(\tau^{\prime})]^{-1}\) (\(0<\tau^{\prime}<\tau\)) is not necessarily so. Noticing this, one may naturally ask whether it is possible to follow the procedure in Sec. II to derive a CPTP map from Figure 4: Evolution of \(\tilde{\rho}_{eg}(\tau)\) for a qubit during a spin echo experiment. The blue solid and red dashed curves correspond to the predictions by the CPTP (19) and full-wave (11) maps, respectively. The sketch of the echo pulse used for this simulation is plotted by the gray dashed curve. The two insets show the filter strengths \(M_{k}\) as functions of the filter frequencies \(\omega_{k}\), at the middle and end of the echo duration. Along with the filter strengths, the \(1/f\) noise spectrum used for simulation are also sketched. \(t=\tau^{\prime}\) to \(\tau\) (\(0<\tau^{\prime}<\tau\)), if the system and bath are initialized at \(t=0\); however, we point out that the basis for such derivation may not hold - for \(t=\tau^{\prime}>0\), the two subsystems may already be entangled, while the derivation starting from Eq. (2) requires isolation between them. (See discussion of the relation between initial entanglement and the CPTP character of the map in Ref. [35].) ### Floquet qubits We designate this final subsection to test the Keldysh method in studying the Floquet qubit [36; 37; 38; 39; 5]. This type of qubit uses the Floquet states of a periodically driven system to store and manipulate quantum information, which can offer advantages such as increased coherence times and more convenient gate operations than the static qubits. Although the open-system Floquet theory is developed to calculate the decoherence rates in such systems, its applicability is limited to the idle Floquet qubit. 
In the following, we show that the Keldysh method not only reproduces some results by such theory, but also explore situations that are beyond its application. We start by studying an idle Floquet qubit, where the drive \(\hat{H}_{d}(t)\) in Eq. (1) is periodic, i.e., \(\hat{H}_{d}(t+T_{d})=\hat{H}_{d}(t)\) (\(T_{d}=2\pi/\omega_{d}\) is the drive period). In this case, the closed-system propagator can be expressed as \[\hat{U}_{s}(t)=\sum_{j}|w_{j}(t)\rangle\langle w_{j}(0)|e^{-i\varepsilon_{j}t}, \tag{45}\] where \(|w_{j}(t)\rangle\) and \(\varepsilon_{j}\) are the \(j\)th independent Floquet state and its corresponding quasi-energy. In the interaction picture, the coupling operator is transformed as \[\tilde{x}(t) =\sum_{j,J^{\prime}}|w_{j}(0)\rangle\langle w_{j^{\prime}}(0)| \times\langle w_{j}(t)|\tilde{\epsilon}|w_{j^{\prime}}(t)\rangle e^{-i( \varepsilon_{j^{\prime}}-\varepsilon_{j})t}\] \[=\sum_{\omega_{L}\in\mathbb{F}}\tilde{x}(\omega_{L})e^{-i\omega_ {L}t}, \tag{46}\] where the set \(\mathbb{F}=\{\varepsilon_{j}-\varepsilon_{j^{\prime}}+l\omega_{d}\,|\,0<j,j \leq N_{s},l\in\mathbb{Z}\}\) contains all possible quasi-energy differences; the operators \(\hat{x}(\omega_{L})\) are damping operators in the basis of Floquet states (time-independent in the interaction picture), rather than the eigenstates of the undriven qubit discussed in Sec. III.A. With these preparations, we next show how our Keldysh method reproduces the prediction of decoherence process via the Floquet theory in Ref. [5]. In that instance, a Floquet qubit is coupled to both the \(1/f\) flux-noise bath and dielectric noise bath. The spectrum of the former has a strong peak at \(\omega\approx 0\), and that of the latter has a smoother spectrum. Similar to the steps in that reference, we first disregard the peak at \(\omega=0\) in the spectrum, and then correct the resulting dynamical map with a more careful consideration of the peak. For the first step, both conditions 1 and 2 are satisfied if we assume a sufficiently large evolution time \(\tau\). If so, the map takes on the same form as Eq. (32), while the operators \(\tilde{x}(\omega_{L})\) are updated according to Eq. (46). Such map reproduces that generated by the Markovian Floquet master equation [25]. Then, to address the strong peak at \(\omega=0\) due to the \(1/f\) spectrum, we carefully evaluate the coefficient \(\text{Re}\{2\phi_{0,0}\}=\int_{-\infty}^{\infty}(d\omega/2\pi)2K^{R}(\omega)S_ {B}(\omega)\), which is found to be approximately \(2\pi\tilde{\sigma}_{J}^{2}\tau^{2}\ln|2\pi\omega_{\text{ir}}|\). This coefficient should replace \(\tau S_{B}(\omega_{L}=0)\) multiplying the damping term \(\mathbb{D}[\tilde{x}(\omega_{L}=0)]\) in Eq. (32). After these two steps, the decoherence map derived in Ref. [5] is reproduced exactly. The discussion above focuses on an idle Floquet qubit. For gate operations and readout on a Floquet qubit, non-periodic control is required, and the Floquet theory is no longer applicable. Remarkably, the Keldysh method is still useful in predicting the decoherence map, since the knowledge of the Floquet states and their quasi-energies are not prerequisites for our numerical calculation of the Fourier expansion of \(\tilde{x}(t)\). Below, we show one such example, where a Floquet qubit undergoes an adiabatic evolution from a dynamical sweet spot to Figure 5: Filter strength \(M_{k}\) for a Floquet qubit [5]. 
The qubit Hamiltonian is given by \(\hat{H}_{s}(t)=[\Delta\hat{\sigma}_{x}+(2\Delta\cos\omega_{d}t+B)\hat{\sigma} _{x}])/2\), where the parameters are chosen as \(\omega_{d}/\Delta=1.17\) and \(B/\Delta=1.37\). The static qubit frequency is \(\omega_{q}=\sqrt{\Delta^{2}+B^{2}}\). The evolution time is set as \(\tau=20\cdot 2\pi/\omega_{d}\) for all three plots. For (a), we set \(A/\Delta=2.27\). The filter strength \(M_{k=0}\) vanishes, which corresponds to a dynamical sweet spot. For (b), the qubit is undriven (\(A=0\)). In this case, the filter strength \(M_{k=0}\) is predominant, implying strong sensitivity to \(1/f\) noise. For (c), the drive used in (b) is continuously switched off according to a hyperbolic envelope, as shown in the inset. The resulting plot of \(M_{k}\) differs from those in both (a) and (b). In (a)-(c), the noise spectra assumed in Ref. [5] is also plotted in gray. an unprotected static working point. (This previously has been used for readout of Floquet qubits [36; 38; 40].) For a concrete simulation, we reuse the qubit model and parameters in Ref. [5] [Fig. 3 (b) and (c) of that reference]. Specifically, we consider a fluxonium qubit with its external flux \(\phi_{e}\) biased slightly away from the half-flux-quantum sweet spot. Under a periodic drive that is carefully tuned, the qubit can be operated at a so-called dynamically sweet spot, where the derivative \(\partial\phi_{01}/\partial\phi_{e}\) vanishes. This leads to the first-order insensitivity of the qubit to the \(1/f\) flux noise. (See more details in the caption of Fig. 5 in the current paper.) As references, we first calculate the filter operators \(\tilde{x}_{k}\) for the dynamical sweet spot (initial) and the static point (final), and plot the resulting \(M_{k}\) in (a) and (b), respectively. At the dynamical sweet spot, the qubit is to the first order insensitive to \(1/f\) noise, as shown by the vanishing filter strength \(M_{k=0}\) with the corresponding filter frequency \(\omega_{k}=0\). In turn, it is sensitive to noise at other Floquet transition frequencies contained in \(\mathbb{F}\) [the locations of these frequencies are pointed to by pink arrows in Fig. 5 (a)]. By contrast, the qubit at the static working point is strongly sensitive to \(1/f\) noise, as indicated by the large filter strength \(M_{k=0}\). In addition, the qubit is also sensitive to noise at the qubit frequencies \(\pm\omega_{q}\). These frequencies are marked by the blue arrows in Fig. 5 (b). We note that the plots for \(M_{k}\) in both (a) and (b) are reminiscent of the plot for the filter weights in Ref. [5] [Fig. 3 (b) and (c) of that reference]. In fact, the resulting dynamical maps reproduce those obtained in Ref. [5], as long as the evolution time \(\tau\) is taken to be sufficiently large. Finally, we calculate \(M_{k}\) for the adiabatic process connecting the two working points, and show the results in (c). For this case, the plot of \(M_{k}\) differ from those from both (a) and (b). Interestingly, the locations of the peaks in (c) overlap with those from both (a) and (b). [We mark the peak locations by arrows with different coloring to indicate their apparent origin.] This feature suggests that, the qubit undergoing the adiabatic evolution is subjected to a combination of decoherence channels from both the Floquet and static regimes. Such feature cannot be predicted by the open-system Floquet theory, =. 
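For readers who want to set up Eq. (45) themselves, the Floquet states and quasi-energies of a periodically driven system follow from diagonalizing the one-period propagator, \(\hat{U}_s(T_d)|w_j(0)\rangle=e^{-i\varepsilon_j T_d}|w_j(0)\rangle\). Below is a minimal sketch for a generic driven two-level system; the Hamiltonian form and parameters are illustrative assumptions, not those of the fluxonium example above.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

wq, A, wd = 1.0, 0.3, 1.17                      # qubit frequency, drive amplitude/frequency (illustrative)
Td = 2 * np.pi / wd
nt = 4000
dt = Td / nt

def H(t):                                       # a generic periodically driven qubit (assumed model)
    return 0.5 * wq * sz + A * np.cos(wd * t) * sx

U = np.eye(2, dtype=complex)
for n in range(nt):                             # piecewise-constant one-period propagator U_s(T_d)
    U = expm(-1j * H((n + 0.5) * dt) * dt) @ U

evals, evecs = np.linalg.eig(U)                 # U |w_j(0)> = exp(-i eps_j T_d) |w_j(0)>
quasi_energies = -np.angle(evals) / Td          # defined modulo wd (one Floquet Brillouin zone)
print("quasi-energies / wd:", np.sort(quasi_energies / wd))
# the transition-frequency set F then consists of eps_j - eps_j' + l * wd, l integer, cf. Eq. (46)
```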
The examples used in this and the previous subsections are limited to qubits with only two energy levels, while the theory developed in Sec. II applies to general quantum systems. In Appendix E, we apply this framework to derive the dynamical map for an arbitrarily driven harmonic oscillator as an example. ## IV Quantum optimal control Above, we have introduced the CPTP map and applied it in studying a variety of driven systems. Beside predicting the decoherence maps, the framework can also be used to design drive pulses that mitigate decoherence errors, once it is combined with the technique of quantum optimal control [41; 42; 43; 44; 45; 46; 47; 15; 48; 14; 15; 49; 16; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48]. Optimal-control techniques utilize computer-aided optimization of pulses to minimize state-transfer or gate infidelities. For open-system optimization, the widely used decoherence model is the Lindblad master equation [42; 43]. Beside this model, Refs. [45; 46; 47; 15; 48; 14; 15; 49; 16; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48] have used the filter-function approach to reduce the error in quantum operations induced by correlated classical noise. In the following, we implement our Keldysh decoherence model and extend the filter-function aided optimization to the mitigation of quantum noise. The optimization with our method is also relatively simpler. In fact, once the total evolution time \(\tau\) is set, the integrals for evaluating the coefficients \(\phi_{k,k}\) can be precalculated, which are independent of the form of the drives. The CPTP maps (19) also avoid the risk of the optimization over unphysical maps. In the following, we use two examples to showcase the capability of the Keldysh-assisted quantum optimal control. ### State transfer in the presence of Ohmic noise We first study the state transfer in a qubit coupled to a typical quantum-noise bath, an Ohmic noise. The spectrum of such noise is \(S_{B}(\omega)=\mathcal{A}_{\alpha}\omega\,\Theta(\omega)\) [assuming zero temperature; see inset of Fig. 6 (a) for the spectrum], where \(\mathcal{A}_{\alpha}\) denotes the noise strength and \(\Theta(\omega)\) is the Heaviside function. The qubit coupled to this bath is described by the Hamiltonian \(\hat{H}_{s}(t)=\omega_{q}\hat{\sigma}_{z}/2+d(t)\hat{\sigma}_{x}\), where \(d(t)\) denotes the drive field. The coupling operator is set as \(\hat{x}=\hat{\sigma}_{x}\). In this setup, the spectrum exhibits clear asymmetry between the amplitudes of noise at positive and negative frequencies, which implies distinctive excitation and decay rates in the idle qubit [see Eq. (35)]. Because of this, if the decay rate overwhelms the excitation rate, one can leverage the natural system-bath interaction to realize the \(|e\rangle\rightarrow|g\rangle\) transfer; for the reverse transfer, however, such decay should instead be carefully mitigated. We note that the usual filter-function formalism is not designed to resolve such asymmetry in the noise spectrum, which is a difficulty we can overcome using our method. For our example, we consider the more difficult \(|g\rangle\rightarrow|e\rangle\) transfer. 
To mitigate the error caused by the energy decay, we program the optimizer to minimize the infidelity \[E_{\text{st}}=1-\left|\text{Tr}\left[\hat{O}_{s}^{\dagger}(\tau)\hat{\rho}_{e} \hat{O}_{s}(\tau)\mathbf{\Pi}(\tau)\hat{\rho}_{g}\right]\right|^{2}. \tag{47}\] Above, \(\hat{U}_{s}(\tau)\) is the closed-system propagator, \(\mathbf{\Pi}(\tau)\) is the CPTP map (19), and the two density matrices are \(\hat{\rho}_{e(g)}\equiv|e(g)\rangle\langle e(g)|\). The implemented optimization algorithm is Gradient Ascent Pulse Engineering, which is commonly used in quantum optimal control [49; 20]. For comparison, we first optimize the pulse assuming that the noise is absent. In this case, the optimizer chooses a pulse reminiscent of a typical resonant Rabi drive [red curve in Fig. 6 (a)], whose amplitude is almost constant. (The step-like structure of the pulse is a result of our requirement of the piecewise-constant drive.) For a closed-system simulation, the pulse induces a smooth increase of the excited-state population, resulting in a negligible state-transfer error (\(<10^{-6}\)) at the end of the pulse. However, for the open-system simulation including the noise bath, the calculation by Eq. (19) predicts a much higher error (\(E_{\text{st}}=8.8\times 10^{-2}\)), which is caused by the interaction between the qubit and the Ohmic noise bath. Especially, the smooth increase in population renders the qubit prone to the energy loss for the whole state-transfer duration [red dashed curve in (b)]. We next optimize the pulse with the noise included in Eq. (47). The optimized pulse is shown by the blue curve in Fig. 6 (a). Different from the closed-system version, the amplitude of the open-system optimized drive is held close to zero until the latter half of the duration, where the amplitude is ramped up rapidly. In this way, the qubit stays in the excited state for a shorter time than in the previous version. Such behavior of the excited-state population reduces the decoherence error [see comparison between blue and red curves in Fig. 6 (b)], yielding a \(4.4\times\) reduction in the state-transfer infidelity (\(E_{\text{st}}=2.0\times 10^{-2}\)). We note that a similar result is obtained by Ref. [43], where the optimization is based on quantum trajectories. For comparison, that optimization presumed the knowledge of the damping rates and operators, which is derived for an idle qubit using the Lindblad equation rather than a driven system [see discussion in Sec. III.B]. From this aspect, our optimization method tends to be more accurate, since it avoids the potential inaccuracy in the damping rates and operators. ### Avoiding two-level-system losses in gate operations Beside state transfer, the Keldysh-assisted optimizer can also help improve gate fidelities. Especially, if the fidelities of certain intuitive gates are limited by one or several resonance peaks in the noise spectrum, our optimizer can offer solutions that reduce the system sensitivity to noise associated with those peaks. To demonstrate this, we consider a noise bath that consists of one Ohmic bath and a few two-level systems (TLSs) [50; 51] [see gray curve for the spectrum of the bath in Fig. 7 (b)]. These discrete-level defects have been widely believed to limit the coherence times of many solid-state qubits [51]. Therefore, the mitigation of them is currently an indispensable task. For the concrete simulation, we choose the same Hamiltonian and coupling operator as in the previous case. 
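For concreteness, such a composite bath can be modeled, for instance, as an Ohmic background plus a few narrow Lorentzian TLS resonances; the functional form and all parameters below are placeholders, since the exact values used in the simulation are not tabulated here. With an \(S_B(\omega)\) of this kind in hand, the weights \(\mathrm{Re}\{2\phi_{k,k}\}\) entering Eq. (18) follow from the same quadrature used earlier.

```python
import numpy as np

wq = 1.0                                             # qubit frequency (illustrative units)

def S_ohmic(w, A_o=0.001):
    """Zero-temperature Ohmic background A_o * w * Theta(w)."""
    return A_o * w * (w > 0)

def S_tls(w, centers=(0.97 * wq, 1.02 * wq), width=0.002 * wq, weight=0.02):
    """A few near-resonant TLS defects, modeled here as narrow Lorentzian peaks (assumed form)."""
    out = np.zeros_like(np.asarray(w, dtype=float))
    for w0 in centers:
        out += weight * (width / np.pi) / ((w - w0) ** 2 + width ** 2)
    return out

def S_B(w):
    return S_ohmic(w) + S_tls(w)

tau = 40 * 2 * np.pi / wq
omega_p = 2 * np.pi / tau
k = round(wq / omega_p)                              # frequency band centred at the qubit frequency
w = np.linspace(wq - 50 * omega_p, wq + 50 * omega_p, 200001)
KR = 0.5 * tau**2 * np.sinc((w - k * omega_p) * tau / (2 * np.pi)) ** 2
print("Re{2 phi_kk} near wq:", np.sum(S_B(w) * 2 * KR) * (w[1] - w[0]) / (2 * np.pi))
```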
We consider a situation where the qubit has a fixed frequency but is accidentally in close resonance with the TLSs. For this setup, the gate operations enabled by idling or weakly driving the qubit should suffer significantly from the TLS loss, because the locations of the peaks of \(M_{k}\) overlap with those of the resonance peaks (see Fig. 3). This situation motivates us to explore pulses that can mitigate the TLS loss. Toward this goal, we use the Keldysh-assisted optimal-control technique to optimize the pulse \(d(t)\), with the cost function set as the gate infidelity given in Eq. (20). In the following, we focus on optimizing the identity gate as an example, which is the key for the quantum-information storage and multi-qubit operations [52]. For the simple identity gates enabled by idling the qubit, the interaction between the qubit and the TLSs limits the fidelity of such operation to \(E_{\text{gate}}=7.0\times 10^{-2}\) over the duration \(\tau=40\cdot 2\pi/\omega_{q}\) [see the overlap of the filter-strength peak of the idle qubit (red) and noise resonance peaks in Fig. 7 (b)]. The Keldysh-assisted optimizer, by contrast, proposes to drive the qubit strongly [\(d(t)/\omega_{q}\sim 0.25\)] according to the solid curve shown in Fig. 7 (a). As a result of the application of this drive, the populations in the two qubit states oscillate over the whole gate period, and return to the original values at the end of the pulse [see dashed curve in (a) for the excited-state population]. These oscillations lead to the appearance of multiple peaks in \(M_{k}\) located away from qubit frequency \(\omega_{q}\) [blue plot in (b)], while the value of \(M_{k}\) at the TLS resonance frequencies are suppressed. As a result, the error in the identity operation is reduced by \(3.2\times\) to \(E_{\text{gt}}=2.2\times 10^{-2}\). For a clear visual comparison between the two schemes, in (c) we show the identity fidelities after multiple of such operations are applied. One can observe that the optimized gate yields much higher fidelity for such repeated application, which corresponds to a longer effective coherence time in the qubit. We also perform optimization for \(X\) and phase gates, and find improvement of a similar magnitude. ## V Conclusion and outlook In conclusion, we introduce a decoherence model for evaluating errors in a noisy driven system subjected to correlated Figure 6: Optimization of the state-transfer fidelity for a qubit coupled to an Ohmic noise bath. (a) shows pulses from both the closed-system (red) and open-system (blue) optimizations. The spectrum of the Ohmic noise is given in the inset. In (b), we simulate the evolution of the excited-state populations during the \(\ket{g}\rightarrow\ket{e}\) transfer using the two pulses, respectively. The solid curves show the open-system evolution of the population, and the dashed one shows the closed-system evolution. For this simulation, we choose the amplitude of the Ohmic noise as \(\mathcal{A}_{o}=0.001\). quantum noise. The second-order Keldysh expansion and the secular approximation lead to a CPTP map (19) for the system density matrix. Using this map, we study decoherence errors in a variety of quantum systems with both periodic and non-periodic drives. The clear physical picture of the noise sensitivity described by \(M_{k}\) provides useful information for developing noise-mitigation strategies, especially if noise spectrum is only qualitatively understood but cannot be accurately measured. 
The simplicity of the map after the secular approximation makes this decoherence model suitable to be integrated with the quantum-optimal-control technique. Using the examples of both state-transfer and single-qubit gate, we show that the combination can help mitigate non-classical and correlated noise in state transfers and gate operations. In the future, one may consider using the technique developed in Ref. [53] for calculating \(\hat{U}_{s}(t)\) for an even more numerically-efficient optimization, since that technique is also based on a Dyson series (also the basis for our Keldysh calculation). For capturing higher-order decoherence effects, it is also useful to explore a higher-order CPTP map [23]. Finally, our analytically simple map (19) can provide hints for studying decoherence processes for more complicated systems, e.g., nonlinear oscillators [54] and composite system in the ultrastrong-coupling regime [55]. ###### Acknowledgements. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under contract number DE-AC02-07CH11359. We thank Peter Groszkowski, Yuxin Wang, and Oluwadara Ogunkoya for helpful discussion. ## Appendix A Full second-order expansion and filter functions We present more details about the expansion of Eq. (9) and the filter functions \(I_{k,k^{\prime}}(\omega)\) in this appendix. In terms of \(\tilde{x}(t)\) and \(\tilde{\eta}(t)\), Eq. (9) can be expressed as \[\mathbf{\Sigma}^{(2)}(\tau)\tilde{\rho}_{s}(0)= (-i)^{2}\!\int_{0}^{\tau}\!\!dt_{1}\!\int_{0}^{t_{1}}\!\!dt_{2} \,\tilde{x}(t_{1})\tilde{x}(t_{2})\tilde{\rho}_{s}(0)\] \[\qquad\times\epsilon^{2}\mathrm{Tr}_{B}\{\tilde{\eta}(t_{1}) \tilde{\eta}(t_{2})\tilde{\rho}_{B}(0)\}\] \[\qquad+(i)^{2}\!\int_{0}^{\tau}\!\!dt_{1}\!\int_{0}^{t_{1}}\!\!dt _{2}\,\tilde{\rho}_{s}(0)\tilde{x}(t_{2})\tilde{x}(t_{1})\] \[\qquad\qquad\times\epsilon^{2}\mathrm{Tr}_{B}\{\tilde{\rho}_{B}( 0)\tilde{\eta}(t_{2})\tilde{\eta}(t_{1})\}\] \[\qquad+(i)(-i)\!\int_{0}^{\tau}\!\!dt_{1}\!\int_{0}^{\tau}\!\!dt \,\tilde{x}(t_{1})\tilde{\rho}_{s}(0)\tilde{x}(t_{2})\] \[\qquad\qquad\times\epsilon^{2}\mathrm{Tr}_{B}\{\tilde{\eta}(t_{1 })\tilde{\rho}_{B}(0)\tilde{\eta}(t_{2})\}. \tag{20}\] Then, inserting \(\tilde{\rho}_{B}(0)=\hat{\rho}_{B,\mathrm{eq}}\) and the Fourier transformation \[\epsilon^{2}\mathrm{Tr}_{B}\{\hat{\rho}_{B,\mathrm{eq}}\,\tilde{\eta}(t_{1}) \tilde{\eta}(t_{2})\}\!=\!\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}S_{B}( \omega)\exp\!\left[-i\omega(t_{1}-t_{2})\right] \tag{21}\] into Eq. (A), we further transform \(\mathbf{\Sigma}^{(2)}(\tau)\) into \[\mathbf{\Sigma}^{(2)}(\tau)\tilde{\rho}_{s}(0)= -\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}S_{B}(\omega)\int_{0} ^{\tau}\!\!dt_{1}\!\int_{0}^{t_{1}}\!\!dt_{2}\] \[\qquad\times\tilde{x}(t_{1})\tilde{x}(t_{2})\tilde{\rho}_{s}(0)e ^{-i\omega(t_{1}-t_{2})}\] \[-\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}S_{B}(\omega)\int_{0} ^{\tau}\!\!dt_{1}\!\int_{0}^{t_{1}}\!\!dt_{2}\] \[\qquad\times\tilde{\rho}_{s}(0)\tilde{x}(t_{2})\tilde{x}(t_{1})e ^{-i\omega(t_{2}-t_{1})}\] \[+\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}S_{B}(\omega)\int_{0} ^{\tau}\!\!dt_{1}\!\int_{0}^{\tau}\!\!dt_{2}\] \[\qquad\times\tilde{x}(t_{1})\tilde{\rho}_{s}(0)\tilde{x}(t_{2})e ^{-i\omega(t_{2}-t_{1})}. \tag{22}\] In Sec. II.C, Eq. (A) is further transformed into Eq. 
(13), which is based on the filter function \(I_{k,k^{\prime}}(\omega)\) defined in Figure 7: Optimization of the identity gates for a qubit coupled to a few near-resonant TLSs. (a) shows the pulse obtained by the Keldysh-assisted optimal control (blue) and the resulting evolution of the excited-state population (purple) with the qubit initialized in the excited state. (b) plots the filter strengths for the free-induced identity gate (red) and Keldysh-optimized (blue) version. The noise spectrum used for the simulation is also plotted in gray. (c) compares the fidelities of the two identity operations after they are each repeated multiple times. Eq. (14). Carrying out the double integral in Eq. (14), we find \[I_{k,k^{\prime}}(\omega) = \frac{e^{-i\omega\tau}-1}{(\omega-k\omega_{p})(k^{\prime}\omega_{p} -\omega)} \tag{16}\] \[-\frac{i\tau}{(\omega-k\omega_{p})}\delta_{k,k^{\prime}}.\] Note that the apparent poles in this expression are all removable. For \(k=k^{\prime}\), Eq. (16) is reduced to Eq. (15), which has been studied in detail in Sec. II.C. For off-diagonal ones, we find the inequality \[|I_{k,k^{\prime}}(\omega)|<\frac{\tau^{2}}{2\pi\Big{(}|k-k^{\prime}|-\tfrac{1 }{2}\Big{)}} \tag{17}\] by inspecting the first line of Eq. (16). This inequality is referenced in Sec. II.C for justifying the secular approximation. ## Appendix B Connection to Ref. [14] We use this appendix to connect our theory to previous filter-function research. Particularly, we show that if only classical noise is present, the map Eq. (11) can reproduce some of the formula used in Ref. [14]. For classical noise, the correlation function \(C(t_{1},t_{2})\equiv\mathrm{Tr}_{B}\{\tilde{\rho}_{B,\mathrm{eq}}\,\bar{\eta }(t_{1})\bar{\eta}(t_{2})\}\) is real-valued, i.e., \[C(t_{1},t_{2})=C^{*}(t_{2},t_{1})=C(t_{2},t_{1}). \tag{18}\] This relation implies that the noise spectrum is symmetric, i.e., \(S_{B}(\omega)=S_{B}(-\omega)\). Then, if we insert Eq. (18) into Eq. (15), we find the self-energy \[\begin{split}\Sigma^{(2)}(\tau)\tilde{\rho}_{s}(0)& =-\int_{0}^{t}\!dt_{1}\int_{0}^{t_{1}}\!\!dt_{2}\,\bar{x}(t_{1}) \bar{x}(t_{2})\tilde{\rho}_{s}(0)C(t_{1},t_{2})\\ &\quad-\!\!\int_{0}^{t}\!\!dt_{1}\!\int_{0}^{t_{1}}\!\!dt_{2}\, \bar{\rho}_{s}(0)\bar{x}(t_{2})\bar{x}(t_{1})C(t_{1},t_{2})\\ &\quad+\!\!\int_{0}^{t}\!\!dt_{1}\!\int_{0}^{t}\!\!dt_{2}\,\bar{x }(t_{1})\tilde{\rho}_{s}(0)\bar{x}(t_{2})C(t_{1},t_{2}).\end{split} \tag{19}\] Inserting this quantity into the approximated map \(\mathbf{\Pi}(\tau)\approx\mathbf{\Pi}^{(0)}(\tau)+\mathbf{\Sigma}^{(2)}(\tau)\), we recover the _noise-averaged quantum process_ in Ref. [14]. ## Appendix C Bath correlation time and spectral variation In this appendix, we investigate the relation between the variation of \(S_{B}(\omega)\) and the bath correlation time. Such a relation is useful for interpreting condition 2 as a comparison between the bath correlation time and the system evolution time. The variation of \(S_{B}(\omega)\) can be roughly quantified by the second-order derivative of the spectrum. Specifically, we define the spectral roughness \(R(\omega)\) by \[R(\omega)\equiv\frac{|d^{2}S_{B}(\omega)/d\omega^{2}|}{|S_{B}(\omega)|}. \tag{20}\] To relate this quantity to the correlation time, we insert the inverse Fourier transform of \(S_{B}(\omega)\) into the expression \(R(\omega)\) and express it as \[R(\omega)=\frac{\left|\int_{-\infty}^{\infty}dt\,C(t,0)t^{2}\,e^{i\omega t} \right|}{\left|\int_{-\infty}^{\infty}dt\,C(t,0)e^{i\omega t}\right|}. 
\tag{21}\] The right-hand side of Eq. (21) appears to be related to the bath correlation time. To understand this expression more clearly, we consider a two-level-system defect as an example. The time-domain correlation function of such a two-level system (at zero temperature) has the form [50] \[C(t,0)=C(0,0)e^{-i\omega_{t}t-t/T_{t}}, \tag{22}\] where \(\omega_{t}\) and \(T_{t}\) are its resonance frequency and coherence time (usually considered as its correlation time), respectively. For this function, the right-hand side of Eq. (21) is evaluated as \[\frac{\left|\int_{-\infty}^{\infty}dt\,C(t,0)t^{2}\,e^{i\omega t}\right|}{ \left|\int_{-\infty}^{\infty}dt\,C(t,0)e^{i\omega t}\right|}=\frac{\left|2/T_{t }^{2}-6(\omega-\omega_{t})^{2}\right|}{\left[1/T_{t}^{2}+(\omega-\omega_{t})^ {2}\right]^{2}}, \tag{23}\] which equals \(2T_{t}^{2}\) for \(\omega=\omega_{t}\). Therefore, it is reasonable to define a frequency-dependent bath correlation time \[\tau_{B}(\omega)=\sqrt{\frac{\left|\int_{-\infty}^{\infty}dt\,C(t,0)t^{2}\,e ^{i\omega t}\right|}{2\left|\int_{-\infty}^{\infty}dt\,C(t,0)e^{i\omega t} \right|}}. \tag{24}\] Such definition leads to the relation \[R(\omega)=2\tau_{B}^{2}(\omega), \tag{25}\] which indicates that more significant variations in \(S_{B}(\omega)\) correspond to longer bath correlation times. ## Appendix D Floquet master equation for a weakly driven qubit In this appendix, we use the Markovian Floquet master equation to explain the appearance of the side peaks in Fig. 3 (b). The two Floquet states for the weakly driven qubit are \[|w_{\pm}(t)\rangle=\frac{1}{\sqrt{2}}\left[|g\rangle\pm|e\rangle e^{-i\omega_ {0}t}\right],\] and their quasi-energies are \(\varepsilon_{\pm}=\pm d/2-\omega_{q}/2\). Inserting them into Eq. (46), we can again find the expression of the rotated coupling operator Eq. (43). Following the derivation of the Floquet master equation, we extract the transition frequencies \(\omega_{L}\) and their corresponding damping operators \(\tilde{x}(\omega_{L})\) from Eq. (43). Then, the Lindbladian for the Markovian Floquet master equation is given by \[\mathcal{L}= S_{B}(\omega_{q})\mathbb{D}\Big{[}\frac{\hat{\sigma}_{x}}{2} \Big{]}+\sum_{\pm}S_{B}(\omega_{q}\pm d)\mathbb{D}\Big{[}\frac{\mp\hat{\sigma}_ {z}-i\hat{\sigma}_{y}}{4}\Big{]}\] \[+S_{B}(-\omega_{q})\mathbb{D}\Big{[}\frac{\hat{\sigma}_{x}}{2} \Big{]}+\sum_{\pm}S_{B}(-\omega_{q}\mp d)\mathbb{D}\Big{[}\frac{\mp\hat{\sigma} _{z}+i\hat{\sigma}_{y}}{4}\Big{]}\] \[+\text{Lamb-shift terms}. \tag{61}\] The damping terms present in this map indeed capture the noise channels predicted in Fig. 3 (b). ## Appendix E Driven harmonic oscillator In the main text, the examples we present are limited to qubits with only two levels. Here, we demonstrate the applicability of our framework in a quantum harmonic oscillator, which has infinite levels. The system Hamiltonian of this oscillator is specified by \[\hat{H}_{s}(t)=\omega_{r}\hat{a}^{\dagger}\hat{a}+d(t)(\hat{a}^{\dagger}+\hat{ a}), \tag{62}\] and the coupling operator for the oscillator is \(\hat{x}=\hat{a}+\hat{a}^{\dagger}\). 
For this linear system, the closed-system propagator can be analytically derived as [56] \[\hat{U}_{s}(t)=e^{\alpha(t)\hat{a}^{\dagger}-\alpha^{*}(t)\hat{a}}e^{-i\omega_{r}\hat{a}^{\dagger}\hat{a}t}e^{-i\Phi(t)}, \tag{63}\] where the displacement \(\alpha\) is calculated by \[i\dot{\alpha}(t)=\omega_{r}\alpha(t)+d(t), \tag{64}\] and the additional phase acquired is given by \[\Phi(t)=-\int_{0}^{t}dt^{\prime}\left[\omega_{r}|\alpha|^{2}+\frac{1}{2}i(\alpha\dot{\alpha}^{*}-\alpha^{*}\dot{\alpha})\right]. \tag{65}\] In the interaction picture, the coupling operator is transformed as \[\tilde{x}(t)=\left[\hat{a}e^{-i\omega_{r}t}+\alpha(t)\right]+\text{H.c.}, \tag{66}\] which leads to only two frequency components, \(\tilde{x}(\omega_{r})=\hat{a}+\alpha\) and \(\tilde{x}(-\omega_{r})=\hat{a}^{\dagger}+\alpha^{*}\). We note that the c-numbers \(\alpha(t),\alpha^{*}(t)\) in \(\tilde{x}(\pm\omega_{r})\) only contribute to the Lamb-shift terms of the map (19), which we choose to omit due to the weak noise strength. Then, if we again assume that the two conditions 1 and 2 in Sec. III.A hold, the self-energy is given by \[\mathfrak{X}^{(2)}_{\text{CP}}(\tau)=\tau\left\{S_{B}(\omega_{r})\mathbb{D}[\hat{a}]+S_{B}(-\omega_{r})\mathbb{D}[\hat{a}^{\dagger}]\right\}, \tag{67}\] identical to the prediction by the Lindblad master equation. The message of this analysis is that the Lindblad map is a good approximation for the harmonic oscillator under arbitrary linear drives, as long as \(\tau_{B}\) and \(\tau_{S}\sim 2\pi/\omega_{r}\) are much smaller than \(\tau\sim\tau_{R}\). This conclusion is in clear contrast to those in Secs. III.B and III.D for the driven qubits. Note that this conclusion may be invalid if nonlinearity in the cavity is induced by its coupling to qubits [54]. This conclusion may also be invalid if the drive also affects the noise bath [57], which results in a varying \(S_{B}(\omega)\) during the drive time.
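As a rough numerical illustration of Eqs. (63)-(65), the Python sketch below integrates the displacement equation (64) for a drive envelope and accumulates the phase (65) by quadrature. The resonator frequency and the Gaussian pulse shape are illustrative assumptions of the sketch, not values taken from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the paper): resonator frequency and a Gaussian drive d(t)
omega_r = 2 * np.pi * 5.0            # resonator frequency (arbitrary units)
def d(t):                            # hypothetical drive envelope
    return 0.1 * np.exp(-((t - 5.0) ** 2) / 2.0)

# Eq. (64): i * d(alpha)/dt = omega_r * alpha + d(t)  =>  d(alpha)/dt = -i * (omega_r * alpha + d(t))
def rhs(t, y):
    alpha = y[0] + 1j * y[1]
    dalpha = -1j * (omega_r * alpha + d(t))
    return [dalpha.real, dalpha.imag]

t_eval = np.linspace(0.0, 10.0, 2001)
sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
alpha = sol.y[0] + 1j * sol.y[1]

# Eq. (65): Phi(t) = -∫ [ omega_r*|alpha|^2 + (i/2)*(alpha*conj(alpha)' - conj(alpha)*alpha') ] dt'
dalpha_dt = -1j * (omega_r * alpha + d(t_eval))
integrand = omega_r * np.abs(alpha) ** 2 + 0.5j * (alpha * np.conj(dalpha_dt) - np.conj(alpha) * dalpha_dt)
Phi = -np.trapz(integrand.real, t_eval)   # the integrand is real up to numerical error
print(alpha[-1], Phi)                      # final displacement and accumulated phase
```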
2305.15579
Post-model-selection prediction for GLM's
We give two prediction intervals (PI) for Generalized Linear Models that take model selection uncertainty into account. The first is a straightforward extension of asymptotic normality results and the second includes an extra optimization that improves nominal coverage for small-to-moderate samples. Both PI's are wider than would be obtained without incorporating model selection uncertainty. We compare these two PI's with three other PI's. Two are based on bootstrapping procedures and the third is based on a PI from Bayes model averaging. We argue that for general usage either the asymptotic normality or optimized asymptotic normality PI's work best. In an Appendix we extend our results to Generalized Linear Mixed Models.
Dean Dustin, Bertrand Clarke
2023-05-24T21:31:54Z
http://arxiv.org/abs/2305.15579v1
# Post-model-selection prediction for GLM's

###### Abstract

We give two prediction intervals (PI) for Generalized Linear Models that take model selection uncertainty into account. The first is a straightforward extension of asymptotic normality results and the second includes an extra optimization that improves nominal coverage for small-to-moderate samples. Both PI's are wider than would be obtained without incorporating model selection uncertainty. We compare these two PI's with three other PI's. Two are based on bootstrapping procedures and the third is based on a PI from Bayes model averaging. We argue that for general usage either the asymptotic normality or optimized asymptotic normality PI's work best. In an Appendix we extend our results to Generalized Linear Mixed Models. prediction interval, generalized linear model, post-model selection

## 1 Introduction

It is well known that linear models and their extensions - generalized linear, linear mixed, generalized linear mixed models, and generalized mixed models - are the workhorses of statistical analysis. Aside from formulating such models, analysts have to choose amongst competing models usually in the same class. Unless a model is proposed pre-experimentally, model selection from a model list is done after the data is collected. There are numerous model selection procedures, but regardless of which model is chosen as "best", the resulting model will have an associated variability inherited from the variability in the data. How to take this variability into account properly when making predictions is the main topic of this paper. Common practice in many predictive contexts is to choose a model and then use it to generate predictions. Such plug-in methods are common in many modeling contexts. This is pragmatic but neglects taking account of the uncertainty due to model selection or, perhaps more commonly, variable selection. Here we propose prediction intervals (PI's) for generalized linear models (GLM's) that are modified by the model selection principle (MSP) used for variable selection so that their nominal coverage is asymptotically correct in the limit of large sample size. This is important because Hong et al (2018) showed that using model selection procedures like Akaike's information criterion can result in predictive intervals with lower than nominal coverage if the PI's do not take the uncertainty of the MSP into account. The post model selection inference problem has gained wide interest since the problem was first addressed in Berk et al (2013). The so-called post-selection inference (PoSI) intervals introduced in Berk et al (2013) are universally valid for any model selection principle (MSP). However, PoSI intervals are known to be conservative (see Leeb et al (2015)) partially because they allow for any ad-hoc MSP to be used. The PoSI framework was used to construct universally valid (over all MSP's) confidence intervals for the mean of a predictive distribution in LM's in Bachoc et al (2019). Universally valid confidence regions for the simultaneous inference problem are constructed in Kuchibhotla et al (2020). A different approach was proposed by Efron (2014) that uses bootstrap intervals to address the post model selection inference problem under a single choice of MSP. Stine (1985) introduced bootstrapped predictive intervals in linear regression, but these intervals did not consider uncertainty due to model selection.
Leeb (2009) introduced a model selection procedure based on cross validation techniques and proved that, using this technique, the resulting prediction interval from the selected model is approximately valid. While this is a seemingly strong and useful result, it holds only in the high sparsity case with \(n\ll p\) as well as in the limit of large \(n\). Specifically, in his Proposition 4.3 the intervals are guaranteed to be within \(1/\sqrt{n}+\epsilon\) of the nominal coverage for \(0<\epsilon\leq\log(2)\). More recently, predictive intervals based on the Shorth - i.e. the shortest interval containing a pre-specified number of values - for GLM's and GAMs are studied in Stine (2021). While these intervals are valid, and account for uncertainty due to the MSP, they are not as intuitive and general as the ones we present in Sec. 3. Our methodology is in contrast to the PoSI-based intervals from Berk et al (2013) that essentially widen PI's until the nominal coverage is achieved. Indeed, the PoSI intervals take the pessimistic (if practical) view that model developers will use MSP's that are not theoretically sound. Our approach is optimistic in that we assume a proper MSP with well known theoretical properties will be used. This allows us to incorporate the variability from a given generic MSP into our PI's. Here, we present two PI's that account for the uncertainty of an MSP in an intuitive manner for GLM's. These PI's are easy to understand and, importantly, are easy to implement. We also present a less intuitive way to construct a PI to give better coverage along the lines of PoSI intervals. This PI seems to work well in terms of coverage, but interpretation of the interval is difficult. The structure of this paper is as follows. In Sec. 2 we define the notation and setting needed for our approach. In fact, the notation incorporates much of the intuition behind our approach. In Sec. 3, we present the main theorem that gives a PI that depends on an MSP for the case of GLM's. We then give a finite sample improvement for use with this PI in small-to-moderate sample settings. We also define three other intervals, two based on bootstrapping and one from the Bayes model average. In Sec. 4 we present our simulation results. Finally, in Sec. 5 we summarize the implications of our work. We extend our theory to GLMM's in the Appendices.

## 2 Notation and Setting

Throughout this paper we assume model selection and variable selection are synonymous, and defined as follows. Let \(\mathcal{D}_{n}=\{(y_{1},x_{1}),\ldots,(y_{n},x_{n})\}\) where \(Y_{i}=y_{i}\) is an outcome of the response variable and \(x_{i}\) is a value of the \(d\)-dimensional explanatory variable. We use superscripts to indicate vectors, thus \(y^{n}=(y_{1},\ldots,y_{n})^{T}\). Let \(m\in\mathcal{M}\) be a candidate model in the full collection of models \(\mathcal{M}\). We define a variable selection procedure \(M=M(\mathcal{D}_{n})\) which takes the available data and maps it to a subset of variables based on some objective function we denote \(Q\). We denote a chosen model \(\hat{m}=\arg\min_{m}Q(m,\mathcal{D}_{n})\). We think of \(Q\) as an objective function such as the Akaike or Bayes information criterion (AIC, BIC) or as a penalized loss function. For instance, for linear models, if \(Q\) is the AIC, we have \[Q_{AIC}(m,\mathcal{D}_{n})=-2\ln(p(y^{n}|X_{m}^{n},\hat{\beta}_{m}^{MLE}))+2d, \tag{2.1}\] where \(d\) is the number of parameters that need to be estimated in model \(m\in\mathcal{M}\).
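To make the selection map concrete, the following Python sketch carries out the exhaustive search \(\hat{m}=\arg\min_{m}Q_{AIC}(m,\mathcal{D}_{n})\) for a Poisson GLM on simulated data. It is an illustration only: the data-generating step and the use of statsmodels are assumptions of the sketch, and the paper's own computations are done in R.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
# Illustrative truth: only the first two covariates enter the Poisson mean
eta = 0.4 * X[:, 0] - 0.3 * X[:, 1]
y = rng.poisson(np.exp(eta))

def Q_aic(subset):
    """AIC of the GLM using the covariates indexed by `subset` (plus an intercept)."""
    Xm = sm.add_constant(X[:, list(subset)]) if subset else np.ones((n, 1))
    return sm.GLM(y, Xm, family=sm.families.Poisson()).fit().aic

# The selection map M(D_n): exhaustive search over the 2^d candidate subsets
candidates = [s for k in range(d + 1) for s in itertools.combinations(range(d), k)]
m_hat = min(candidates, key=Q_aic)
print("selected variables:", m_hat)
```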
The selection rule \(\hat{m}=\arg\min_{m}Q(m,\mathcal{D}_{n})\) thus defines a function \[M:\mathcal{D}_{n}\mapsto\mathcal{M}\] from the data \(\mathcal{D}_{n}\) into the model space. That is, we think of the variable selection procedure as a function \(M:\mathbb{R}^{n}\times\mathbb{R}^{n\times d}\mapsto\mathcal{M}\). Other choices for \(Q\) include the BIC given by \[Q_{BIC}(m,\mathcal{D}_{n})=2\ln(p(y^{n}|X_{m}^{n},\hat{\beta}_{m}^{MLE}))+d\log(n), \tag{2.2}\] and the general Bethel-Shumway class of information criteria defined in Bachoc et al (1988). The BIC and the Bethel-Shumway class of information criteria are consistent for model/variable selection.

### Variable Selection

In the predictive context, the interpretation of a parameter in a linear model is consistent across models: If the parameter, say, \(\beta_{j}\), appears in multiple models it always means the change in \(Y\) for a unit change in \(x_{j}\) holding other explanatory variables constant. This is in contrast to the modeling view, see Berk et al (2013), that regards each parameter as an element of a whole model, affected by the values of the other parameters. Thus, for us, if \(\beta_{j}=0\), this is mathematically equivalent to a model that does not include \(x_{j}\). Indeed, in practice, we often set a threshold \(\eta>0\) and say that when \(|\beta_{j}|<\eta\), we set \(\beta_{j}=0\). With this in mind, we can define \(M\) as follows. Let \(X=(x_{1},\ldots,x_{d})\), and define \(M=\{\hat{\delta}_{1},\ldots,\hat{\delta}_{d}\}\) where for \(j=1,\ldots,d\) \[\hat{\delta}_{j}=\begin{cases}1\text{ if }X_{j}\text{ is selected under }Q\\ 0\text{ otherwise.}\end{cases}\] For the true model we have \(m_{T}=\{\delta_{1,T},\ldots,\delta_{p,T}\}\) where \[\delta_{j,T}=\begin{cases}1\text{ if }X_{j}\in m_{T}\\ 0\text{ otherwise.}\end{cases}\] Define the set \(\mathcal{M}_{S}\) to be the set containing all possible combinations \(M\) can take. The cardinality of \(\mathcal{M}_{S}\) is \(2^{d}\) and assuming \(m_{T}\) exists, \(m_{T}\in\mathcal{M}_{S}\). We write \[\beta_{T}=(\beta_{\delta_{1,T}},\ldots,\beta_{\delta_{p,T}})\text{ and }\hat{\beta}_{M}=(\hat{\beta}_{\hat{\delta}_{1}},\ldots,\hat{\beta}_{\hat{\delta}_{p}}),\] and \(\dim(\hat{\beta}_{M})=\dim(\beta_{m_{T}})\). Note that if \(\hat{\delta}_{j}=0\), then by default we set \(\hat{\beta}_{\hat{\delta_{j}}}=0\). Furthermore, we have that \(\delta_{j,T}=0\) is equivalent to \(\beta_{j}=0\). We specify our target of inference as \(\beta_{T}\) and write \[\beta_{m_{T}}=\left(X^{\prime}_{m_{T}}X_{m_{T}}\right)^{-1}X^{\prime}_{m_{T}}E(Y)\] in the linear models context. That is, we are trying to estimate the true parameters, regardless of the model that is chosen. This is in contrast to the random target of inference \[\beta_{M}=\left(X^{\prime}_{M}X_{M}\right)^{-1}X^{\prime}_{M}E(Y)\] defined in Berk et al (2013). Thus, for linear models, we define the estimate for \(\beta_{j}\) as follows: \[\hat{\beta}_{\hat{M},j}=\begin{cases}\left[\left(X^{\prime}_{\hat{M}}X_{\hat{M}}\right)^{-1}X^{\prime}_{\hat{M}}y\right]_{j}&\text{if }\hat{\delta}_{j}=1\\ 0&\text{if }\hat{\delta}_{j}=0.\end{cases}\] Now there are two steps in the process of obtaining the true model. The first step is to estimate the \(\delta_{j}\)'s. In this step we want \(M\) to give \(\hat{\delta}_{j}=1\) if \(\delta_{j,T}=1\), however, \(M\) may also give \(\hat{\delta}_{j}=1\) even if \(\delta_{j,T}=0\). In this case, our definition allows the estimate \(\hat{\beta}_{\delta_{j}}\) to be zero.
Thus, even if \(M\) includes variables that are not in \(m_{T}\) we can still estimate their coefficients to be zero which allows \(M\to m_{T}\) asymptotically (as seen in Theorem 3.1).

### Prediction in Generalized Linear Models

As noted, we restrict attention to GLM's and GLMM's. To be more precise, suppose \(Y\sim\mathcal{G}(\mu,R)\) where \(\mathcal{G}\) is an exponential family with mean \(\mu\) and variance \(R\). Then the pdf of \(Y\) given the canonical parameter \(\theta\) is \[f(y|\theta)=e^{\frac{y\theta-b(\theta)}{a(\phi)}+c(y,\phi)} \tag{2.3}\] where \(\phi\) is a scale parameter. In one parameter exponential families such as Poisson or Binomial distributions, \(a(\phi)=1\). From (2.3) we have the following properties:

* \(E(Y|X)=\frac{\partial b(\theta)}{\partial\theta}=\mu\)
* \(Var(Y|X)=a(\phi)\frac{\partial^{2}b(\theta)}{\partial\theta^{2}}=a(\phi)V(\mu)\)
* \(I(\theta)=Var(\ell(\theta|y,\phi))\)

Following standard GLM practice, we model the mean of \(Y\) by transforming it to a linear function of the explanatory variables. The function we use to transform \(E(Y)=\mu\) is called the link function and we denote it by \(g(\cdot)\). Note that \(g(\cdot)\) is a continuous invertible function. This gives us the linear predictor \[\eta=g(E(Y|X))=g(\mu)=X\beta \tag{2.4}\] and we define the inverse link function to be the inverse of \(g(\cdot)\) which is \[\mu=E(Y|X)=g^{-1}(X\beta). \tag{2.5}\] For now, we assume that \(X\) is of full rank, to avoid problems with estimability. Note that the canonical parameter \(\theta\) is a function of \(\mu\) so we write \(\theta=\theta(\mu)=\theta(g^{-1}(X\beta))\). Now we can write the log-likelihood of (2.3) as \[\ell(\beta|y,\phi)=\frac{y\left(\theta(g^{-1}(X\beta))\right)-b(\theta(g^{-1}(X\beta)))}{a(\phi)}+c(y,\phi). \tag{2.6}\] Typically maximum likelihood along with the Newton-Raphson algorithm or Fisher scoring is used to estimate \(\beta\). Usually the dispersion parameter is also unknown and must be estimated by \(\hat{\phi}\). Suppose the inferential goal is predicting the next outcome \(Y^{n+1}\). The usual point predictor under an MSP \(M\) is \[\hat{Y}_{M}^{n+1}=\hat{\mu}_{M}=g^{-1}(X_{M}^{\prime n+1}\hat{\beta}_{M}). \tag{2.7}\] Henceforth, our focus is on constructing valid PI's for this point predictor.

## 3 Candidate PI's

In this section we define four PI's. The first is derived in our Theorem 3.1. The second is an improvement on this interval by incorporating an extra optimization to ensure more rapid convergence to the nominal coverage. Both of these are in Subsec. 3.1. In Subsec. 3.2 we give our third and fourth intervals, which are based on a bootstrapping approach. We will argue that our optimized interval provides the best performance.

### Main Result and Two PI's

One choice for a PI uses asymptotic normality of the point predictor (2.7). Define the statistic \[Z_{pred}=Z_{pred}(M)=\frac{\hat{Y}_{M}^{n+1}-Y^{n+1}}{\sqrt{Var(\hat{Y}_{M}^{n+1}-Y^{n+1})}}. \tag{3.1}\] We have the following result giving our first PI. **Theorem 3.1**: _Suppose \(Y^{n},Y^{n+1}\) come from an exponential family distribution and let \(M\) be a consistent MSP. An asymptotically normal prediction interval for a new outcome derived from a GLM is \(PI(M)\) given by_ \[g^{-1}(X_{M}^{\prime n+1}\hat{\beta}_{M})\pm z_{1-\alpha/2}\sqrt{\left[\frac{d}{d\eta}g^{-1}(\hat{\eta}_{M}^{n+1})\right]^{2}\ X_{M}^{\prime n+1}Var(\hat{\beta}_{M})X_{M}^{n+1}+a(\hat{\phi})_{M}V(\hat{\mu})_{M}}.
\tag{3.2}\] _Proof_ Asymptotically, for any fixed \(m\), \[\hat{\beta}_{m}\sim N\left(\beta_{m},(X_{m}^{\prime}WX_{m})^{-1}\right)\] where \(W=(DVD)^{-1}\). In this notation, \(V=\text{diag}[Var(y_{i})]\) is the \(n\times n\) variance matrix of observations, \(D=\text{diag}[\frac{\partial\eta_{i}}{\partial\mu_{i}}]\) is the \(n\times n\) matrix of derivatives and \(\mu\) is the \(n\times 1\) mean vector. This implies \[\sqrt{n}(\hat{\beta}_{m}-\beta_{m})\overset{D}{\rightarrow}N(0,(X_{m_{T}}^{ \prime}WX_{m_{T}})^{-1}) \tag{3.3}\] where \(0\in\mathbb{R}^{p}\) and \((X_{m}^{\prime}WX_{m})^{-1}\in\mathbb{R}^{|M|\times|M|}\). While this is useful, it is only a step toward the convergence \(\hat{\beta}_{M}\rightarrow\beta_{m_{T}}\). Since \(M\) is consistent, \(m_{T}\in\mathcal{M}\) and hence \(\hat{\delta_{j}}\rightarrow\delta_{j,T}\) with probability \(1\) for all \(j\) which implies \(M\to m_{T}\). Hence, with this assumption we get an analog to (3.3) \[\sqrt{n}(\hat{\beta}_{M}-\beta_{m_{T}})\overset{D}{\rightarrow}N(0,V_{m_{T}} ^{*}) \tag{3.4}\] where \(V_{m_{T}}^{*}=(X_{m_{T}}^{\prime}WX_{m_{T}})^{-1}\). Now we define the set \[S_{n}=\{\omega|\forall j,\hat{\delta}_{j}(\omega)=\delta_{j,T}\}\] and let \(\mathbf{1}_{S_{n}}\) be the indicator that \(\omega\in S_{n}\). Further, let \(\mathbf{1}_{S_{n}^{c}}\) be the indicator that \(\omega\) is in the complement of \(S_{n}\) and write \[\sqrt{n}(X_{M}^{\prime n+1}\hat{\beta}_{M}-X_{m_{T}}^{\prime n+1}\beta_{m_{T}}) =\sqrt{n}(X_{M}^{\prime n+1}\hat{\beta}_{M}-X_{m_{T}}^{\prime n+1}\beta_{m_{T}} )\mathbf{1}_{S_{n}}+\sqrt{n}(X_{M}^{\prime n+1}\hat{\beta}_{M}-X_{m_{T}}^{ \prime n+1}\beta_{m_{T}})\mathbf{1}_{S_{n}^{c}}. \tag{3.5}\] First, note that the first term on the RHS of (3.5) becomes \[\sqrt{n}(X_{M}^{\prime n+1}\hat{\beta}_{M}-X_{m_{T}}^{\prime n+1}\beta_{m_{T}} )\mathbf{1}_{S_{n}}=\sqrt{n}(X_{m_{T}}^{\prime n+1}\hat{\beta}_{m_{T}}-X_{m_{ T}}^{\prime n+1}\beta_{m_{T}})\mathbf{1}_{S_{n}}\] under consistent model selection. This term clearly converges in distribution to a normal. Namely \[\sqrt{n}(X_{m_{T}}^{\prime n+1}\hat{\beta}_{m_{T}}-X_{m_{T}}^{\prime n+1}\beta _{m_{T}})\mathbf{1}_{S_{n}}\overset{D}{\rightarrow}N\left(0,X_{m_{T}}^{\prime n +1}V_{m_{T}}^{*}X_{m_{T}}^{n+1}\right)\] Now observe the second term on the RHS of (3.5) can be written as \[\sqrt{n}(X_{M}^{\prime n+1}\hat{\beta}_{M}-X_{m_{T}}^{\prime n+1} \beta_{m_{T}})\mathbf{1}_{S_{n}^{c}}\] \[=\sqrt{n}(X_{M}^{\prime n+1}\hat{\beta}_{M}-X_{M}^{\prime n+1} \beta_{m_{T}}+X_{M}^{\prime n+1}\beta_{m_{T}}-X_{m_{T}}^{\prime n+1}\beta_{m_{ T}})\mathbf{1}_{S_{n}^{c}}\] \[=\sqrt{n}(X_{M}^{\prime n+1}\hat{\beta}_{M}-X_{M}^{\prime n+1} \beta_{m_{T}})\mathbf{1}_{S_{n}^{c}}+\sqrt{n}(X_{M}^{\prime n+1}\beta_{m_{T}}- X_{m_{T}}^{\prime n+1}\beta_{m_{T}})\mathbf{1}_{S_{n}^{c}}\] \[=X_{M}^{\prime n+1}\sqrt{n}(\hat{\beta}_{M}-\beta_{m_{T}}) \mathbf{1}_{S_{n}^{c}}+\sqrt{n}(X_{M}^{\prime n+1}-X_{m_{T}}^{\prime n+1}) \beta_{m_{T}}\mathbf{1}_{S_{n}^{c}}. \tag{3.6}\] Using (3.4) and the fact that \(X_{M}^{\prime n+1}\) is bounded, we know the first in (3.6) converges in distribution to a normal. Also, we see that in the second term in (3.6), \(\beta_{m_{T}}\) is a bounded constant vector, \((X_{M}^{\prime n+1}-X_{m_{T}}^{\prime n+1})\) is bounded and \(P(S_{n}^{c})\overset{P}{\rightarrow}0\) by assumption. 
Thus we see asymptotically that (3.5) is \[\sqrt{n}(X_{M}^{\prime n+1}\hat{\beta}_{M}-X_{m_{T}}^{\prime n+1} \beta_{m_{T}}) \overset{\cong}{=}N(0,X_{m_{T}}^{\prime n+1}V_{m_{T}}^{*}X_{m_{T}}^{ \prime n+1})\mathbf{1}_{S_{n}}\] \[+N(0,X_{m_{T}}^{\prime n+1}V_{m_{T}}^{*}X_{m_{T}}^{\prime n+1}) \mathbf{1}_{S_{n}^{c}}+\sqrt{n}\mathbf{1}_{S_{n}^{c}}\] Now we need to show that \(\sqrt{n}\mathbf{1}_{S_{n}^{c}}=o_{p}(1)\). First note that the union of events bound gives \[P(S_{n}^{c})\leq\sum_{j=1}^{p}P(|\hat{\delta}_{j}-\delta_{j,T}|>\eta),\] for some \(\eta>0\), so using symmetry in the MSP it is enough to show \[\lim_{n\rightarrow\infty}P(|\hat{\delta}_{j}-\delta_{j,T}|>1/\sqrt{n})=0\] for any \(j\). It is easy to see that \[\lim_{n\rightarrow\infty}P(|\hat{\delta}_{j}-\delta_{j,T}|>1/\sqrt{n})=\lim_ {n\rightarrow\infty}P(|\hat{\delta}_{j}-\delta_{j,T}|=1) \tag{3.7}\] because \(\delta_{j,T}\) and \(\hat{\delta}_{j}\) are either \(1\) or \(0\). Now because we have chosen a consistent MSP we have \[\lim_{n\rightarrow\infty}P(|\hat{\delta}_{j}-\delta_{j,T}|=1)=0.\] Hence, the left hand side of (3.7) is also equal to zero, implying that \(\sqrt{n}\mathbf{1}_{S_{n}^{c}}=o_{p}(1)\). Now, Slutsky's theorem gives us, \[\sqrt{n}(X_{M}^{\prime n+1}\hat{\beta}_{M}-X_{m_{T}}^{\prime n+1}\beta_{m_{T}} )\overset{D}{\rightarrow}N\left(0,X_{m_{T}}^{\prime n+1}V_{m_{T}}^{*}X_{m_{T }}^{n+1}\right).\] Now to get a predictive distribution, we observe that the delta method gives us \[\sqrt{n}\left(g^{-1}(X_{M}^{\prime n+1}\hat{\beta}_{M})-g^{-1}(X_{m_{T}}^{ \prime n+1}\beta_{m_{T}})\right)\overset{D}{\rightarrow}N\left(0,\left[\frac{ d}{d\eta}g^{-1}(\eta_{m_{T}}^{n+1})\right]^{2}X_{m_{T}}^{\prime n+1}V_{m_{T}}^{*}X_{m_{T }}^{n+1}\right) \tag{3.8}\] where \(\eta_{m_{T}}^{n+1}=X_{m_{T}}^{\prime n+1}\beta_{m_{T}}\) Thus, we see that although the GLM estimates are biased we still get convergence in distribution when model selection occurs in the \(\mathcal{M}\)-closed case. The variance of \(\hat{Y}_{M}^{n+1}-Y^{n+1}\) is \[Var(\hat{Y}_{M}^{n+1}-Y^{n+1}) =Var(\hat{Y}_{M}^{n+1})+Var(Y^{n+1})\] \[=Var(g^{-1}(X_{M}^{\prime n+1}\hat{\beta}_{M}))+a(\phi)V(\mu)\] \[=\frac{1}{n}\left[\frac{d}{d\eta}g^{-1}(\eta_{m_{T}}^{n+1}) \right]^{2}X_{m_{T}}^{\prime n+1}V_{m_{T}}^{*}X_{m_{T}}^{n+1}+a(\phi)V(\mu) \tag{3.9}\] due to (3.8). Again, because \(Y^{n+1}\) is a random variable, and not a parameter, we must consider the variance of it as well, which we get assuming it will come from the exponential family distribution as \(Y_{1},\ldots,Y_{n}\). This quantity, however, is impossible to compute because we do not know \(m_{T}\). Hence, we must replace \(m_{T}\) with \(M\), making the variance a random quantity that depends on model selection. Now we use (3.1) as a pivotal quantity to get \[1-\alpha \leq P\left(\left|Z_{pred}\right|<z_{1-\alpha/2}\right)\] \[=P\left(\left|\hat{Y}^{n+1}-Y^{n+1}\right|<z_{1-\alpha/2}\sqrt{ Var(\hat{Y}^{n+1}-Y^{n+1})}\right)\] \[=P\left(\hat{Y}^{n+1}-z_{1-\alpha/2}\sqrt{Var(\hat{Y}^{n+1}-Y^{n+1 })}<Y_{n+1}<\hat{Y}^{n+1}+z_{1-\alpha/2}\sqrt{Var(\hat{Y}^{n+1}-Y^{n+1})} \right). \tag{3.10}\] Hence using (3.9) \[\left[g^{-1}(X_{M}^{\prime n+1}\hat{\beta}_{M})\pm z_{1-\alpha/2}\sqrt{\frac{ 1}{n}\left[\frac{d}{d\eta}\widehat{g^{-1}(\eta_{M}^{n+1})}\right]^{2}\ X_{M}^{ \prime n+1}V_{M}^{*}X_{M}^{n+1}+a(\hat{\phi}_{M})V(\hat{\mu}_{M})}\right]\] is a \(100(1-\alpha)\%\) prediction interval for \(Y^{n+1}\) We now offer an improvement on the PI from Theorem 3.1. 
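Before turning to that improvement, a small numerical sketch may help fix ideas: the Python fragment below evaluates the interval (3.2) for a Poisson GLM with log link, holding the selected model fixed and using the fitted covariance matrix in place of \(Var(\hat{\beta}_{M})\). The simulated data, the hypothetical new covariate vector, and the use of statsmodels are illustrative assumptions of the sketch.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n, d = 300, 3
X = sm.add_constant(rng.normal(size=(n, d)))
beta_true = np.array([0.5, 0.3, -0.2, 0.0])
y = rng.poisson(np.exp(X @ beta_true))

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
x_new = np.array([1.0, 0.2, -0.1, 0.4])   # hypothetical new covariate vector (with intercept)

eta_hat = float(x_new @ fit.params)
mu_hat = np.exp(eta_hat)                   # g^{-1}(eta) for the log link
dginv = mu_hat                             # d g^{-1}/d eta = exp(eta) for the log link
var_eta = float(x_new @ fit.cov_params() @ x_new)   # plays the role of x' Var(beta_hat) x
# Eq. (3.2): predictor variance plus new-observation variance; a(phi) = 1 and V(mu) = mu for Poisson
half_width = norm.ppf(0.975) * np.sqrt(dginv**2 * var_eta + mu_hat)
print("95% AN PI:", (mu_hat - half_width, mu_hat + half_width))
```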
Note that the interval of Theorem 3.1 uses the standard normal quantile to define the predictive interval, but this is not the only choice. Instead, we can adjust the width of the interval to correct for poor coverage. To do this, we write the interval as \[PI(C_{\alpha,M})=g^{-1}(X_{M}^{\prime n+1}\hat{\beta}_{M})\pm\] \[C_{\alpha,M}\sqrt{\left[\frac{d}{d\eta}g^{-1}(\hat{\eta}_{M}^{n+1})\right]^{2}\ X_{M}^{\prime n+1}Var(\hat{\beta}_{M})X_{M}^{n+1}+a(\hat{\phi})V(\hat{\mu})} \tag{3.11}\] where \(C_{\alpha,M}\) is chosen to satisfy \[C_{\alpha,M}=\arg\min_{C}P\left(Y^{n+1}\in PI(M,C)\right) \tag{3.12}\] for all \(C\) such that \(P\left(Y^{n+1}\in PI(M,C)\right)\geq 1-\alpha\). Importantly, this probability also sees the random variable \(M\) and hence inherits the uncertainty associated with \(M\) as well as \(Y^{n+1}\). This is in the same spirit as the PoSI constant in Berk et al (2013). That is, we enlarge \(C_{\alpha,M}\) to account for the uncertainty in \(M\). We can approximate \(C_{\alpha,M}\) using Monte Carlo cross validation. We begin by choosing an interval on \(\mathbb{R}^{+}\) denoted \(\mathcal{C}=[C_{low},C_{high}]\) on which we will perform the line search to estimate \(C_{\alpha,M}\). Next, we randomly split \(\mathcal{D}_{n}\) into \(L\in\mathbb{N}\) test and train sets, \(\mathcal{D}_{train,\ell}\) and \(\mathcal{D}_{test,\ell}\) for \(\ell=1,\ldots,L\). Then for each \(\ell\) we estimate \(\beta\) by \[\hat{\beta}_{\ell}=(X^{\prime}_{train,M,\ell}X_{train,M,\ell})^{-1}X^{\prime}_{train,M,\ell}y_{train,\ell}\] using \(\mathcal{D}_{train,\ell}\), and form the predictor \(\hat{Y}^{test}_{M,\ell}=g^{-1}(X_{test,M,\ell}\hat{\beta}_{\ell})\). Now we form the prediction interval \(PI_{M,\ell}(C)\), namely \(\hat{Y}^{n+1}_{M,\ell}\pm\) \[C\sqrt{\left[\frac{d}{d\eta}g^{-1}(\hat{\eta}^{n+1}_{test,M,\ell})\right]^{2}\ X^{\prime n+1}_{test,M,\ell}Var(\hat{\beta})_{test,M,\ell}X^{n+1}_{test,M,\ell}+a(\hat{\phi})_{M,\ell}V(\hat{\mu})}, \tag{3.13}\] and for each \(C\in\mathcal{C}\), check if \(y_{test,\ell}\in PI_{M,\ell}(C)\). Then we choose the value \(C\) that gives us \(1-\alpha\) coverage for the Monte Carlo samples. More formally, we can approximate \(C_{\alpha,M}\) by \[\hat{C}^{MC}=\arg\min_{C}\frac{1}{L}\sum_{\ell=1}^{L}\left|\frac{1}{\#(\mathcal{D}_{test,\ell})}\sum_{i=1}^{\#(\mathcal{D}_{test,\ell})}I_{y_{test,\ell}\in PI_{M,\ell}(C)}-(1-\alpha)\right| \tag{3.14}\] where \(I_{y_{test,\ell}\in PI_{M,\ell}(C)}\) is the indicator that the test values are in the constructed intervals. The intuition behind using this interval in place of the PI in Theorem 3.1 is that, in finite samples, the difference between \(z_{1-\alpha/2}\) and \(\hat{C}^{MC}\) can be interpreted as the added variability due to model uncertainty.

### Two Bootstrap Based PI's

In the frequentist setting, perhaps the most natural way to obtain a prediction interval that takes into account the uncertainty of both model selection and the uncertainty associated with the distribution of the new outcome is to make use of the bootstrap. Accordingly, to form our first bootstrap PI, we use the bootstrap to estimate the distribution of \[\hat{\mu}_{M}=E(Y^{n+1}|X^{n+1}_{M})=g^{-1}(X^{n+1}_{M}\hat{\beta}_{M}), \tag{3.15}\] and \(a(\hat{\phi})_{M}\). Then for each bootstrapped mean and dispersion function, we generate a new observation from the distribution of \(Y^{n+1}|X^{n+1},\mu,\phi\), i.e. \(\mathcal{G}\).
Let \(\hat{p}(\hat{\mu})\) denote the bootstrapped density of (3.15) and \(\hat{p}(a(\hat{\phi}))\) be the bootstrapped density of \(a(\hat{\phi})_{M}\). Then \(\hat{p}(Y^{n+1})\) is the resulting estimated density of \(Y^{n+1}\). The procedure is as follows:

* obtain \(B\) bootstrap replications of \(\hat{\mu}_{M}\), denoted \(\mu^{*}_{1},\ldots,\mu^{*}_{B}\),
* obtain \(B\) bootstrap replications of \(a(\hat{\phi})_{M}\), denoted \(a(\phi)_{1}^{*},\ldots,a(\phi)_{B}^{*}\),
* generate \(y_{1}^{*}(\mu_{1}^{*},a(\phi)_{1}^{*}),\ldots,y_{B}^{*}(\mu_{B}^{*},a(\phi)_{B}^{*})\), from \(\mathcal{G}\).

The sample \(y_{1}^{*},\ldots,y_{B}^{*}\) can be used to estimate an approximate marginal predictive distribution for \(Y^{n+1}\). To obtain the PI, we use the appropriate percentile interval from this distribution. That is, to obtain a \(100(1-\alpha)\%\) PI we use the interval \[[q_{\alpha/2}^{*},q_{1-\alpha/2}^{*}] \tag{3.16}\] where \(q_{\alpha}^{*}\) is the \(\alpha\) quantile from \(\hat{p}(Y^{n+1})\). The use of \(\hat{p}(\hat{\mu})\) and \(\hat{p}(a(\hat{\phi}))\) to obtain the estimated predictive distribution \(\hat{p}(Y^{n+1})\) allows \(\hat{p}(Y^{n+1})\) to inherit the variability from \(\hat{p}(\hat{\mu})\), \(\hat{p}(a(\hat{\phi}))\) and the variability that is already associated with the known parametric distribution \(\mathcal{G}\). Hence, the interval (3.16) is typically widened due to the uncertainty of the model selection procedure as well as the uncertainty of the distribution of \(Y^{n+1}\). Now, in the GLM setting, coverage for the PI in (3.16) should be closer to the \(1-\alpha\) nominal coverage than the PI resulting from ignoring the model uncertainty. Note that as \(n\rightarrow\infty\) the variability due to model uncertainty will go to 0 and this interval will converge to the standard PI. Bootstrap PI's for the Gaussian case are studied in a fairly narrow (small \(d\) and moderate \(n\)) setting in Hong et al (2018). These authors suggest that in this setting the bootstrap distribution fails to assess the uncertainty of model selection accurately. We explore different simulation settings to evaluate the performance of bootstrap intervals in Sec. 4. Our second bootstrapped PI is formed as follows. Recall the interval in Theorem 3.1 is random because it depends on \(M\). It is directly usable for predictions, but we must use \(\hat{M}\) in place of \(M\) to get a confidence statement. Nevertheless, we provide an approximate interval by "smoothing" over \(M\), which accounts for the uncertainty of \(M\) in both the center and width of the interval. This is similar to the approach used in Efron (2014) for estimation. The method we propose is to use \(\hat{p}(\hat{\mu})\), the bootstrap distribution of \(\hat{\mu}_{M}=g^{-1}(\hat{\eta}_{M})\) as described earlier in this subsection, to obtain an approximation for the predictor and its variance that accounts for model selection uncertainty. Specifically, we use \[\tilde{\mu}=\frac{1}{B}\sum_{b=1}^{B}\mu_{b}^{*}\] for the point predictor.
We approximate the variance of \(\hat{\mu}_{M}\) with \[\begin{split}Var(\mu^{*})&=\widehat{Var}(\hat{\mu}_{M})\\ &=\frac{1}{B-1}\sum_{b=1}^{B}\left(\mu_{b}^{*}-\tilde{\mu}\right)^{2}\\ &\approx\left[\frac{d}{d\eta}g^{-1}(\hat{\eta}_{M}^{n+1})\right]^{2}\ X_{M}^{\prime n+1}Var(\hat{\beta}_{M})X_{M}^{n+1},\end{split}\] and the estimated variance of the predictive distribution is given by \[\begin{split}Var(Y^{*})&=\widehat{Var}(Y^{n+1})\\ &=\frac{1}{B-1}\sum_{b=1}^{B}\left(y^{*}(\mu_{b}^{*})-\bar{y}^{*}\right)^{2}\\ &\approx a(\hat{\phi}_{M})V(\hat{\mu}_{M})\end{split}\] where \(\bar{y}^{*}=\frac{1}{B-1}\sum_{b=1}^{B}y^{*}(\mu_{b}^{*})\). We treat \(Y^{*}\) as a random variable approximating \(Y^{n+1}\). Note also that we are required to estimate \(a(\hat{\phi}_{M})V(\hat{\mu}_{M})\) in (3.2), but this again is a random quantity, so using bootstrapping to account for the uncertainty in \(M\) is necessary for this term also. Now as an ad-hoc fix, we rewrite (3.2) to give our second bootstrapped PI \[PI(M)=\tilde{\mu}\pm z_{1-\alpha/2}\sqrt{Var(\mu^{*})+Var(Y^{*})}. \tag{3.17}\]

## 4 Simulation Results for GLM's

We give two contexts in which the PI's we have defined in (3.2), (3.11), (3.16), (3.17) can be readily found. Respectively, these intervals are labeled the asymptotic normal PI (AN), the optimized AN \(\hat{C}^{MC}\), the bootstrapped (boot) PI, and the 'smoothed' asymptotic normal (S-AN) PI. In addition to the intervals we have derived, we give the BMA PI's as well as the 'Naive' PI's obtained by applying the inverse link to a confidence interval for the mean on the linear predictor scale; this is often done by practitioners as a pragmatic solution. In Sec. 4.1 we present these intervals for the standard Gaussian case and in Sec. 4.2, we present the prediction intervals for binomial regression, i.e., a more general case of logistic regression. For both cases we use 500 new observations from their respective distribution and calculate the estimated predictive coverage using \[\widehat{coverage}=\frac{1}{500}\sum_{i=1}^{500}I_{y_{i}^{new}\in PI_{i}(X_{i}^{new},X^{n},y^{n})}, \tag{4.1}\] where each \(PI_{i}(X_{i}^{new},X^{n},y^{n})\) depends on the data and the new observed explanatory variables. For the PIs that require bootstrapping we resample the data 500 times to obtain the bootstrapped distributions.

### Gaussian Linear Models

In the standard case, we assume \(Y\sim N(\mu,\sigma^{2})\), and the log likelihood is \[L(\mu_{i},\sigma^{2}|y_{i})=\frac{y_{i}\mu_{i}-(\mu_{i}^{2}/2)}{\sigma^{2}}-\left(\frac{y_{i}^{2}}{2\sigma^{2}}+\log(\sigma\sqrt{2\pi})\right),\] the canonical parameter is \(\theta_{i}=\mu_{i}\), \(b(\theta_{i})=\mu_{i}^{2}/2\), \(a(\phi)=\sigma^{2}\), and \(V(\mu_{i})=1\). The linear predictor uses the identity link function and the point predictor is \[\hat{Y}_{M}^{n+1}=X_{M}^{\prime n+1}\hat{\beta}_{M}.\] The asymptotic normal PI from (3.2) for \(Y^{n+1}\) is \[PI(M)=\left[\hat{Y}_{M}^{n+1}\pm z_{1-\alpha/2}\hat{\sigma}_{M}\sqrt{X_{M}^{\prime n+1}(X_{M}^{\prime}X_{M})^{-1}X_{M}^{n+1}+1}\right].\] Our simulation results for Gaussian data include coverage and width estimates for the normal PI in (3.2), the PI (3.13) using \(\hat{C}^{MC}\), the bootstrap PI in (3.16), and the 'smoothed' normal interval (3.17). We do not include the Naive interval because in the Gaussian case it is equivalent to AN. For the interval using \(\hat{C}^{MC}\), we do a grid search for the value of \(C_{\alpha,M}\) on the interval from 1.95 to 5 in increments of 0.05. The simulation setup is as follows.
First, we consider two model selection procedures, BIC and AIC. Both methods are implemented in R using the step() function by setting the respective penalties for BIC and AIC. We also use BMA implemented with the BAS package in R. We consider various choices for \(n\) (30,50,100,200) and choose \(p=25\). We randomly generate values for \(\sigma\) and \(\beta\) once, and fix those values throughout the simulations. Accordingly, let \[\beta=(\beta_{1},\ldots,\beta_{25})^{\prime}=(6.43,4.39,4.26,4.11,0,\ldots,0)^ {\prime}\] and \(\sigma=0.93\). We simulate \(n\) observations for the design matrix \(X\) according to \[X\sim MVN_{p}(0,I_{p}),\] and then draw and \(n\times 1\) vector of observations from \(Y\sim N(X\beta,\sigma^{2}I_{n})\). We then calculate estimated coverage using (4.1). Ideally, we want coverage close to 0.95. When choosing between competing PI's with good coverage, we prefer the one with the narrowest width. The results are seen in Table 1. Note that the differences between using AIC and BIC are negligible, so we describe the performance of each PI only once (rather than once for each MSP). It is seen in Table 1 that AN has low coverage for \(n=50\), but gets close to the nominal coverage for the larger sample sizes. For \(n=50\), both S-AN and boot give at least the nominal coverage and arguably reasonable width of PI's to be useful. Here, \(\hat{C}^{MC}\) gives close to the stated 95% coverage and is noticeably narrower than both S-AN and boot, so it is the preferred PI. When \(n=100\) and 200, we observe all of the 5 PIs are roughly equal in terms of coverage and width. Since AN is the easiest to implement as it does not require any bootstrapping or cross validation, we recommend using it with relatively large \(n\). For intermediate \(n\) we recommend using \(\hat{C}^{MC}\) as it gives appropriate coverage and is narrower than the other PIs. We give the optimal choices for \(\hat{C}^{MC}\) for each sample size in Table 2. We observe that as sample size increases, \(\hat{C}^{MC}\) decreases as expected. This reflects the fact that as we gather more data, the uncertainty in model selection also decreases. ### Binomial Regression Suppose we have \(n\) independent but not identically distributed random variables following \(Y_{i}\sim Bin(r_{i},p_{i})\) so \(E(Y_{i})=r_{i}p_{i}\). We write \(W=\frac{Y_{i}}{r_{i}}\) as our response to model the proportion of success, and then we convert back to number of successes to form our predictive interval. 
Now we have \(E(W)=p_{i}\) and \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(n\) & MSP & Interval & Coverage & Avg.Width (SE) \\ \hline 50 & AIC & AN & 0.88 & 3.63 (0.17) \\ & & S-AN & 1 & 14.2 (3.81) \\ & & boot & 0.99 & 8.8 (1.87) \\ & & \(\hat{C}^{MC}\) & 0.98 & 5.5 (0.35) \\ & BIC & AN & 0.88 & 3.63 (0.18) \\ & & S-AN & 1 & 13.79 (3.80) \\ & & boot & 0.99 & 8.4 (1.78) \\ & & \(\hat{C}^{MC}\) & 0.98 & 5.37 (0.26) \\ & BMA & & 0.89 & 3.98 (0.11) \\ \hline 100 & AIC & AN & 0.91 & 3.63 (0.08) \\ & & S-AN & 0.95 & 4.43 (0.41) \\ & & boot & 0.92 & 3.81 (0.26) \\ & & \(\hat{C}^{MC}\) & 0.93 & 4.08 (0.09) \\ & BIC & AN & 0.92 & 3.70 (0.06) \\ & & S-AN & 0.94 & 4.25 (0.33) \\ & & boot & 0.92 & 3.74 (0.22) \\ & & \(\hat{C}^{MC}\) & 0.94 & 3.97 (0.05) \\ & BMA & & 0.93 & 3.73 (0.05) \\ \hline 200 & AIC & AN & 0.92 & 3.70 (0.05) \\ & & S-AN & 0.93 & 3.95 (0.16) \\ & & boot & 0.91 & 3.64 (0.14) \\ & & \(\hat{C}^{MC}\) & 0.93 & 3.96 (0.06) \\ & BIC & AN & 0.94 & 3.75 (0.03) \\ & & S-AN & 0.94 & 3.91 (0.14) \\ & & boot & 0.92 & 3.65 (0.12) \\ & & \(\hat{C}^{MC}\) & 0.94 & 3.92 (0.03) \\ & BMA & & 0.92 & 3.74 (0.03) \\ \hline \end{tabular} \end{table} Table 1: Simulation results for Gaussian data with \(p=25\) and \(p_{0}=4\). \begin{table} \begin{tabular}{|c|c|c|} \hline n & MSP & \(C^{MC}\) \\ \hline 50 & AIC & 2.95 \\ & BIC & 2.90 \\ \hline 100 & AIC & 2.20 \\ & BIC & 2.10 \\ \hline 200 & AIC & 2.10 \\ & BIC & 2.05 \\ \hline \end{tabular} \end{table} Table 2: Gaussian cross validation results for the optimal choice for \(\hat{C}^{MC}\). the log likelihood for a given \(i\) is given by \[L(p_{i}|w_{i})=\frac{w_{i}\log\left(\frac{p_{i}}{1-p_{i}}\right)+\log(1-p_{i})}{ \frac{1}{r_{i}}}+\log\binom{r_{i}}{nw_{i}},\] which reveals the canonical parameter \[\theta_{i}=logit(p_{i})=\log\left(\frac{p_{i}}{1-p_{i}}\right).\] We also see that \(a(\phi)=\frac{1}{r_{i}}\), \(b(\theta_{i})=-\log(1+e^{\theta_{i}})=-\log(1-p_{i})\), and thus \[V(p_{i})=\frac{\partial^{2}b(\theta_{i})}{\partial p_{i}^{2}}=\frac{p_{i}(1-p _{i})}{r_{i}}.\] Thus the linear predictor is defined by the logit link as \[E\left(\frac{Y_{i}}{r_{i}}\right)=g(p_{i})=\log\left(\frac{p_{i}}{1-p_{i}} \right)=X_{i}^{\prime}\beta\] and the inverse link function, which gives the probability of success, is given by \[p_{i}=g^{-1}(X_{i}^{\prime}\beta)=\frac{1}{1+e^{-X_{i}^{\prime}\beta}}.\] Of course, we do not know \(p_{i}\), so we estimate \(p_{i}\) by \[\hat{p}_{i}=g^{-1}(X_{i}^{\prime}\hat{\beta})=\frac{1}{1+e^{-X_{i}^{\prime} \hat{\beta}}}.\] Given \(n\) observations \(Y_{1},\ldots,Y_{n}\), our goal is to predict the total number of successes \(Y_{n+1}\) in \(r_{n+1}\) trials while accounting for model selection. We denote the predicted probability of success \(\hat{p}_{M}^{n+1}\) and its value is given by \[\hat{p}_{M}^{n+1}=\frac{1}{1+e^{-X_{M}^{\prime n+1}\hat{\beta}_{M}}}.\] Recalling that \[E(Y_{n+1})=r_{n+1}\cdot g^{-1}(X^{\prime n+1}\beta)=r_{n+1}\cdot p_{n+1},\] the form of the post-model selection AN PI for a binomial random variable is \[PI(M)=r_{n+1}\cdot\hat{p}_{M}^{n+1}\pm\] \[z_{1-\alpha/2}\cdot r_{n+1}\sqrt{\frac{e^{-2\hat{\eta}_{M}^{n+1}}}{ \left(1+e^{-\hat{\eta}_{M}^{n+1}}\right)^{4}}X_{M}^{\prime n+1}Var(\hat{\beta}_{M })X_{M}^{n+1}+\frac{1}{r^{n+1}}\hat{p}_{M}\left(1-\hat{p}_{M}\right)} \tag{4.2}\] where the factor \(r_{n+1}\) in the width of the intervals comes from the the distribution in (3.8) being multiplied by this factor. 
The interval in (4.2) gives a prediction interval for total number of successes in \(r_{n+1}\) trials. In the setting described above, our simulations are as follows. Let \(X\sim MVN_{p}(0,I_{p})\) and \[\beta=(\beta_{1},\ldots,\beta_{25})^{\prime}=(0.252,0.171,-0.268,0.09,0,\ldots 0 )^{\prime}.\] Now we calculate the estimated coverage using (4.1). Again, we want coverage close to 0.95 and narrow width. The simulated results are given in Table 3. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(n\) & MSP & Interval & Coverage & Avg Width (SE) \\ \hline 50 & AIC & Naive &.51 & 4.9 (1.35) \\ & & AN & 0.83 & 9.51 (0.93) \\ & & S-AN & 1 & 28.31 (2.22) \\ & & boot & 1 & 20.82 (3.35) \\ & & \(\hat{C}^{MC}\) & 0.97 & 15.55 (0.78) \\ & BIC & Naive &.48 & 4.03 (1.08) \\ & & AN & 0.90 & 9.56 (0.76) \\ & & S-AN & 1 & 23.36 (3.93) \\ & & boot & 1 & 17.81 (3.06) \\ & & \(\hat{C}^{MC}\) & 0.98 & 14.35 (0.74) \\ \hline 100 & AIC & Naive &.46 & 3.29 (0.86) \\ & & AN & 0.94 & 9.42 (0.87) \\ & & S-AN & 0.99 & 14.11(2.22) \\ & & boot & 0.99 & 11.99(1.66) \\ & & \(\hat{C}^{MC}\) & 0.99 & 13.01 (0.61) \\ & BIC & Naive &.43 & 2.77 (0.69) \\ & & AN & 0.95 & 9.38 (0.79) \\ & & S-AN & 1 & 13.38 (2.00) \\ & & boot & 0.99 & 11.57 (1.47) \\ & & \(\hat{C}^{MC}\) & 0.99 & 12.67 (0.54) \\ \hline 200 & AIC & Naive &.37 & 2.43 (0.90) \\ & & AN & 0.94 & 8.91 (1.60) \\ & & S-AN & 0.97 & 10.08(2.10) \\ & & boot & 0.97 & 9.37 (1.96) \\ & & \(\hat{C}^{MC}\) & 0.94 & 9.15 (0.87) \\ & BIC & Naive &.37 & 2.18 (0.79) \\ & & AN & 0.94 & 8.93 (1.56) \\ & & S-AN & 0.97 & 9.75 (1.92) \\ & & boot & 0.98 & 9.30 (1.82) \\ & & \(\hat{C}^{MC}\) & 0.95 & 9.01 (0.83) \\ \hline \end{tabular} \end{table} Table 3: Simulation results for binomial data with \(r=30\). For \(n=50\), the Naive interval has very poor coverage for both AIC and BIC. Using AIC and the AN interval results in undercoverage, but it is much better than the Naive PI. This is also true using BIC as the MSP. Both S-AN and boot are conservative, give coverage larger than the stated coverage. The width of both S-AN and boot make the intervals fairly uninformative despite having better coverage than Naive and AN. Finally, we observe \(\hat{C}^{MC}\) performs noticeably better than the other PI's. This suggests that the cross validation step to widen the asymptotic normal PI is useful. Looking at the \(n=100\) and case, we see the Naive interval is worse than the smaller sample case. AN gives very good coverage and the smallest width among all of the PIs for both AIC and BIC. The other 3 PIs, S-AN, boot, and \(\hat{C}^{MC}\) give close stated coverage but they are slightly wider than the AN interval. When \(n=200\) we see Naive is by far the worst among the 5 PIs, but the other 4 are roughly the same with AN and \(\hat{C}^{MC}\) having perhaps slightly better coverage and narrower PIs than S-AN and boot. These results confirm two main points. First, the AN PI achieves the stated 95% coverage as given in Theorem 3.1 when the sample size is large enough. Second \(\hat{C}^{MC}\) always gives appropriate coverage, and appears to reduce to AN as \(n\) increases. This leads us to recommend using \(\hat{C}^{MC}\), especially for intermediate sample sizes, and use AN for large \(n\). Again, we list the optimal cross validation constants in Table 4. As in the Gaussian case, we see that \(\hat{C}^{MC}\) decreases as the sample size increases. However, in the Binomial case, \(\hat{C}^{MC}\) is noticeably larger than the Gaussian case. 
This may be due to the fact that we are using a normal PI for data that is not normal.

## 5 Discussion

Our main contribution is the PI in Theorem 3.1, and the small sample correction using \(\hat{C}^{MC}\) given in (3.13). Much of the literature on GLM prediction has focused on _confidence_ intervals around predictors, which we refer to as the 'Naive' PI, rather than true prediction intervals. That is, it is common for analysts to apply the inverse link function to the endpoints of a confidence interval in the linear predictor scale. This approach does not account for uncertainty appropriately because it uses the variability on the linear predictor scale rather than the data scale. Here we have presented prediction intervals that are derived on the model-scale rather than the linear-predictor scale. \begin{table} \begin{tabular}{|c|c|c|} \hline n & MSP & \(\hat{C}^{MC}\) \\ \hline 50 & AIC & 5.00 \\ & BIC & 4.40 \\ \hline 100 & AIC & 4.10 \\ & BIC & 3.90 \\ \hline 200 & AIC & 3.00 \\ & BIC & 2.85 \\ \hline \end{tabular} \end{table} Table 4: Binomial cross validation results for the optimal choice for \(\hat{C}^{MC}\). We have presented several prediction intervals that consider model uncertainty. The PI derived in Theorem 3.1 accounts for model uncertainty via the consistency of the MSP. This PI severely underperforms in terms of predictive coverage in small sample size, e.g. \(n\approx p\), cases but as \(n\rightarrow\infty\) the predictive coverage is roughly the nominal \(1-\alpha\) coverage. The boot and S-AN PIs tend to be too wide to be useful, suggesting far too much model uncertainty. These two PIs overcorrect the width of the intervals for the amount of the uncertainty in the model selection considered here. Again as \(n\) increases, both boot and S-AN become usable (due to the MSP choosing the correct model). At this point, the bootstrapping is not necessary, however. Since AN performs well with large samples, there is no need to bootstrap; we can directly use AN. Taken together, our results provide valid post model selection PIs for GLM's for moderate and large samples.

## Appendix 5.A Extension to GLMM's

The approach described in Sec. 2.2 to obtain valid prediction intervals after model selection extends naturally to the class of generalized linear mixed models. Here we assume the random variable \(Y|X,\beta,Z,U\sim\mathcal{G}\) where \(\mathcal{G}\) is a distribution in the exponential family. We write the linear predictor as \[\eta=g(E(Y|U))=g(\mu|U)=X\beta+ZU \tag{5.1}\] where \(\beta\) is the vector of fixed effects and \(U\) is a random effect such that \(U\sim N(0,\Sigma_{U})\); \(X\) and \(Z\) are their respective design matrices. The mean function is \[\mu=E(Y|U)=g^{-1}(\eta)=g^{-1}(X\beta+ZU)\] and the variance is \[Var(Y|U)=V_{\mu}^{1/2}AV_{\mu}^{1/2}\] where \(V_{\mu}^{1/2}=\text{diag}\left[\sqrt{V(\mu)}\right]\) and \(A=\text{diag}\left[1/a(\phi)\right]\). As with GLM's, model selection is often performed when forming predictors. In the GLMM setting model selection can be done on both \(X\) and \(Z\); however, here we focus on model selection on the design matrix \(X\). Analogous to the GLM case, we state an asymptotic normal predictive interval for GLMM's that is derived in the same way as the GLM interval. The only difference is the random effects part of the linear predictor. However, recall the random effects have expectation 0, so the location of the asymptotic distribution does not change. The variance, on the other hand, does increase.
This is seen in (5.2) as the width has an extra term for the variance of the random effects: We get that \(PI(M,C_{\alpha})=g^{-1}(\hat{\eta}_{M}^{n+1})\pm\) \[C_{\alpha}\sqrt{\left[\frac{d}{d\eta}g^{-1}(\hat{\eta}_{M}^{n+1})\right]^{2}\left(X_{M}^{\prime n+1}Var(\hat{\beta}_{M})X_{M}^{n+1}+Z^{\prime n+1}Var(\hat{u})Z^{n+1}\right)+a(\hat{\phi})V(\hat{\mu})}. \tag{5.2}\]

## Appendix 5.B GLMM bootstrap intervals

As with the GLM AN interval, we can approximate the variance of \(g^{-1}(\hat{\eta}_{M}^{n+1})\) using bootstrapping and replace (5.2) with \[PI(M,C_{\alpha})=g^{-1}(\hat{\eta}_{M}^{n+1})\pm z_{1-\alpha/2}\sqrt{\hat{Var}(g^{-1}(\hat{\eta}_{M}^{n+1}))^{boot}+a(\hat{\phi})V(\hat{\mu})} \tag{5.3}\] where \(\hat{Var}(g^{-1}(\hat{\eta}_{M}^{n+1}))^{boot}\) is simply the variance of the bootstrapped distribution of \(g^{-1}(\hat{\eta}_{M}^{n+1})\). We can also use the bootstrap approach, in the same way as with the GLM, to obtain a bootstrap distribution for a new outcome. In the GLMM setting, we bootstrap the expected value of the distribution for a new outcome, \[\hat{\mu}_{M}=g^{-1}(X_{M}^{\prime n+1}\hat{\beta}+Z^{\prime n+1}\hat{u}).\] We proceed as follows:

* obtain \(B\) bootstrap replications of \(\hat{\mu}_{M}\), denoted \(\mu_{1}^{*},\ldots,\mu_{B}^{*}\),
* obtain \(B\) bootstrap replications of \(a(\hat{\phi})_{M}\), denoted \(a(\phi)_{1}^{*},\ldots,a(\phi)_{B}^{*}\),
* generate \(y_{1}^{*}(\mu_{1}^{*},a(\phi)_{1}^{*}),\ldots,y_{B}^{*}(\mu_{B}^{*},a(\phi)_{B}^{*})\), from \(\mathcal{G}\).

The sample \(y_{1}^{*},\ldots,y_{B}^{*}\) is used to obtain the predictive interval by extracting the appropriate percentile interval from this distribution. Thus the PI is \[[q_{\alpha/2}^{*},q_{1-\alpha/2}^{*}], \tag{5.4}\] the \(\alpha/2\) and \(1-\alpha/2\) quantiles from \(y_{1}^{*},\ldots,y_{B}^{*}\), which inherits the uncertainty of \(M\), \(\hat{\beta}\) and \(\hat{u}\). These intervals are implementable assuming we already have estimates \(\hat{\beta}\) and \(\hat{u}\). Regardless of which method is used to form predictors, we can in principle use either of the intervals (5.3) or (5.4) because predictors and mean and variance functions, as well as the uncertainty associated with the MSP, can be obtained through the bootstrap procedure. Thus, a closed form solution of parameter estimates is not necessary to obtain valid PI's. Also, we can, at least theoretically, still use both of the intervals presented to account for the uncertainty of model selection in the random effects design matrix.

## Appendix 5.C Computational issues for GLMM's

The theoretical and bootstrap based intervals we have proposed to capture the uncertainty of model selection are not implementable, at least yet. This is due to convergence issues with implementing GLMM's. Estimation in GLMM's requires integrating out the random effects, and these integrals do not have closed form solutions. Thus, numerical integration is necessary, making the integrals computationally hard. In practice, there is no single best approach so one tries many approaches until the algorithm converges. Once convergence is achieved, classical approaches to assess model fit are used. In the bootstrapping approach, we require estimation over many repeated samples of the data and this would require convergence of the estimates in the GLMM over each resample. The estimates require numerical integration for each resample of the data which requires a person trying several algorithms until one works.
We attempted this, but we were unsuccessful because convergence in each resample using a fixed numerical integration method is not feasible. Acknowledgments. The first author acknowledges funding from the University of Nebraska Program of Excellence in Computational Science.
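To make the first bootstrap PI of Subsec. 3.2 concrete, the following Python sketch resamples the data, redoes an exhaustive AIC selection on every resample, and reads off the percentile interval (3.16) for a Poisson response. The Poisson family, the simulated data, and the use of statsmodels are illustrative assumptions of the sketch; the paper's own simulations are carried out in R.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, d, B = 150, 4, 200
X = rng.normal(size=(n, d))
y = rng.poisson(np.exp(0.5 * X[:, 0] - 0.4 * X[:, 1]))
x_new = rng.normal(size=d)                       # hypothetical new covariate vector

def aic_best_subset(Xb, yb):
    """Exhaustive AIC search; returns the selected subset and its fitted GLM."""
    best = None
    for k in range(d + 1):
        for s in itertools.combinations(range(d), k):
            Xm = sm.add_constant(Xb[:, list(s)], has_constant='add') if s else np.ones((len(yb), 1))
            res = sm.GLM(yb, Xm, family=sm.families.Poisson()).fit()
            if best is None or res.aic < best[0]:
                best = (res.aic, s, res)
    return best[1], best[2]

y_star = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)             # resample of (y, X), so M is re-selected each time
    s, res = aic_best_subset(X[idx], y[idx])
    eta = res.params @ np.concatenate(([1.0], x_new[list(s)])) if s else res.params[0]
    mu_star = np.exp(eta)                        # bootstrap replication of mu_hat_M
    y_star[b] = rng.poisson(mu_star)             # new observation drawn from G (Poisson, a(phi) = 1)

print("95% bootstrap PI:", np.percentile(y_star, [2.5, 97.5]))
```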
2303.10957
Adaptive Thiele interpolation
The current implementation of Thiele rational interpolation in Maple (the ThieleInterpolation routine) breaks down when the points are not well-ordered. In this article, it is shown how this breakdown can be avoided by ordering the interpolation points in an adaptive way.
Oliver Salazar Celis
2023-03-20T09:30:39Z
http://arxiv.org/abs/2303.10957v1
# Adaptive Thiele interpolation

###### Abstract

The current implementation of Thiele rational interpolation in Maple (the ThieleInterpolation routine) breaks down when the points are not well-ordered. In this article, it is shown how this breakdown can be avoided by ordering the interpolation points in an adaptive way.

## 1 Introduction

Maple provides univariate rational interpolation functionality using the ThieleInterpolation routine from the CurveFitting toolbox. Given \(n+1\) distinct complex points \(x_{0},x_{1},\ldots,x_{n}\) together with finite function values \(f(x_{i})=f_{i}\) (\(i=0,\ldots,n\)), this routine produces a continued fraction of the form \[C_{n}(x)=\varphi_{0}[x_{0}]+\cfrac{x-x_{0}}{\varphi_{1}[x_{0},x_{1}]+\cfrac{x-x_{1}}{\varphi_{2}[x_{0},x_{1},x_{2}]+\cfrac{x-x_{2}}{\ddots+\cfrac{x-x_{n-1}}{\varphi_{n}[x_{0},\ldots,x_{n}]}}}}\,, \tag{1}\] where the inverse differences \(\varphi_{i}[x_{0},\ldots,x_{i}]\) (\(i=0,\ldots,n\)) are obtained from the recursion \[\begin{cases}\varphi_{0}[x_{k}]=f_{k}&k\geq 0\\ \varphi_{i+1}[x_{0},\ldots,x_{i},x_{k}]=\frac{x_{k}-x_{i}}{\varphi_{i}[x_{0},\ldots,x_{i-1},x_{k}]-\varphi_{i}[x_{0},\ldots,x_{i}]}&k>i\end{cases}. \tag{2}\] It is well-known [1] that the construction of the inverse differences depends on the order in which the points \(x_{i}\) are taken to construct (1). In some cases, one indeed encounters \(\varphi_{i}[x_{0},\ldots,x_{i}]=\infty\) due to division by zero in the recursion (2). This does not mean that the problem itself does not have a solution; a simple reordering of the points can typically resolve this. The documentation of the ThieleInterpolation routine warns the user about this situation, but rather than a reordering suggests a perturbation of the points. When division by zero is encountered, a message as below is shown:

```
Error, (in CurveFitting:-ThieleInterpolation) denominator of zero was produced; try perturbing the data points
```

It is not convenient to leave it up to the user to add perturbations or to reorder the points manually by trial and error. In this article we show that an ordering exists that ensures the existence of the interpolating Thiele continued fraction (1), meaning \(\varphi_{i}[x_{0},\ldots,x_{i}]\neq\infty\).

## 2 Adaptive selection

The documentation of the ThieleInterpolation routine already hints at cases to avoid: _For example, a division-by-zero error is produced when two successive points have the same dependent value or when three successive points are collinear_. We can formalize this observation in the following Theorem. **Theorem 1**.: _If the distinct points \((x_{i})_{0\leq i\leq n}\) are ordered such that every two consecutive convergents of the continued fraction (1) are different, then \(\varphi_{i}[x_{0},\ldots,x_{i}]\neq\infty\)._ For a proof, we refer to [5, Theorem 3.2] where it is shown that the inverse differences can essentially be interpreted as the ratio of two (linearized) residuals of successive convergents up to some non-zero factor. In light of Theorem 1, the division-by-zero error is avoided if the continued fraction (1) can be constructed in such a way that every two consecutive convergents are different. One way to achieve this is to choose the next point \(x_{i+1}\) in the construction of \(C_{i+1}(x)\) with \(0<i<n\) such that the current convergent \(C_{i}(x)\) does not interpolate in \(x_{i+1}\), i.e. \(C_{i}(x_{i+1})\neq f_{i+1}\).
Hence, given \(C_{i}(x)\) and \((x_{0},\ldots,x_{i})\), we propose to reorder the remaining points \((x_{i+1},\ldots,x_{n})\) and determine \(C_{i+1}(x)\) such that \(|C_{i}(x_{i+1})-f(x_{i+1})|\) is maximal. In this way, \(C_{n}(x)\) is ultimate constructed in an adaptive greedy way by choosing in each step the point where for \(0<i<n\) the error between \(C_{i}(x)\) and \(f(x)\) is maximal. The next theorem shows that this strategy indeed succeeds in the avoidance of infinite inverse differences occurring in (1). **Theorem 2**.: _If the distinct points \((x_{i})_{0\leq i\leq n}\) are ordered using a strategy where in each step \(C_{j}(x_{j+1})\neq f(x_{j+1})\) with \(0\leq j<n\) then every two consecutive convergents of the continued fraction (1) are different and \(\varphi_{i}[x_{0},\ldots,x_{i}]\neq\infty\)._ Proof.: The proof is by induction. Let \(x_{0}\) be chosen with \(f(x_{i})\neq\infty\) and take \(x_{1}\) such that \(f(x_{0})\neq f(x_{1})\). Then from (2) we have \(\varphi_{0}[x_{0}]=f(x_{0})\neq\infty\) and \(\varphi_{1}[x_{0},x_{1}]=(x_{1}-x_{0})/\left(f(x_{1})-f(x_{0})\right)\neq\infty\). Clearly, also \(C_{0}(x)=\varphi_{0}[x_{0}]\not\equiv C_{1}(x)=\varphi_{0}[x_{0}]+(x-x_{0})/ \varphi_{1}[x_{0},x_{1}]\). Assume that the hypothesis holds up to \(0<j<n\). We show further below that this necessarily means that \(\varphi_{j+1}[x_{0},\ldots,x_{j+1}]\neq\infty\), but momentarily we take it for granted. Then assume by contradiction that for some \(0<j<n\), \[C_{j}(x)=\frac{A_{j}(x)}{B_{j}(x)}\,\equiv\,C_{j+1}(x)=\frac{A_{j+1}(x)}{B_{j+ 1}(x)}\] where for \(0\leq i\leq n\) the numerators \(A_{i}(x)\) and denominators \(B_{i}(x)\) satisfy the recurrence relation [6] \[\begin{pmatrix}A_{i}(x)\\ B_{i}(x)\end{pmatrix}=\begin{pmatrix}\varphi_{i}[x_{0},\ldots,x_{i}]A_{i-1}(x) +(x-x_{i-1})A_{i-2}(x)\\ \varphi_{i}[x_{0},\ldots,x_{i}]B_{i-1}(x)+(x-x_{i-1})B_{i-2}(x)\end{pmatrix}, \tag{3}\] with \[\begin{cases}A_{-2}(x)=0,&B_{-2}(x)=1\\ A_{-1}(x)=1,&B_{-1}(x)=0\;.\\ A_{0}(x)=\varphi_{0}[x_{0}]=f(x_{0}),&B_{0}(x)=1\end{cases}\] This implies that the polynomial \(\left[A_{j}B_{j+1}-A_{j+1}B_{j}\right](x)\equiv 0\). However, by construction \[-\left[f(x_{j+1})B_{j}(x_{j+1})-A_{j}(x_{j+1})\right]B_{j+1}(x_{j+1})+\left[f (x_{j+1})B_{j+1}(x_{j+1})-A_{j+1}(x_{j+1})\right]B_{j}(x_{j+1})\neq 0\] because neither \(f(x_{j+1})B_{j}(x_{j+1})-A_{j}(x_{j+1})\neq 0\) nor \(B_{j+1}(x_{j+1})\neq 0\). Of these last two inequalities, the first one follows from the selection of \(x_{i+1}\) with \(C_{j}(x_{j+1})\neq f(x_{j+1})\) and the fact that the only common factors of \(A_{j}(x)\) and \(B_{j}(x)\) can be interpolation points used so far (see for instance [2]). For the second inequality, if it was true that \(B_{j+1}(x_{j+1})=0\), then necessarily also \(A_{j+1}(x_{j+1})=0\). On the other hand (3) can be written as \[\begin{pmatrix}A_{j+1}(x)\\ B_{j+1}(x)\end{pmatrix}=\prod_{k=0}^{j}\begin{pmatrix}\varphi_{k}[x_{0},\ldots,x_{k}]&x-x_{k}\\ 1&0\end{pmatrix}\begin{pmatrix}\varphi_{j+1}[x_{0},\ldots,x_{j+1}]\\ 1\end{pmatrix}. \tag{4}\] Since \(x_{j+1}\neq x_{k}\) for \(k=0,\ldots,j\), none of the matrices in (4) are singular when putting \(x=x_{j+1}\), hence we can write \[\begin{pmatrix}\varphi_{j+1}[x_{0},\ldots,x_{j+1}]\\ 1\end{pmatrix}=\prod_{k=j}^{0}\begin{pmatrix}0&1\\ 1/(x_{j+1}-x_{k})&-\varphi_{k}[x_{0},\ldots,x_{k}]/(x-x_{k})\end{pmatrix} \begin{pmatrix}A_{j+1}(x_{j+1})\\ B_{j+1}(x_{j+1})\end{pmatrix}. 
\tag{5}\] But if \(B_{j+1}(x_{j+1})=A_{j+1}(x_{j+1})=0\), then the right-hand side of (5) cannot equal the left-hand side. Therefore \(B_{j+1}(x_{j+1})\neq 0\). What remains to be shown is that \(\varphi_{j+1}[x_{0},\ldots,x_{j+1}]=\infty\) cannot occur. If it did, then from (2) and the induction hypothesis we would have \(\varphi_{j}[x_{0},\ldots,x_{j-1},x_{j+1}]=\varphi_{j}[x_{0},\ldots,x_{j}]\neq\infty\). From application of [5, Theorem A.1] we can then write \[\varphi_{j}[x_{0},\ldots,x_{j-1},x_{j+1}]=-(x_{j+1}-x_{j-1})\frac{f(x_{j+1})B _{j-2}(x_{j+1})-A_{j-2}(x_{j+1})}{f(x_{j+1})B_{j-1}(x_{j+1})-A_{j-1}(x_{j+1})} \neq\infty.\] Hence, necessarily \(f(x_{j+1})B_{j-1}(x_{j+1})-A_{j-1}(x_{j+1})\neq 0\) so that application of [5, Theorem A.1] is also allowed for \(\varphi_{j+1}[x_{0},\ldots,x_{j+1}]\) and we have \[\varphi_{j+1}[x_{0},\ldots,x_{j+1}]=-(x_{j+1}-x_{j})\frac{f(x_{j+1})B_{j-1}(x_ {j+1})-A_{j-1}(x_{j+1})}{f(x_{j+1})B_{j}(x_{j+1})-A_{j}(x_{j+1})}. \tag{6}\] But, similarly as before, due to the selection \(C_{j}(x_{j+1})\neq f(x_{j+1})\), the denominator of (6) does not vanish and therefore also \(\varphi_{j+1}[x_{0},\ldots,x_{j+1}]\neq\infty\). **Remark 1.** Theorem 2 does not require the selection of the interpolation points to be greedy per se. Nevertheless, from a numerical point of view it remains important to construct the interpolant in as few steps as possible [1]. A greedy selection is a heuristic way to such an end. The AAA approach [3] for instance also employs such a strategy; the motivation there is not existence but rather numerical performance. The remaining freedom is in the choice of the first point \(x_{0}\). One option is to take a point where \(|f(x_{0})|\) is minimal. As such, at least one zero of \(f(x)\) is exactly represented when present in the data. **Remark 2.** Theorem 2 ensures that \(\varphi_{i}[x_{0},\ldots,x_{i}]\neq\infty\) for those inverse differences appearing in (1). This does not exclude the possibility that \(\varphi_{i}[x_{0},\ldots,x_{i}]=0\), nor does it prevent intermediate occurrence of \(\varphi_{i+1}[x_{0},\ldots,x_{i},x_{k}]=\infty\) for \(k>i+1\) in (2). Such intermediate non-finite values are not necessarily a concern to continue the recursion (2) using the IEEE 754 standard. In fact, they are necessary for the occurrence of \(\varphi_{i}[x_{0},\ldots,x_{i}]=0\). Such cases are also of no particular concern unless \(i=n\), that is, if it occurs for the last inverse difference in \(C_{n}(x)\). This situation can be avoided by adding a stopping criterion when constructing \(C_{n}(x)\). If the maximum absolute error in the remaining points is below a prescribed tolerance, say tol=5e-15, then the recursion is stopped. One way to implement this is \[\max_{i<k\leq n}|C_{i}(x_{k})-f(x_{k})|<\operatorname{tol}\times\max_{i<j\leq n}|f(x_{j})|.\] Essentially it means that, up to the prescribed tolerance, the underlying function appears rational and there is no further accuracy gain to be made by adding more interpolation points. ## 3 Numerical example An important result in rational approximation theory is due to Newman [4], where it is shown that rational approximations can achieve root-exponential convergence \(\mathcal{O}(\exp(-C\sqrt{n}))\) (with \(C>0\)) for \(f(x)=|x|\) with \(x\in[-1,1]\). This is much faster than what can be achieved with polynomials, which converge at an algebraic rate \(\mathcal{O}(n^{-1})\) at best.
The aim of this example is to construct Newman approximations, which are rational interpolants to \(f(x)=|x|\) in the \(2n+1\) points \[(-1,-\eta,\ldots,-\eta^{n-1},0,\eta^{n-1},\ldots,1),\qquad\text{with }\eta=e^{-1/\sqrt{n}}. \tag{7}\] The Maple code below sets up the interpolation data and calls the ThieleInterpolation routine:

```
> with(ArrayTools); with(CurveFitting);
> N := 5;
> xleft := j -> -(exp(-1/sqrt(N)))^(j-1);
> xright := j -> (exp(-1/sqrt(N)))^(N-j);
> xdata := Concatenate(1, Vector(N, xleft), 0, Vector(N, xright));
> ydata := Vector(2*N+1, j -> abs(xdata[j]));
> ThieleInterpolation(xdata, ydata, x)
```

Because the ThieleInterpolation routine takes the points (7) from left to right, this leads to the division-by-zero error. This is not surprising because the first points (defined by xleft) basically lie on the line \(y=-x\), which is already reconstructed by \(C_{1}(x)=-x\) from the first two points. Hence this implementation will fail for any \(n>0\). The situation is completely different when applying the adaptive Thiele interpolation approach. A prototype Maple implementation of the greedy strategy is given in Appendix A. Figure 1(a) shows the maximum error on \([-1,1]\) obtained from Thiele interpolation of \(f(x)=|x|\) in Newman points (7) for \(n=5,\ldots,50\). All obtained interpolants use all interpolation points in their construction, meaning that for \(n=50\) we have \(2n+1=101\) interpolation points and we construct \(C_{2n}(x)=C_{100}(x)\). The root-exponential behavior is clearly visible as a downward sloping straight line when plotting on log10 scale as a function of \(\sqrt{n}\). The maximum error is calculated on a discrete grid of 10000 points between 0 and 0.01 (the maximum error of these interpolants typically occurs near \(x=0\) where \(f(x)=|x|\) is not differentiable).

Figure 1: Adaptive Thiele interpolation of \(f(x)=|x|\) in Newman points (7) for various \(n=5,\ldots,50\). Left: (discrete) maximum error on \([-1,1]\). Right: 2-norm of the interpolation error in the Newman interpolation points (7).

This discretization gives the impression that convergence is slower for odd \(n\) than for even \(n\). In fact, odd values of \(n\) give approximations with poles in \([-1,1]\), while the even \(n\) approximations are pole-free. These observations are in line with those reported for the recent AAA approach [3, see Fig. 6.10, p. 1511]. For this run we have put Digits:=16 to mimic (software) floating-point precision. Recall that by default Maple uses Digits:=10. As shown in Figure 1(b), the interpolation error in the Newman points remains very small. ## 4 Concluding remarks We have shown how the breakdown in the ThieleInterpolation routine can be avoided using an adaptive ordering of the interpolation points. This actually renders Thiele interpolation into a practical tool for rational interpolation, particularly since the poles and zeros of the interpolants can also be calculated in a simple fashion, as shown in [5]. ## Appendix A Maple codes The ideas above are implemented in Maple with the below prototype code. Mind that the stopping condition is not included here.
```
restart; with(ArrayTools); with(ListTools); with(LinearAlgebra);

cfrac_eval := proc (aa, zz, xx)
  description "evaluate continued fraction";
  local N, res, i;
  N := Dimension(aa);
  res := Vector(Dimension(xx), 0);
  for i from N by -1 to 2 do
    res := zip(`/`, `~`[`-`](xx, zz[i-1]), `~`[`+`](aa[i], res))
  end do;
  return( `~`[`+`](aa[1], res) )
end proc:

cfrac_interpolate := proc (xx, ff)
  description "adaptive continued fraction interpolation";
  local N, rr, k, aa, zz, i, indx_keep, x, f;
  x := xx; f := ff;
  N := Dimension(xx);
  NumericEventHandler(division_by_zero = default);
  for k to N do
    if k = 1 then
      rr := f;
      i := min[index](abs(rr));
      aa := Vector(1, rr[i]);
      zz := Vector(1, x[i])
    else
      i := max[index](abs(cfrac_eval(aa, zz, x) - f));
      rr := zip(`/`, `~`[`-`](x, zz[k-1]), `~`[`-`](rr, aa[k-1]));
      aa := Append(aa, rr[i]);
      zz := Append(zz, x[i])
    end if;
    indx_keep := subsop(i = NULL, [seq(1 .. Dimension(x))]);
    x := x[indx_keep]; f := f[indx_keep]; rr := rr[indx_keep]
  end do;
  return Concatenate(2, aa, zz)
end proc:
```

These procedures can then for instance be called on the previous Newman example.

```
> coefs := cfrac_interpolate(evalf(xdata), evalf(ydata))
> x := evalf('<,>'(seq((1/1000)*i, i = -1000 .. 1000)));
> plot(x, cfrac_eval(Column(coefs, 1), Column(coefs, 2), x))
```
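The stopping criterion of Remark 2 is deliberately left out of the prototype above. As a rough illustration of such a check (in Python rather than Maple, with our own names), it could look like this:

```python
def should_stop(convergent, f, remaining_x, tol=5e-15):
    """Relative stopping test of Remark 2: stop adding points once the current
    convergent matches f on all remaining points up to tol, relative to max |f|."""
    err = max(abs(convergent(t) - f(t)) for t in remaining_x)
    scale = max(abs(f(t)) for t in remaining_x)
    return err < tol * scale
```

In the adaptive loop, this test would be evaluated after each newly added point, before computing the next inverse difference.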
2307.05113
Piecing Together Clues: A Benchmark for Evaluating the Detective Skills of Large Language Models
Detectives frequently engage in information detection and reasoning simultaneously when making decisions across various cases, especially when confronted with a vast amount of information. With the rapid development of large language models~(LLMs), evaluating how these models identify key information and reason to solve questions becomes increasingly relevant. We introduce DetectBench, a reading comprehension dataset designed to assess a model's ability to jointly perform key information detection and multi-hop reasoning when facing complex and implicit information. The DetectBench comprises 3,928 questions, each paired with a paragraph averaging 190 tokens in length. To enhance models' detective skills, we propose the Detective Thinking Framework. This framework encourages models to identify all possible clues within the context before reasoning. Our experiments reveal that existing models perform poorly in both information detection and multi-hop reasoning. However, the Detective Thinking Framework approach alleviates this issue.
Zhouhong Gu, Lin Zhang, Jiangjie Chen, Haoning Ye, Xiaoxuan Zhu, Zihan Li, Zheyu Ye, Yan Gao, Yao Hu, Yanghua Xiao, Hongwei Feng
2023-07-11T08:45:46Z
http://arxiv.org/abs/2307.05113v3
# Go Beyond The Obvious: Probing the gap of INFORMAL reasoning ability between Humanity and LLMs by Detective Reasoning Puzzle Benchmark ###### Abstract Informal reasoning ability is the ability to reason based on common sense, experience, and intuition. Humans use informal reasoning every day to extract the most influential elements for their decision-making from a large amount of life-like information. With the rapid development of language models, hope has emerged for the realization of artificial general intelligence. Given the outstanding informal reasoning ability of humans, how much informal reasoning ability language models possess has not been well studied. In order to explore the gap between humans and language models in informal reasoning ability, this paper constructs a Detective Reasoning Benchmark, an assembly of 1,200 questions gathered from accessible online resources, which aims at evaluating the model's informal reasoning ability in real-life contexts. Considering that the improvement of the model's informal reasoning ability has been restricted by the lack of a benchmark, we further propose a Self-Question Prompt Framework that mimics human thinking to enhance the model's informal reasoning ability. The goals of self-question are to find key elements, deeply investigate the connections between these elements, relate each element to the problem, and finally require the model to reasonably answer the problem. The experimental results show that humans greatly outperform the SoTA language models on the Detective Reasoning Benchmark. Besides, Self-Question proves to be the most effective prompt-engineering method for improving GPT-4's informal reasoning ability, but the result still does not surpass even the lowest score achieved by human participants. Upon acceptance of the paper, the source code for the benchmark will be made publicly accessible. \({}^{1}\)Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, China \({}^{2}\)The Department of Electrical and Computer Engineering at the University of Waterloo \({}^{3}\)Fudan-Aishu Cognitive Intelligence Joint Research Center {zhgu22, zhli21, linzhang22}@m.fudan.edu.cn {hwfeng, shawyh}@fudan.edu.cn ## Introduction Reasoning ability is crucial for helping individuals make decisions, solve problems, and think critically based on existing knowledge and the situation at hand [14, 17]. In Cognitive Psychology, "reasoning" is composed of formal reasoning and informal reasoning [11, 12, 13]. Formal reasoning is a systematic and logical process that follows a set of rules and principles, often used in mathematics and logic. Informal reasoning, in contrast, is a less structured approach that relies on intuition, experience, and common sense to draw conclusions and solve problems, and is often used in everyday life [18, 1]. Notwithstanding the prevalence of formal reasoning in the field of machine learning research, the term "reasoning" mostly refers to informal reasoning in its common, everyday usage [19, 12]. Informal reasoning is a fundamental human capability, permeating people's daily lives [1, 13, 14]. Figure 1: A real response from ChatGPT. Despite employing thorough prompting, ChatGPT continues to exhibit a deficiency in understanding real-life scenarios. For instance, on a hot summer day, when observing many people sitting in a room with closed windows and doors, one can easily deduce that the room is air-conditioned.
The process of deduction is based on the common sense of the reasoner and the experience of once sitting in air-conditioned rooms. This manifestation of inference abilities prevalent across diverse real-life scenarios showcases an array of cognitive skills, such as grasping common sense, abstracting problems, computing probabilities, discarding irrelevant information, and employing logical reasoning. Humans excel at doing informal reasoning, especially in a life scenarios, often extracting conclusions subconsciously within a brief time span [23]. As the capabilities of LLMs continue to scale these days, the crucial question arises: _Can artificial neural networks also display super-powerful informal reasoning capabilities just like or go beyond humanity?_ Limited attention has been devoted to exploring this question within the field of Natural Language Processing (NLP) [13, 14]. Currently, the majority of AI reasoning endeavors concentrate on formulaic reasoning [15, 16] or predicate reasoning [17]. Although these reasoning capacities aid the prevailing large language models (LLMs) in tackling abstract problems [18], they fall short in fostering a deeper comprehension of human experiences. To propel LLMs towards achieving Artificial General Intelligence (AGI), it is imperative to not only cultivate their reasoning capabilities in disciplines such as mathematics and physics, as well as equip them with the ability to reason within real-world contexts. Even the advanced GPT-4, which is considered to be one of the most powerful LLM, exhibits a lackluster performance in informal reasoning as illustrated in Fig. 1. Transitioning from mere recognition of real-life scenarios to the investigation of human-like reasoning in neural models, we underscore the significance of developing a novel reasoning dataset targeting everyday life scenarios. To narrow this gap, we introduce Detective Reasoning Puzzle Benchmark, which is an assembly of 1,200 questions gathered from accessible online resources, aims at evaluating the model's informal reasoning ability in real life context. Compared to recent popular domain knowledge reasoning datasets, the Detective Reasoning Puzzle Benchmark primarily comprises many the detective related questions, incorporating an abundance of information pertaining to character roles, human behaviors, mental activities, and environment description. Solving these problems requires models to discover clues from details in the context, and reasoning based on the connection between clues and the questions. In addition, we propose the Self-Question Prompt Framework to bridge the gap in LLMs' ability to carry out informal reasoning. To borrow the idea from the powerful informal reasoning ability of human, we recruited 50 participants to tackle 20 questions each from the Detective Reasoning Puzzle Benchmark, leading to a detailed study of the reasoning processes of both humans and the model. Most participants said they find the clues mainly based on intuitive without deeper thinking during the test, which is a typically informal reasoning process that sidesteps logical thinking, and instead uses experience and common sense to unearth critical elements in the context, promoting deep reflection, ultimately resulting in a reasoned outcome. By mirroring human intuitive thinking, Self-Questioning is composed of four steps to improve the model's ability for informal reasoning. 
The goals of these steps respectively are to find key elements, deeply investigate the connections between these elements, encourage the relationship between each element and the problem, and finally, require the model to reasonably answer the problem. Experimental results show that even the lowest score amongst participants (46.6%) significantly surpasses the best score from the LLMs questioned based on naive prompt (16.0%) within Detective Reasoning Puzzle Benchmark. Compared to the reasoning-enhanced prompts proposed by other researchers, our proposed self-question prompt generated the greatest performance improvement for GPT-4 (an increase of 17.4%). Although self-question prompt is a better way to excite the model's potential in doing informal reasoning, there is still a huge gap between the SoTA LLMs performance (33.4%) with the average human participants (74.2%). In conclusion, our study makes several key contributions: (1) To facilitate further detailed exploration into the informal reasoning abilities of large language models, we constructed a detective reasoning benchmark that includes 1,200 pieces of detective reasoning data. Solving these reasoning problems requires the model to be able to disregard irrelevant information, utilize common knowledge, identify incongruities in the details of the problem, and thereby arrive at a proper reasoning outcome. (2) We designed a self-questioning prompt framework that can reliably induce the model to identify irrelevant information, draw the clue from the context, and perform reasoning based on the clues drawn from the details in context. (3) We collected actual human responses to the Detective Reasoning Benchmark and carried out numerous experiments using LLMs. Regarding these experimental outcomes, we conducted extensive analysis and discussion, which includes the gap between existing LLMs and human in informal reasoning, the superiority of self-questioning prompts, and how LLMs can supplement informal reasoning abilities. ## Related Works ### Informal Reasoning When humans engage in intuitive reasoning and experiential reasoning, their brains use two different modes of thinking: formal and informal reasoning [14]. Informal reasoning is a fast, automatic mode of thinking that does not require deep thought, but is based on past experience and feelings. Formal reasoning is a slower, more conscious mode of thinking that requires deep thought and logical reasoning [1]. These two modes of thinking together make up the dual-reasoning theory, and researchers generally believe that informal reasoning is usually fast, provides a sense of confidence, reflects a large amount of information processing, and is most likely to provide accurate judgments when based on relevant experience learning [15]. and Gero 2019; Doherty and Carroll 2020; Grayd 2020; Sloman, Patterson, and Barbey 2021). Unlike formal reasoning, informal reasoning is inefficient and does not compete for central working memory resources. It provides default responses that can be intervened by efficient, reflective reasoning. However, in the psychological literature on reasoning and decision-making, intuition has also been blamed for a series of cognitive biases (Ellis 2018; Khatri and Ng 2000). Although informal reasoning is often used by humans in their daily lives, it has received little attention in the machine learning community in recent years (Huang and Chang 2022). Traditional common sense reasoning benchmarks, such as HellaSwag (Zellers et al. 
2019) and WinoGrande (Sakaguchi et al. 2021), have been designed in a way that does not reflect human usage habits. They directly construct question-answer pairs about common sense knowledge for evaluation, greatly reducing the difficulty to the baby-level (Hendrycks et al. 2021). Besides, the greatest need for humans to rely on informal reasoning is to quickly extract important information from a large amount of information in real-life scenarios (Huang and Chang 2022). The most nearly proposed informal reasoning dataset is True Detective (Del and Fishel 2023), which is also a detective related dataset without open source until now and consist of only 191 questions without manual annotation. Our Detective Reasoning Puzzle Benchmark contains 1,200 manual annotated detective reasoning-related puzzles, which involve a large number of life-like scenarios in free-text form. As shown in Figure 2, Detective Reasoning Puzzle is more challenging for language models than previous well-known benchmarks. Solving these puzzles truly tests the model's human-like informal reasoning ability. The introduction of Detective Reasoning Puzzle has bridged the gap between existing informal reasoning benchmarks and the needs of humans to apply this ability. ### Prompt Engineering The ability of generative language models is largely influenced by the prompt words (Liu et al. 2023). After the introduction of LLMs, scholars have continuously improved the paradigm of prompt words to enhance the model's ability in various tasks (Mialon et al. 2023). These improvements mainly focus on enhancing the language model's ability in formal reasoning and can be roughly divided into two types: Chain-of-Thought and Ensemble. People generally use Chain-of-Thought (CoT) to evoke the inherent step-by-step reasoning ability of large language models (LLMs), enabling them to formulate intermediate reasoning chains that are essential for solving problems (Brown et al. 2020). Ensemble mainly allows the model to generate multiple answers and select the most likely answer based on the answers (Fu et al. 2022; Wang et al. 2022). Researches results show that many current CoT and model ensemble schemes have significantly improved the formal reasoning ability of language models (Wei et al. 2022; Zhang et al. 2022; Wang et al. 2023). Considering that there has been no challenging informal reasoning benchmark proposed in the past, this has restricted scholars from improving the informal reasoning ability of models. Self-Question mainly draws the ideas from CoT. Compared with simply modifying existing CoT schemes to enhance the model's informal reasoning ability, Self-Question restores the intermediate process of human intuitive reasoning, which is more in line with the application scenarios of informal reasoning and can obtain better answers. ## Benchmark Construction ### Data Collection We construct Detective Reasoning Puzzle Benchmark based on openly available content online. The goal is to collect free-text reasoning datasets that contains information about real-life scenarios. So we collect detective reasoning puzzle which mainly encompass a wide range of real-life scenarios, characterized by their extensive involvement of common-sense, encyclopedic, and idiom knowledge. Besides logically combining and reasoning between clues, these questions require models to possess a robust perception of real-life scenarios and a certain understanding of human behavior and psychology. 
In this manner, models can effectively eliminate irrelevant information from questions and identify appropriate clues to solve problems. ### Data Preprocessing To ensure the compatibility of the Detective Reasoning Puzzle Benchmark with diverse baselines, each question is represented in a JSON format, which comprises four elements: "context", "question", "hint", and "analysis" as show in Fig. 3. We elected to exclude questions incapable of being resolved solely through textual information, where demanding visual or audio information to answer the question. Besides, we also exclude questions which comprised of many numerous mathematical formulas. The reason behind this is due to the fact that these types of questions evaluated the formal reasoning capacity of the models, which does not align with the objective of the Detective Reasoning Puzzle Benchmark. The examples of exactly what kind of questions are excluded are listed in Sec. in Appendix. Online puzzles often intertwine the questions with the narrative's description. We separate each puzzle's description into the "context" of the story and the "question" in order Figure 2: Performance on commonsense benchmark (HellaSwag), an Hard version of Winograd Schema Challenge (WinoGrande), and the Detective Reasoning Benchmark. In previous work, language models have been able to achieve extremely high performance. Although our Detective Reasoning Benchmark is not a multiple-choice test, even the most powerful GPT-4 can only achieve an accuracy rate of 16%. to provide a clearer presentation for each puzzle. Acknowledging the marked improvement achieved through process supervision, we leveraged GPT-4 to generate a reasoning process for each question, given an answer and the question itself. This generated reasoning process underwent manual check for correction and was denoted by a "hint" field, the detail about our human annotators and manual check are described in Sec. in Appendix. Notably, answers often adopt a narrative style, potentially leading to confusion during validation. We hence shortened the unprocessed answer by eliminating its narrative elements, and the refined answers are indicated in the "answer" field. In this way, we acquire 1,200 detective reasoning puzzle questions. We randomly choose 33.4%, 33.3%, and 33.2% of the puzzles to respectively form the training, validation, and test set. The test set was handed to human participants, with an average of 2 human responses per puzzle without hint being recorded, serving as a means of assigning a level of difficulty to each test puzzle. The detail statistic information is listed in Tab 1. ## Self-Question Prompt As shown in Figure 4, Self-Question consists of four stages, namely Detail Detection, Detail Connection, Answer Inspiration, and Output Command. These four components guide the model in its thinking process, sequentially stimulating a deeper understanding of the text as well as allowing the model to identify key and distractor information. Finally, using all previous thought processes, it generates correct answers for the questions. The detail prompts we provide in each stage are listed in Appendix in Tab. 5. **Detail Detection:** The objective of this stage is to ask model to generate as much detail and facts based on the given context as possible, especially for the details that are not explicit in the raw context. 
This would notably enhance the model's understanding to the real-life context and provide the essential information foundation for the subsequent reasoning process of the model. **Detail Connection:** The aim of this stage is to enable the model to understand how the textual information is assembled together, further asking models to generate more details based on the give details. By doing so, more deeper details are mined out as the evidence for deeper reasoning. **Answer Inspiration:** The objective of this stage is to find clues from the existing information that can aid in answering the question. This operation, on one hand, can enhance the model's predictive power, and on the other hand, it can help the model to precisely pinpoint the final answer, greatly reducing the possibility of random search. **Weighted Reasoning:** At this stage, the model is prompted to prioritize the information deduced from the previous three stages, then take the information in the original article into consideration, and output the answer after a comprehensive reasoning. The objective of this stage is to ensure that, when producing the answer, the model can effectively utilize the reasoning content for the final response, thereby improving the reliability of the answer. This not only escalates the model's judgment but also enhances the accuracy and credibility of the model-generated answer. ## Human Test To delve into the specifics of how humans do informal reasoning and to gather benchmark results, we incorporated 50 human participants to tackle questions within the test set \begin{table} \begin{tabular}{l l} \hline \hline Statistic & Number \\ \hline Total questions & 1,200 \\ \hline Training questions & 401 \\ Validation questions & 400 \\ Test questions & 399 \\ \(*\)_Simple Test question_ & 223 \\ \(*\)_Medium Test question_ & 94 \\ \(*\)_Hard Test question_ & 82 \\ \hline Context Length (Average/Max) & 104.9 / 1379 \\ \(*\)_Test_ & 97.6 / 387 \\ Question Length (Average/Max) & 17.6 / 39 \\ \(*\)_Test_ & 12.4 / 18 \\ Hint Length (Average/Max) & 71.3 / 176 \\ \(*\)_Test_ & 47.9 / 101 \\ Analysis Length (Average/Max) & 84.2 / 960 \\ \(*\)_Test_ & 66.3 / 176 \\ \hline \hline \end{tabular} \end{table} Table 1: Key statistics for Detective Reasoning Puzzle Benchmark. Figure 3: The example of the question in Detective Reasoning Puzzle Benchmark of Detective Reasoning Puzzles. The total duration of the test was three hours, which the participants were permitted to depart upon early completion. The 50 participants comprised undergraduate and graduate students from universities in China, each of whom was compensated with wages surpassing the local minimum hourly rate. The benchmark is translated into Chinese language for human test, and the answers provided by participants are also in Chinese. Leveraging the existing online question-and-answer system 1, we assembled the questions so that each participant was able to respond online, allowing us to conveniently track the time each question was answered. Footnote 1: [https://www.wjx.cn/](https://www.wjx.cn/) In order to provide human baseline results, as well as to validate the effect of Hint on human performance, we divided 50 subjects into two groups: The first group is primary for posing baseline result, which consist of 40 participants who were provided with the "Context" and "Question" from the test puzzles. In this group, we assigned each participant 20 questions, each test question was answered by at least two people. 
We ensure the randomness of question allocation, whilst maintaining an approximately equal length for all questions answered by an individual. The second group was used to validate the effect of Hint on human outcomes, which consisted of 10 participants, who, in comparison to the first group, received additional "Hints" in their puzzles. We utilized only 100 questions from the test set in this group to ensure that each question used for testing was answered by at least two people. It is important to note that we understand the importance of ensuring an equal number of participants in both test groups to yield more representative results. However, our approach was constrained by the wages we could offer, as well as the necessity to ensure that each test question was answered by at least two people in the first group. Consequently, this necessitated the adoption of a seemingly biased grouping strategy. ## Experiments ### Experiment Setup **Models:** Referring to the Alpaca-Eval leaderboard July 27, 2023, we select four best-performing models in four different parameter scales: GPT-4 (OpenAI 2023b), GPT3.5-turbo (OpenAI 2023a), Vicuna-33b-v1.3 (Zheng et al., 2023), WizardLM-13B-V1.2 (Xu et al., 2023), and Vicuna-7b-v1.3 (Zheng et al., 2023). In these, we used the official openai API for GPT-4 and GPT-3.5-turbo between July 10th and 29th, 2023, with their parameter counts being 1800B and 175B respectively. Vicuna-33b-v1.3, WizardLM-13B-V1.2, and Vicuna-7b-v1.3 are models with parameters of 33B, 13B and 7B respectively. In table 2, we use GPT-4 to denote GPT-4 model, GPT-3.5 to denote GPT-3.5-turbo, Vicuna-33B to represent Vicuna-33b-v1.3, WizardLM to represent WizardLM-13B-v1.2, and Vicuna-7B to represent Vicuna-7b-v1.3. **Baselines:** We picked out some methods of constructing prompts that can be used to enhance the model's informal reasoning capabilities for comparison. Naive is simply input the "Context" and "Question" to the LLMs, and ask for answer. Self-CoT (Kojima et al., 2022) is a simple prompt trick that add "let's think step by step" to the prompt to ask LLMs step by step output the reasoning process. Auto-CoT (Zhang et al., 2022) automatically construct demonstration in CoT format. So it is worth noted that Auto-CoT is not suitable in 0-shot setting, so we only test the performance in 5-shot setting. Cheat-Naive use "Hint" as one of the input. Since "Hint" will not incur other information, only tell LLMs where is key information, so we regard Figure 4: The illustration of Self-Question Prompt framework. During the Detail Detection and Detail Connection phase, the model is prompted to extract maximum clues from the primary context. In the Answer Inspiration phase, the model is tasked with discerning the correlation between these clues and the initial question. Throughout the the Output Command stage, the model is required to generate an ideal response that draws upon both the context and the cognitive processes aforementioned. Cheat-Naive is the upper bound of the existing Language Models. Self-Consistency Wang et al. (2022) propose to use voting supersedes the simple, naive greedy decoding used in earlier CoT prompting to output the mostly generated result by the model. In the experiment, we generate ten different outputs using LLMs, and we ask model to generate the final result based on all the outputs. Complexity-CoT Fu et al. (2022) uses the longest reasoning steps among all the outputs produced for a question to generate the final output. Plan-and-Solve CoT (PS-CoT) Wang et al. 
(2023) functions by first deconstructing the question before seeking its solution. Self-Question is a method that we propose, in which, we require the model to first delve deeply into the information already available in the original text, ponder the link between this information and the problem, and finally ask the model to integrate its existing thoughts to formulate a response. **Demonstration:** Furthermore, we investigate the influence of in-context learning on the performance of the model. To accomplish this, we carried out tests on each of the examination questions using demonstrations sampled from the training set. For each test sample, we randomly selected 5 training samples to form a demonstration. In this way, different models using various prompt settings would utilize the same 5-shot sample-based demonstration for the same question. For Naive and Cheat-Naive approaches, we directly concatenate the sampled questions with the "Answer" or "Hint" to form demonstrations. For Auto-CoT, we ask LLMs to generate the reasoning process of few-shot samples in the format of CoT, and concatenate the CoT to the samples as demonstration. For examples about how we construct demonstrations are listed in Sec. - in Appendix ### Overall Performance The detailed results are shown in Tab. 2, and here we give our analysis towards the experiment results as follow: effect of utilizing "Self-Question" is not as optimal as "PS-CoT". However, when models possess a greater volume of parameters and more potent capabilities, the enhancement resulting from using "Self-Question" is substantial. This suggests that, with a suitable prompt design, powerful LLM can compensate, to a certain degree, for the lack of life experience and an inability to carry out intuitive reasoning. At present, "Self-Question" is regarded as the best prompt for enhancing a model's capacity for informal reasoning. Nonetheless, it's worth noting, even for GPT-4, the score achieved after deploying "Self-Question" (35.8%) still hasn't surpassed the lowest human score (47.3%). ### Difference Between Human and GPT-4 **Difference in Performance:** We decompose the factors that may affect the performance of humans and GPT-4, and draw the following conclusions based on the content shown in Figure 5: Firstly, the length of the question does not affect the accuracy of humans, but it significantly slows down the response speed of human respondents. From this, it can be inferred that humans spend a lot of time not on reasoning, but on reading the question. Secondly, the length of the question severely affects the performance of GPT-4 under the Naive setting. This may be because the longer the question, the more distracting information it contains, and language models (LM) are extremely poor at handling such distractions. Lastly, Self-Question can stably and significantly improve the performance of answering long questions, but it may have some negative impact on short questions. **Difference in Verification:** We manually annotated 60 results generated by GPT-4 and compared them with the results annotated by GPT-4. As shown in Figure 6, the overall consistency is 91.7%. Among them, the consistency between the model and humans is better when the length of the standard answer is shorter. However, when the length of the standard answer is longer, there may be some discrepancies between the model and humans. This is mainly because informal reasoning itself is a probabilistic inference. 
When the standard answer is too long, GPT-4 may consider that it is not necessary to answer so precisely in some places to be considered correct, while human annotators will score strictly according to the standard answer. ### Ablation Study The results from our ablation study are presented in Table 3. We denote the "Weighted Reasoning" stage with WR, the "Answer Inspiration" stage with AI, and the "Detail Connection" stage with DC. It is observed that each component of Self-Question plays a role in improving the informal reasoning ability of LMs. The contribution of Detail Detection evidences that language models do not inherently utilize common-sense knowledge. The enhancement brought about by Detail Connection underscores that language models generally do not engage in profound thinking. The improvement yielded by Answer Inspiration suggests that while language models cannot directly eliminate irrelevant information, they do possess the capability to do so. The advancement provided by Weighted Output elucidates that language models typically treat all input prompts impartially. The improvements brought about by the four components of the Self-Question method serve to highlight the shortcomings of the existing LLMs in conducting informal reasoning. ### Case Study The case study is conducted as detailed in Appendix in Sec.. Two cases are listed in Table 7 and Table 8, and the analysis towards the cases is as follow: Since there is no much difference between self-CoT and naive in reasoning, which results Figure 5: The correlation between GPT-4’s performance on Self-Question Prompts and human performance with the length of the Question, Hint, Ground Truth, and Reasoning Length. And the correlation between the time expenditure of human reasoning with the length of the Question, Hint, Ground Truth, and Reasoning Length. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Detective Reasoning} \\ & Easy & Middle & Hard & All \\ \hline _Self-Question_ & & & & \\ \hline w/o None & 41.3 & 27.7 & 30.5 & 35.8 \\ w/o WR & 41.7 & 24.5 & 26.8 & 34.6 \\ w/o WR,AI & 35.9 & 21.4 & 17.1 & 28.6 \\ w/o WR,AI,DC & 30.0 & 16.0 & 8.5 & 22.3 \\ \hline _Others_ & & & & \\ \hline Naive & 22.4 & 11.7 & 3.66 & 16.0 \\ Self-CoT & 17.5 & 8.5 & 2.4 & 12.4 \\ PS-CoT & 24.7 & 12.8 & 13.4 & 19.6 \\ \hline \end{tabular} \end{table} Table 3: The ablation study of various design in self-question prompt framework, and the comparison with other baselines. Figure 6: The correlation between the consistency of GPT-4 scores and human scores to the length of ground truth. The average consistency shown in the figure is 91.7%. in similar outcomes. After injecting the Hint, the model can accurately answer the questions due to the Hint's ability to eliminate distractions and highlight suspicious points. PS-CoT, by extracting key variables from the context, can avoid the influence of distracting content, thus achieving good results. However, Self-Question, with its superior ability to perform deep reasoning and associations, can achieve better results in the example requiring deep reasoning. ## Conclusion In this paper, we dive deep into how Large-Scale Language Models handle informal reasoning ability, the everyday reasoning ability used by human in daily life. We construct a Detective Reasoning Puzzle Dataset to propose a benchmark for evaluating this ability. 
Based on this benchmark, we invite participants to propose a human baseline for informal reasoning, and borrow the intuitive reasoning ability from human to devise Self-Question Prompt Framework, which greatly improve the informal reasoning ability of LLMs. Experiments show that humans are way ahead of the LLMs in informal reasoning. Althought Self-Question Prompt compensate a little for this gap, the informal ability of human still seems unreachable by the LLMs.
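To make the four-stage Self-Question procedure described above concrete, the following Python sketch shows how such staged prompting could be chained. The prompt texts and the `ask_llm` helper are illustrative stand-ins of ours, not the authors' released prompts (which are listed in their appendix).

```python
def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a chat model (API client not specified here)."""
    raise NotImplementedError

def self_question(context: str, question: str) -> str:
    """Four-stage Self-Question prompting: detail detection, detail connection,
    answer inspiration, weighted reasoning (stage wording is ours)."""
    details = ask_llm(
        f"Context:\n{context}\n\nList every detail and fact implied by the context, "
        "including details that are not stated explicitly.")
    connections = ask_llm(
        f"Context:\n{context}\n\nDetails:\n{details}\n\n"
        "Explain how these details are connected and infer further details from them.")
    clues = ask_llm(
        f"Question: {question}\n\nDetails:\n{details}\n\nConnections:\n{connections}\n\n"
        "Which of these points are clues that help answer the question?")
    answer = ask_llm(
        f"Context:\n{context}\n\nQuestion: {question}\n\nClues:\n{clues}\n\n"
        "Prioritise the clues above, weigh them against the original context, "
        "and give a reasoned final answer.")
    return answer
```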
2305.13108
Debiased Automatic Speech Recognition for Dysarthric Speech via Sample Reweighting with Sample Affinity Test
Automatic speech recognition systems based on deep learning are mainly trained under empirical risk minimization (ERM). Since ERM utilizes the averaged performance on the data samples regardless of a group such as healthy or dysarthric speakers, ASR systems are unaware of the performance disparities across the groups. This results in biased ASR systems whose performance differences among groups are severe. In this study, we aim to improve the ASR system in terms of group robustness for dysarthric speakers. To achieve our goal, we present a novel approach, sample reweighting with sample affinity test (Re-SAT). Re-SAT systematically measures the debiasing helpfulness of the given data sample and then mitigates the bias by debiasing helpfulness-based sample reweighting. Experimental results demonstrate that Re-SAT contributes to improved ASR performance on dysarthric speech without performance degradation on healthy speech.
Eungbeom Kim, Yunkee Chae, Jaeheon Sim, Kyogu Lee
2023-05-22T15:09:27Z
http://arxiv.org/abs/2305.13108v3
# Debiased Automatic Speech Recognition for Dysarthric Speech ###### Abstract Automatic speech recognition systems based on deep learning are mainly trained under empirical risk minimization (ERM). Since ERM utilizes the averaged performance on the data samples regardless of a group such as healthy or dysarthric speakers, ASR systems are unaware of the performance disparities across the groups. This results in biased ASR systems whose performance differences among groups are severe. In this study, we aim to improve the ASR system in terms of group robustness for dysarthric speakers. To achieve our goal, we present a novel approach, sample reweighting with sample affinity test (Re-SAT). Re-SAT systematically measures the debiasing helpfulness of the given data sample and then mitigates the bias by debiasing helpfulness-based sample reweighting. Experimental results demonstrate that Re-SAT contributes to improved ASR performance on dysarthric speech without performance degradation on healthy speech. Eungbeom Kim\({}^{1\star}\), Yunkee Chae\({}^{1\star}\), Jaeheon Sim\({}^{1}\), Kyogu Lee\({}^{1,2}\)\({}^{1}\)IPAI, \({}^{2}\)AIIS, Seoul National University, Seoul, Republic of Korea {eb.kim, yunkimo95, sjhoney0112, kglee}@snu.ac.kr **Index Terms**: speech recognition, debiasing, dysarthric speech ## 1 Introduction Automatic speech recognition (ASR) performance across groups should be fair regardless of disorder, race, age, and dialect for a trustworthy and inclusive system. Although ASR has been improved along with the success of deep learning, ASR systems based on deep learning tend to be vulnerable to biases [1, 2]. Consequently, the bias in ASR systems interferes with the ASR system's trustworthiness by causing poor worst group performance. Applying ASR on dysarthria, which is a type of motor speech disorder, also suffers from the lack of group robustness between healthy and dysarthric speakers. In other words, a dysarthric speaker group has a lower performance than a healthy speaker group on ASR. This is because dysarthric speech has low intelligibility, whereas deep learning has a tendency to be easily fitted to shortcuts [3, 4]. However, most of the previous dysarthric speech recognition studies mainly focus on the sole performance of the ASR system on dysarthric speech. In this study, we aim to improve the performance of ASR on dysarthric speech from a debiasing point of view. To the best of our knowledge, this is the first work to endeavor to make a debiased ASR system for dysarthric speech. In this regime, we focus on a debiasing method based on sample reweighting. Recently, sample reweighting such as [5, 6] is proposed as a promising solution to handle the bias problem. Sample reweighting methods mainly consist of 1) estimating the helpfulness of a sample for debiasing and 2) reweighting the sample based on helpfulness. To estimate the debiasing helpfulness, [5] defines _bias-conflicting sample_ and _bias-aligned sample_ which denote an incorrectly classified sample and a correctly classified sample from an unintended decision rule of a biased model, respectively. The strategies utilizing those definitions are based on the assumption that upweighting the data samples with a large loss from the biased model, i.e. bias-conflicting samples, leads to improved generalization performance of underfitted groups. This assumption, however, hardly holds in real-world problems. For example, although an outlier might have a large loss value, it is an undesirable strategy to upweight the outlier. 
Therefore, we define a new taxonomy to directly categorize the samples by debiasing helpfulness; _bias-blocking sample_ denotes a sample that mitigates the model's bias, and _bias-accelerating sample_ denotes a sample that accelerates the model's bias when the model is trained on each sample. To estimate whether a given sample is _bias-blocking sample_, we propose a sample affinity test (SAT) which systematically measures debiasing helpfulness. SAT is based on the novel metric dubbed sample affinity, which denotes a training effect of the given sample to the other samples' loss. This is inspired by task affinity [7] which estimates inter-task affinity using the loss shift of each task after lookahead training of a given task. Intuitively, SAT measures debiasing helpfulness using sample affinity of the given sample on the bias-conflicting sample set, based on a one-step lookahead training of a given sample. Unlike the loss-based methods, SAT can filter unhelpful bias-accelerating samples even if the samples are bias-conflicting. Based on SAT, we propose a novel sample reweighting method, Re-SAT, to challenge the debiasing problem of dysarthric speech recognition. Re-SAT consists of four sequential components for implementation. The first component estimates bias-conflicting samples based on loss, following [6]. Secondly, the SAT component is activated based on the estimated bias-conflicting samples to accurately measure the debiasing effect of each sample. In the third component, the SAT result is normalized through sorting and mapped to the weights. Finally, Re-SAT trains the models with the reweighted samples. In summary, we observe the biased performance of ASR on disordered speech and analyze the ASR system through the lens of debiasing. Furthermore, we present sample affinity test (SAT) to directly measure the debiasing helpfulness of samples. We also propose Re-SAT, a novel method for sample reweighting that is available under the group label-free environment and mitigates the margin between the ASR performances of healthy speakers and dysarthric speakers. ## 2 Related work **Dysarthric speech recognition** ASR on dysarthric speech still remains a challenging problem. Since the disordered speech dataset has limited scalability due to the difficulty of data collection, data augmentation on dysarthric speech has been widely studied [8, 9, 10, 11, 12]. Other investigations adopt pre-trained self-supervised learning models [13, 14, 15, 16] for improved disordered speech recognition. On the other hand, [17, 18, 19] utilize some prior knowledge that a dysarthric speaker has distinct characteristics compared to healthy or other dysarthric speakers by leveraging speaker adaptation using the spectrogramal level. In this study, we approach ASR on dysarthric speech from a different point of view, debiasing, which has never been used to the best of our knowledge. **Debiasing** Debiasing is a challenging but essential area for fair and inclusive deep learning. Group robustness, which denotes an ability to perform satisfactorily across different groups in terms of the given task, is one of the key factors of a debiased model. We focus on group robustness without group annotations, for a wider real-world application. Sample reweighting is one of the promising methods for debiasing. 
While empirical risk minimization (ERM) uniformly averages all of the performances across the given samples, sample reweighting methods such as Learning from Failure (LIF) [5] and Just Train Twice (JTT) [6] upweight the samples that are expected to belong to the poor performance group called bias-conflicting samples. These methods are based on the intuition that upweighting the bias-conflicting samples with large losses leads to a debiased model, which is not always true. In particular, we test ASR systems on dysarthric speakers, each of who can be regarded as a respective group due to the variety of dysarthria. In this complex real-world environment, reweighting the samples using a fine-grained metric is an important issue. For this reason, we aim to propose direct and accurate criteria for reweighting beyond the estimation of bias-conflicting samples based on loss. **Task affinity** Given a set of tasks in a multi-task learning setup, task grouping aims to find the clusters of tasks that should be trained simultaneously for improved performance. Task affinity [7] is proposed to address the task grouping problems. To examine the task affinity of task \(i\) on task \(j\), they leverage a lookahead update on task \(i\). Then, they compute the loss shift after the lookahead update to explore the effect of task \(i\) on \(j\). Inspired by task affinity, we propose sample affinity to investigate the inter-sample effect for debiasing. ## 3 Method In this section, we introduce our sample reweighting with sample affinity test (Re-SAT), which is proposed to challenge the ASR on dysarthric speech. Re-SAT consists of four components: 1) bias-conflicting sample estimation, 2) sample affinity test, 3) normalized reweighting and 4) training, as shown in Figure 1. The data samples are reweighted through the first three components, and then Re-SAT trains the model with the reweighted data samples in the 4) training block. In summary, Re-SAT upweights the bias-blocking samples and downweights the bias-accelerating samples using sample affinity test for debiasing. Details are as follows. ### Bias-conflicting sample estimation At the first step, Re-SAT estimates the bias-conflicting samples using the losses of samples in each batch. Re-SAT regards the samples with the largest \(K\) losses in a batch as the bias-conflicting samples. Unlike the previous research [6] which fixes the estimation results, Re-SAT keeps estimating the bias-conflicting samples during training to accurately reflect the current state of the model. That is, the estimation result for each sample can be modified throughout training. Although Re-SAT estimates the bias-conflicting samples, Re-SAT does not map the estimation results for reweighting because the large loss does not guarantee the debiasing ability, as we introduced in the above sections. We address this issue by designing a novel criterion that directly estimates the bias-blocking samples, for debiasing ability. ### Sample affinity test We propose a novel sample affinity test (SAT) to evaluate the debiasing ability of samples to filter the bias-accelerating samples. Given the model \(f_{\theta}\), the samples in the mini-batch \(x_{1},...,x_{N}\), and the estimated bias-conflicting samples \(\hat{b}_{1},...,\hat{b}_{K}\), SAT utilizes single step lookahead updating with respect to \(x_{i}\) as \[\theta_{x_{i}}^{LA}=\theta-\eta\nabla L(x_{i};f_{\theta}), \tag{1}\] for \(i=1,...,N\) where \(\eta\) is learning rate, \(N\) is batch size, and \(L\) is a loss function. 
After the lookahead updating process, SAT compares the loss of 1) the lookahead model \(\theta_{x_{i}}^{LA}\) and 2) the original model \(\theta\) on the bias-conflicting samples to calculate the averaged sample affinity: \[\text{SA}(x_{i}\rightarrow\{\hat{b}_{1},...,\hat{b}_{K}\};f_{\theta})=\frac{1} {K}\sum_{\forall k}\left(1-\frac{L(\hat{b}_{k};f_{\theta_{x_{i}}^{LA}})}{L( \hat{b}_{k};f_{\theta})}\right). \tag{2}\] We present sample affinity inspired by task affinity [7] for debiasing. Note that SAT computes the sample affinity every step to take into account the current state of the model, while task affinity is averaged across the whole training step. ### Normalized reweighting Sample affinity on the bias-conflicting samples approximates the bias-blocking effect of a given sample. However, sample affinity is not an absolute but relative score which depends on the bias-conflicting sample set, the current state of the model, and the maturity of the model. Therefore, Re-SAT does not directly map the sample affinity to weights. Instead, Re-SAT normalizes the sample affinity by extracting the rank of the sample affinity with respect to the descending order within the batch. This normalization stabilizes unwanted shifts in the learning rate. For sorted samples \(x_{1},...,x_{r},...,x_{N}\) with respect to sample affinity in descending order, Re-SAT finally reweights the samples using the function \(w:\mathbb{N}\rightarrow\mathbb{R}^{+}\), which is defined as follows: \[w(r)=\frac{\exp(s(N-r)/(N-1))}{\sum_{r=1}^{N}\exp(s(N-r)/(N-1))} \tag{3}\] for \(r=1,...,N\) where \(r\) is the rank of the sample \(x_{r}\) and \(s\) is the constant hyperparameter. For our experiments, we set batch size \(N=32\) and \(s=4\). ### Training In this component, Re-SAT trains the model \(f_{\theta}\) with the reweighted data samples. Unlike the vanilla training that averages the loss of each data sample, Re-SAT leverages weighted average for total loss as: \[\theta^{\prime}=\theta-\eta\nabla\frac{1}{N}\sum_{r=1}^{N}w(r)L(x_{r};f_{\theta}). \tag{4}\] for sorted samples \(x_{1},...,x_{N}\) with respect to sample affinity \(\text{SA}(x_{i})\) in descending order. ## 4 Experiments ### Dataset UASpeech corpus [20] is used for our experiments. UASpeech corpus is one of the largest English dysarthric speech datasets, composed of speech from 15 dysarthric speakers and 13 healthy control speakers. Each dysarthric speaker is classified into very low, low, mid, and high levels of intelligibility. The dataset consists of three blocks B1, B2, and B3. Each block contains 155 common words and 100 uncommon words spoken by dysarthric and healthy control speakers. We used block 1 and block 3 of all healthy and dysarthric speech recorded on microphone M5 as a training set, and all data in block 2 as the test set. Note that we used not only dysarthric speech but also healthy speech as the test set. Basically, UASpeech provides the speech data denoised by _noisereduce_[21, 22]; therefore we used this version for our experiments. We excluded the uncommon words from both the training and test set and no additional data augmentation was conducted. ### Model We fine-tuned _Whisper_[23], the recent state-of-the-art ASR model, trained on a large amount of labeled audio-transcription data. It employs a simple encoder-decoder Transformer architecture. We used the _Whisper-tiny_, pre-trained on English only with 39M parameters, which is available at HuggingFace transformers repository [24]. 
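Before turning to the experimental setup, the following PyTorch-style sketch shows schematically how Equations (1)–(4) combine into one Re-SAT training step. It is not the authors' implementation: `loss_fn` is assumed to return a scalar loss for a single sample, all names are ours, and the per-sample deepcopy is for clarity rather than efficiency.

```python
import copy
import torch

def resat_step(model, loss_fn, batch, optimizer, lr, K=4, s=4.0):
    """One Re-SAT step (schematic): loss-based bias-conflicting estimation (Sec. 3.1),
    sample affinity via a one-step lookahead update (Eqs. 1-2), rank-normalized
    weights (Eq. 3) and the weighted parameter update (Eq. 4)."""
    losses = torch.stack([loss_fn(model, sample) for sample in batch])   # per-sample losses
    conflict = torch.topk(losses.detach(), K).indices.tolist()           # largest-K losses

    affinities = []
    for sample in batch:
        lookahead = copy.deepcopy(model)
        params = [p for p in lookahead.parameters() if p.requires_grad]
        grads = torch.autograd.grad(loss_fn(lookahead, sample), params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= lr * g                                              # Eq. (1): lookahead update
            sa = torch.stack([1.0 - loss_fn(lookahead, batch[k]) / losses[k].detach()
                              for k in conflict]).mean()                 # Eq. (2)
        affinities.append(sa)

    # rank 0 = highest affinity; with 1-based rank r this reproduces exp(s*(N-r)/(N-1)) of Eq. (3)
    ranks = torch.argsort(torch.argsort(torch.stack(affinities), descending=True))
    N = len(batch)
    weights = torch.softmax(s * (N - 1 - ranks).float() / (N - 1), dim=0)

    optimizer.zero_grad()
    ((weights * losses).sum() / N).backward()                            # Eq. (4)
    optimizer.step()
```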
### Experimental Setups We trained the model using AdamW optimizer [25] with a learning rate of \(10^{-5}\), weight decay of 0.1, and batch size of 32 for 30 epochs. For the proposed method, Re-SAT, we investigated the effect of the number of bias-conflicting samples in batch as \(K\in\{2,4,8,16\}\). For comparison, we tested the JTT method [6] with upweighting value 25 and bias-conflicting sample identification epoch 3. For the ablation studies, we tested the model named Re-Loss, which utilizes the loss-based sample reweighting instead of sample affinity and follows Re-SAT in other settings, to figure out the impact of the sample affinity test. ## 5 Results We investigated with the effect of the proposed method, Re-SAT, by comparing it to empirical risk minimization (ERM) and JTT for debiasing. In contrast to most of the dysarthric speech recognition studies that focus only on dysarthric speech, we investigated the effect of Re-SAT to ASR system on healthy speech for a fair comparison. This is important for debiasing because the performance disparity on dysarthric speech and healthy speech determines the group robustness of the systems. Results in Table 1 show word error rate (WER) of the models on each speaker and intelligibility group. It is shown that Re-SAT contributes to performance improvements for each group compared to ERM by relatively decreasing 12.02% (54.66 \(\rightarrow\) 48.09), 11.10% (19.19 \(\rightarrow\) 17.06), 7.22% (13.57 \(\rightarrow\) 12.59), and 13.06% (6.43 \(\rightarrow\) 5.59) in terms of the averaged WER for the very low, low, mid, and high intelligibility group, respectively. Surprisingly, Re-SAT also surpasses ERM on the healthy speaker group. From a speaker-wise view, it is observed that the performances of 12 out of 15 speakers show enhanced results in terms of WER on the Re-SAT-based ASR model. These results demonstrate the robust performance gain of Re-SAT even un Figure 1: Illustration of Re-SAT for debiased dysarthric speech recognition. Re-SAT consists of 1) bias-conflicting sample estimation, 2) sample affinity test, 3) normalized reweighting, and 4) training blocks. der a diverse group environment. On the contrary, the other debiasing method, JTT, increases the averaged WER on healthy speech, low and high intelligibility speech although JTT reduces the very low and mid intelligibility group's WER, compared to those of ERM. Figure 2 shows the rank sorted in descending order of training loss within the batch with respect to each intelligibility group. For very low, low, and mid-intelligibility groups that are vulnerable to bias, the increasing and large rank value is desired because it means that the loss for the biased group is decreased, even though it is approximated by training loss. Interestingly, debiasing of JTT works only on the single worst (very low) intelligibility group and even deteriorates the low and mid groups' ranks. This problem can be caused by the structure of JTT's 1) binary classification of bias-conflicting samples and 2) which are fixed at an early stage. On the other hand, Re-SAT leverages 1) fine-grained reweighting and 2) updated bias-conflicting set through training and shows desirable results across diverse groups. Figure 2 also demonstrates that ERM is inappropriate for debiasing. Although the ranks on very low, low, and mid intelligibility groups of ERM increase at the early stage for all training schemes, the ranks decrease at the late stage of ERM. 
Even worse, the rank of the healthy speakers under ERM is the highest among all methods, which reflects the biased performance of ERM. We also explore Re-Loss, which substitutes the sample affinity-based reweighting of Re-SAT with loss-based reweighting, to investigate the importance of the sample affinity test. Although Re-Loss outperforms ERM and JTT by a large margin, Re-SAT still performs better. This demonstrates the precise bias-blocking sample estimation ability of Re-SAT, as we expected. Intuitively, selecting good bias-conflicting samples plays an important role in Re-SAT. For this reason, we investigate the size of the selected bias-conflicting sample set \(K\) in Re-SAT, as shown in Table 2. Re-SAT shows stable improvements over ERM for all intelligibility groups with \(K=2,4,8\). This supports our design choice of re-estimating the bias-conflicting samples throughout training: even though Re-SAT selects only a small bias-conflicting sample set, it can re-estimate newly underfitted data samples as bias-conflicting samples as training proceeds. On the contrary, Re-SAT with \(K=16\) even degrades the ASR results, since setting \(K\) as large as 16 interferes with the accurate estimation of the bias-conflicting samples. ## 6 Conclusions In this study, we address the debiasing problem of ASR on dysarthric speech towards a fair and inclusive ASR system. In contrast to previous research that focuses only on the ASR performance for dysarthric speakers, we explore a fairer validation setup by analyzing the performance on healthy and dysarthric speech at the same time. To achieve our goal, we propose a novel debiasing method based on sample reweighting, Re-SAT. The ASR system using Re-SAT surpasses the other baselines across the diverse dysarthric speakers and shows a robust performance gain over various hyperparameters. For future work, it is worth exploring an integrated system of Re-SAT and other dysarthric speech recognition systems by leveraging the versatile structure of Re-SAT. \begin{table} \begin{tabular}{l|l|c c c c} & & \multicolumn{4}{c}{WER (\%)} \\ \hline & Speaker & \multirow{2}{*}{ERM} & \multirow{2}{*}{JTT} & \multirow{2}{*}{Re-Loss} & Re-SAT \\ & (Intelligibility \%) & & & & (Ours) \\ \hline \multirow{5}{*}{VL} & M04 (2\%) & 84.39 & **79.87** & 82.45 & 80.26 \\ & F03 (6\%) & 44.33 & 42.21 & 40.28 & **38.25** \\ & M12 (7\%) & 44.41 & 42.04 & 41.72 & **35.38** \\ & M01 (17\%) & 50.97 & 51.77 & 47.10 & **44.19** \\ \cline{2-6} & Avg. & 54.66 & 52.46 & 51.50 & **48.09** \\ \hline \multirow{4}{*}{L} & M07 (28\%) & 16.22 & 23.78 & 16.96 & **16.04** \\ & F02 (29\%) & 19.72 & 22.86 & 16.59 & **15.94** \\ & M16 (43\%) & 22.04 & 23.55 & 20.32 & **19.57** \\ \cline{2-6} & Avg. & 19.19 & 23.39 & 17.84 & **17.06** \\ \hline \multirow{4}{*}{M} & M05 (58\%) & **12.63** & 13.55 & 12.90 & 12.72 \\ & M11 (62\%) & 14.30 & 12.80 & 12.04 & **10.11** \\ & F04 (62\%) & 13.88 & 13.88 & **13.22** & 14.64 \\ \cline{2-6} & Avg. & 13.57 & 13.44 & 12.75 & **12.59** \\ \hline \multirow{6}{*}{H} & M09 (86\%) & 9.86 & 11.15 & **8.66** & 8.76 \\ & M14 (90\%) & 12.17 & 13.73 & 11.98 & **9.86** \\ & M10 (93\%) & 3.50 & 3.32 & **2.40** & 2.76 \\ & M08 (95\%) & 4.61 & 5.53 & 4.70 & **3.78** \\ & F05 (95\%) & **2.03** & 4.33 & **2.03** & 2.76 \\ \cline{2-6} & Avg. & 6.43 & 7.61 & 5.95 & **5.59** \\ \hline \multicolumn{2}{l|}{Avg. (Dysarthric)} & 21.49 & 22.25 & 20.15 & **19.93** \\ \hline \multicolumn{2}{l|}{Avg. (Healthy)} & 3.81 & 4.21 & 3.75 & **3.40** \\ \hline \multicolumn{2}{l|}{Avg.}
& 12.93 & 13.51 & 12.20 & **12.08** \\ \hline \end{tabular} \end{table} Table 1: ASR results in terms of Word Error Rate (WER) on UASpeech corpus. VL/L/M/H refer to the very low/low/mid/high intelligibility groups, respectively. The percentage of intelligibility is measured based on how accurately naive human listeners can transcribe isolated words produced by speakers, according to [20]. \(K\) of Re-SAT is set to 4. \begin{table} \begin{tabular}{l|l l l l|l|l|l} & \multicolumn{4}{c|}{Intelligibility group} \\ \hline K & VL & L & M & H & \begin{tabular}{l} Avg. \\ (D) \\ \end{tabular} & \begin{tabular}{l} Avg. \\ (H) \\ \end{tabular} & Avg. \\ \hline ERM & 54.66 & 19.19 & 13.57 & 6.43 & 21.49 & 3.81 & 12.93 \\ \hline 2 & 50.12 & 17.49 & 14.15 & 5.75 & 19.93 & 3.72 & 12.08 \\ 4 & **48.09** & **17.06** & **12.59** & **5.59** & **19.05** & **3.40** & **11.47** \\ 8 & 49.44 & 17.61 & 12.62 & 6.36 & 19.75 & 3.84 & 12.05 \\ 16 & 55.31 & 21.10 & 13.60 & 6.91 & 22.21 & 3.96 & 13.36 \\ \hline \end{tabular} \end{table} Table 2: Comparison of the hyperparameter \(K\) in terms of word error rate (WER). D and H refer to dysarthric and healthy speech, respectively. Figure 2: Illustrations of the averaged loss ranks, sorted in descending order within the mini-batch, for each intelligibility group. The x-axis denotes training epochs and the y-axis denotes the loss rank in the mini-batch.
2308.15585
On hyperovals in $Q^+(6,4)$
According to a computer search conducted by the author and described in [7], in $Q^+(6, 4)$ there are two types of hyperovals, having 72 and 96 points, respectively. Here we give geometric descriptions for these examples.
Dmitrii V. Pasechnik
2023-08-29T19:28:54Z
http://arxiv.org/abs/2308.15585v1
# On hyperovals in \(Q^{+}(6,4)\) ###### Abstract. According to a computer search described in [8], in \(Q^{+}(6,4)\) there are two types of hyperovals, having 72 and 96 points, respectively. Here we give geometric descriptions for these examples. ## 1. Introduction A hyperoval in a partial linear space is a subset of points intersecting each line in either \(0\) or \(2\) points. Classical examples are hyperovals in projective planes of order \(2^{k}\). In particular, hyperovals in the projective plane \(PG(2,4)\) over \(\mathbb{F}_{4}\) (i.e. of order \(4\)) appear as building blocks of Witt designs, leading to the Mathieu sporadic simple groups \(M_{22}\), \(M_{23}\) and \(M_{24}\). More generally, hyperovals in polar spaces over \(\mathbb{F}_{4}\) lead to more sporadic simple groups, those of Fischer, see [8], where this has been investigated, in part relying on computer searches, and further, to the Baby Monster, see [7]. One of these searches in [8] succeeded in enumerating the hyperovals in \(Q^{+}(6,4)\), the line Grassmannian \(\mathcal{L}\) of \(PG(3,4)\). There are two examples (up to the group action), on 72 and on 96 points. In this note we provide a geometric interpretation of the examples of hyperovals in \(\mathcal{L}\) found there. Other hyperovals studied in [8] were further investigated in [4, 3]. Combinatorially, hyperovals in \(\mathcal{L}\) are locally \(5\times 5\)-grid graphs, recently studied in [1], a particular type of extended generalised quadrangles, see e.g. [2, 9]. Recall that the _lines_ of the Grassmannian \(\mathcal{L}\) consist of the \(5\) lines of \(\Pi\) through a point \(p\) on a plane \(P\). We will refer to them as _pencils_ and denote them by \((p,P)\), for \(p\in P\), to avoid confusion between the lines of \(\Pi:=PG(3,4)\) and the lines of \(\mathcal{L}\). That is, a hyperoval of \(\mathcal{L}\) is a set of lines of \(\Pi\) that intersects each pencil in \(0\) or \(2\) lines. ## 2. Geometric constructions **The 72-point example.** Let \(H\) be a hyperbolic quadric in \(\Pi\), that is, the quadric of \(+\) type, with the automorphism group \(PGO_{4}^{+}(4)\). There are two classes, each of size \(5\), of mutually skew lines on \(H\); together they form a \(5\times 5\) grid, each of the classes covers the \(25\) points of \(\Pi\) on \(H\), and, dually, there are \(25\) planes of \(\Pi\) intersecting \(H\) in \(9\) points (which lie on the union of two intersecting lines on \(H\)). See e.g. [6, Sect. 15.3] for details. The remaining \(60\) planes of \(\Pi\) intersect \(H\) in the \(5\) points of a conic; dually, each point of \(\Pi\) not on \(H\) lies on \(5\) planes intersecting \(H\) in \(9\) points. Out of the \(357\) lines of \(\Pi\), \(10\) lie on \(H\); there is also a non-empty set \(\mathcal{O}\) of lines that do not intersect \(H\). Each plane \(P\) intersecting \(H\) in a conic \(C\) contains \(6\) of the lines in \(\mathcal{O}\). Indeed, there are \(10\) lines intersecting \(C\) in two points, and \(5\) intersecting \(C\) in one point; the remaining \(6\) do not intersect \(C\), and thus do not intersect \(H\). We have \(60\times 6/5=72=|\mathcal{O}|\). Dually, each point outside \(H\) is on \(6\) lines from \(\mathcal{O}\). Let \(L\) be a pencil \((p,P)\). Let \(\ell\in L\cap\mathcal{O}\). Then \(P\) intersects \(H\) in a conic \(C\), and as \(p\) is on exactly two lines in \(P\) missing \(C\), we see that there is exactly one more line in \(L\cap\mathcal{O}\).
Hence any pencil of \(\mathcal{L}\) intersects \(\mathcal{O}\) in \(0\) or \(2\) elements, and we have proved the following. **Proposition 1**.: _The \(72\) lines skew to a hyperbolic quadric in \(\Pi\) form a hyperoval in \(\mathcal{L}\). _ **The 96-point example.** Let \(\hat{S}\) be a regular line spread (also known as _elliptic congruence_, as it is related to a class of elliptic quadrics, with \(\hat{S}\) tangent lines to any of them) in \(\Pi\). See e.g. [6, Sect. 17.1] for details. In particular, \(\hat{S}\) consists of \(17\) lines covering all the \(85\) points of \(\Pi\), and dually, each plane of \(\Pi\) contains a line in \(\hat{S}\). Let \(s\in\hat{S}\), and denote \(S:=\hat{S}\setminus\{s\}\). The lines of \(\Pi\) are partitioned into \(S\), the set \(s^{\perp}\) of the \(101\) lines equal to or intersecting \(s\), and the set \(A\) of the remaining \(240\) lines. The stabiliser of \(\hat{S}\) in \(PGL_{4}(4)\) acts1 as \(PGL_{2}(16)=PGO_{4}^{-}(4)\) on \(\hat{S}\), and the subgroup \(G_{S}\) fixing \(s\) in this action acts as \(AGL_{2}(16)=2^{4}:15\) on \(S\). Footnote 1: This action has a kernel of order \(5\), acting transitively on the points of each line. Observe that \(G_{S}\) acts transitively on \(A\); moreover, \(G_{S}\) has a subgroup \(G_{S}^{*}\) of index \(3\), which has \(3\) orbits \(A_{1}\), \(A_{2}\), \(A_{3}\) on \(A\), each of size \(80\). With \(\zeta\in\mathbb{F}_{16}^{*}\) of (multiplicative) order \(15\), \(G_{S}^{*}\) may be assumed to be \[G_{S}^{*}:=\left\langle\begin{pmatrix}\zeta^{3}&0\\ 0&1\end{pmatrix},\begin{pmatrix}1&0\\ 0&\zeta^{3}\end{pmatrix},\begin{pmatrix}1&0\\ 1&1\end{pmatrix}\right\rangle,\] and \(\zeta^{3}\mapsto\begin{pmatrix}\omega&1\\ \omega^{2}&1\end{pmatrix}\), with \(\omega\in\mathbb{F}_{4}^{*}\) of order \(3\), specifies an embedding of \(G_{S}^{*}\) into \(PSO_{4}^{-}(4)\). Note that \(|G_{S}^{*}|=2^{4}.5^{2}=400\). We are going to show that \(\mathcal{O}^{\prime}:=S\cup A_{1}\) is a hyperoval of \(\mathcal{L}\). Any pencil \((p,P)\) of \(\mathcal{L}\), such that either \(p\in s\) or \(s\in P\), does not contain any element of \(S\cup A_{1}\). Thus we need to consider the pencils \((p,P)\), such that \(p\not\in s\not\in P\). Such a pencil contains a line \(p_{s}\in s^{\perp}\) joining \(p\) to a point on \(s\). There exists a unique \(\ell\in S\) in \(P\). Let \(p\in\ell\). Then the remaining \(3\) lines of \((p,P)\) lie in one orbit, \(A\), of \(G_{S}\). As \(3\) does not divide \(|G_{S}^{*}|\), these remaining \(3\) lines lie in different orbits of \(G_{S}^{*}\). Thus \(A_{1}\) intersects \((p,P)\) in exactly one line \(\ell^{\prime}\), and the \(2\) lines in \(\mathcal{O}^{\prime}\cap(p,P)\) are \(\ell\) and \(\ell^{\prime}\). It remains to deal with the case where \(p\not\in\ell\). There are \(4\) lines in \((p,P)\) which could potentially intersect \(\mathcal{O}^{\prime}\). By the choice of \(\ell\), this intersection is contained in \(A_{1}\). In the previous case, we have established that \(p_{k}\in\ell\) is incident to a unique \(\ell_{k}\in A_{1}\), for \(1\leq k\leq 5\). It turns out that \(\ell,\ell_{1},\dots,\ell_{5}\) form a dual hyperoval in \(P\). Indeed, the stabiliser of \(P\) in \(G_{S}^{*}\) is of order \(5\). An element of order \(5\) in the automorphism group of \(P\) fixes a point, which must be \(s\cap P\), a line not containing this point, which must be \(\ell\), and its orbits on the lines not through the fixed point are three dual conics, one of which consists of \(\ell_{1},\dots,\ell_{5}\).
Therefore \(p\) is incident to either exactly \(2\) lines from \(\ell_{1},\dots,\ell_{5}\), or to none of them. Hence **Proposition 2**.: _The \(96\) lines in \(\mathcal{O}^{\prime}\) form a hyperoval in \(\mathcal{L}\). _ **Remark 3**.: The full automorphism group of the locally \(5\times 5\)-grid graph associated with the \(96\)-line example is \(8\) times bigger than \(G_{S}^{*}\). In particular, there is an automorphism swapping the two classes of the \(6\)-cliques corresponding to points and hyperplanes of \(\Pi\) not on \(s\), and the Galois group of \(\mathbb{F}_{4}\). ## 3. Concluding remarks Combining the computer computations and the above observations, one establishes the following. **Proposition 4**.: _Let \(\ell\) be an element of a hyperoval in \(\mathcal{L}\). Then there exists a hyperbolic quadric \(H\) in \(\Pi\) so that \(\ell^{\perp}\) in the collinearity graph of \(\mathcal{L}\) coincides with the \(\ell^{\perp}\) in the \(72\)-point hyperoval associated with \(H\)._ With Proposition 4 at hand, it should be possible to provide a complete classification of the hyperovals in \(\mathcal{L}\). The author observed and Antonio Pasini confirmed in a personal communication that the \(72\)-point construction generalises to any \(PG(3,2^{k})\), with \(k>2\), providing a diagram geometry with diagram \(o-L-o==o\), where \(L\) denotes a certain partial linear space of lines in \(PG(2,2^{k})\) missing a hyperoval. ### Acknowledgement We thank Antonio Pasini for a helpful discussion. The author was supported by the EU OpenDreamKit Horizon 2020 project. GAP [5] was used to carry out various experiments and tests.
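The counting in Section 2 is small enough to check by brute force. The sketch below is an illustration added here (it is not part of the paper, nor of the GAP computations cited above): it takes \(x_{0}x_{1}+x_{2}x_{3}=0\) as a model hyperbolic quadric in \(PG(3,4)\) and verifies the counts 85, 25, 357, 10 and 72, together with the pencil condition of Proposition 1.

```python
# Brute-force check of the 72-line hyperoval, with x0*x1 + x2*x3 = 0 as the quadric.
from itertools import combinations, product

def gmul(x, y):                      # multiplication in GF(4) = F2[a]/(a^2 + a + 1)
    r = 0
    for i in range(2):
        if (y >> i) & 1:
            r ^= x << i
    return r ^ 0b111 if r & 0b100 else r

INV = {1: 1, 2: 3, 3: 2}             # multiplicative inverses in GF(4)

def normalize(v):                    # projective representative: first nonzero entry = 1
    c = next(x for x in v if x)
    return tuple(gmul(INV[c], x) for x in v)

points = {normalize(v) for v in product(range(4), repeat=4) if any(v)}
quadric = {p for p in points if gmul(p[0], p[1]) ^ gmul(p[2], p[3]) == 0}

def line(p, q):                      # the 5 points of the line spanned by p and q
    pts = {normalize(q)}
    for lam in range(4):
        pts.add(normalize(tuple(a ^ gmul(lam, b) for a, b in zip(p, q))))
    return frozenset(pts)

lines = {line(p, q) for p, q in combinations(points, 2)}
skew = {L for L in lines if L.isdisjoint(quadric)}
print(len(points), len(quadric), len(lines))            # 85 25 357
print(sum(L <= quadric for L in lines), len(skew))      # 10 72

# Pencil condition: every pencil (p, P) meets the skew lines in 0 or 2 lines
planes = [{p for p in points
           if gmul(f[0], p[0]) ^ gmul(f[1], p[1]) ^ gmul(f[2], p[2]) ^ gmul(f[3], p[3]) == 0}
          for f in points]           # dual points give the 85 planes
print(all(sum(L in skew for L in lines if p in L and L <= P) in (0, 2)
          for P in planes for p in P))                  # True
```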
2308.05796
Crystalline-electromagnetic responses of higher order topological semimetals
Previous work has shown that time-reversal symmetric Weyl semimetals with a quadrupolar arrangement of first-order Weyl nodes exhibit a mixed crystalline-electromagnetic response. For systems with higher order Weyl nodes, which are attached to both surface and hinge Fermi arcs, additional phenomena appear on surfaces of codimension $n>1$, such as electromagnetic responses of the hinges. Here we construct a model possessing a quadrupole of higher order Weyl nodes to study the interplay between higher order topology and mixed crystalline-electromagnetic responses. We show that the higher order nature of the Weyl nodes yields a dipole of Dirac nodes on certain surfaces, leading to a mixed crystalline-electromagnetic \emph{surface} response that binds charge to dislocations and momentum-density to magnetic fields. In addition, we show that the model possesses a bulk quadrupole moment of crystal-momentum that provides a link between the bulk and surface responses of the system.
Mark R. Hirsbrunner, Alexander D. Gray, Taylor L. Hughes
2023-08-10T18:00:01Z
http://arxiv.org/abs/2308.05796v1
# Crystalline-electromagnetic responses of higher order topological semimetals ###### Abstract Previous work has shown that time-reversal symmetric Weyl semimetals with a quadrupolar arrangement of first-order Weyl nodes exhibit a mixed crystalline-electromagnetic response. For systems with higher order Weyl nodes, which are attached to both surface and hinge Fermi arcs, additional phenomena appear on surfaces of codimension \(n>1\), such as electromagnetic responses of the hinges. Here we construct a model possessing a quadrupole of higher order Weyl nodes to study the interplay between higher order topology and mixed crystalline-electromagnetic responses. We show that the higher order nature of the Weyl nodes yields a dipole of Dirac nodes on certain surfaces, leading to a mixed crystalline-electromagnetic _surface_ response that binds charge to dislocations and momentum-density to magnetic fields. In addition, we show that the model possesses a bulk quadrupole moment of crystal-momentum that provides a link between the bulk and surface responses of the system. ## I Introduction Topological semimetals (TSMs) possess quasi-topological terms in their bulk electromagnetic responses that are governed by the configuration of their nodal points or lines in momentum space [1; 2; 3; 4; 5; 6; 7; 8]. In particular, the responses of point node TSMs are proportional to the chirality-weighted momentum space multipole moments of the nodal points, i.e., monomials of their momentum-space location weighted by their chirality or helicity. For example, in the simplest case of a time-reversal breaking Weyl semimetal (WSM) with two nodes, the magnitude of the bulk anomalous Hall conductivity is proportional to the dipole moment of the Weyl nodes in momentum space [9; 10; 11; 12]. Additionally, these bulk responses are often necessary to compensate for anomalous surface states, such as chiral Fermi arcs in time-reversal breaking WSMs [10]. In recent years the field of TSMs has grown to include higher order TSMs (HOTSMs) that are characterized by spectral features and other phenomena on surfaces of codimension \(n>1\). The nodal points of HOTSMs differ from conventional TSMs in that they are attached to both surface and hinge Fermi arcs. Heuristically such a node separates gapped momentum space planes that differ in both Chern number _and_ some form of 2D higher order topology. The family of HOTSMs is quite diverse, including higher order analogs of Dirac and Weyl semimetals [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27], nodal line semimetals [28], nodal superconductors [29; 30; 31; 32], non-Hermitian TSMs [33; 34; 35; 36], and periodically driven Floquet TSMs [37; 38; 39; 40; 41]. In some instances, HOTSMs possess additional boundary states and/or electromagnetic responses beyond first-order TSMs. For example, second order WSMs exhibit both surface Fermi arcs and hinge states that generate competing surface and hinge responses [23] in which the bulk charge bound to a magnetic flux (via the anomalous Hall effect) is constrained by the charge bound to hinges parallel to the flux. Similarly, conventional type-I Dirac semimetals (DSMs) have a bulk spin-Hall-like response determined by the momentum-space dipole moment of the Dirac nodes [12], while some higher order DSMs also possess a bulk electric quadrupole moment that generates a surface polarization response [13]. 
In parallel to these developments of HOTSMs, recent studies have shown that TSMs can exhibit mixed crystalline-electromagnetic responses in addition to purely electromagnetic responses. These mixed crystalline-electromagnetic responses are often probed by subjecting systems to dislocation and disclination defects [42]. TSMs typically possess interesting response phenomena to such defects because the TSM nodal surfaces are protected by translation symmetry and, in some cases, rotation symmetries [43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54]. For example, time-reversal symmetric WSMs with a quadrupole arrangement of Weyl nodes in momentum space have electric charge bound to screw dislocations and crystal momentum bound to magnetic flux [47; 48; 53]. Motivated by these unusual electromagnetic responses, here we take the first steps toward understanding the mixed crystalline-electromagnetic responses of higher order TSMs. In Section II we introduce a model of a TSM with a quadrupole arrangement of higher order Weyl nodes and characterize its topological features. In Section III we show that this model possesses a rank-2 mixed crystalline-electromagnetic response similar to that found in Refs. [47; 48] for quadrupolar arrangements of first-order Weyl nodes. Furthermore, we demonstrate that the higher order nature of the Weyl nodes in our model leads to an additional _surface_ crystalline-electromagnetic response arising from the presence of a dipole of surface Dirac nodes. In Section IV we show that this model can possess a bulk quadrupole moment of equilibrium crystal momentum. We show that the magnitude of this quadrupole moment of momentum is determined by both the momentum-space quadrupole moment of the bulk Weyl nodes and the momentum-space dipole moment of the surface Dirac nodes. This result is a generalization of the notion of characterizing TSMs via multipole moments of the nodal point distribution to the broad class of HOTSMs. In Section V we conclude with a discussion of future directions for this work. ## II Model In this section we construct a model of a time-reversal symmetric Weyl semimetal in which higher order Weyl nodes are arranged in a quadrupole pattern. We discuss bulk indicators of the topology of this model and the associated bulk, surface, and hinge spectra. Consider the following Bloch Hamiltonian, \[\begin{split} H(\mathbf{k})&=\sin(k_{x})\sin(k_{y}) \Gamma_{1}+\sin(k_{z})\Gamma_{2}\\ &+(m+\cos(k_{x})+\beta\cos(k_{z}))\,\Gamma_{3}\\ &+(m+\cos(k_{y})+\beta\cos(k_{z}))\,\Gamma_{4}\\ &+i\gamma\Gamma_{1}\Gamma_{2},\end{split} \tag{1}\] where \(\Gamma_{i}\) is a set of five anti-commuting \(4\times 4\) matrices. We use the basis \(\Gamma_{0}=\sigma_{2}\otimes\sigma_{0}\), \(\Gamma_{1}=\sigma_{1}\otimes\sigma_{1}\), \(\Gamma_{2}=\sigma_{1}\otimes\sigma_{2}\), \(\Gamma_{3}=\sigma_{1}\otimes\sigma_{3}\), and \(\Gamma_{4}=\sigma_{3}\otimes\sigma_{0}\), where \(\sigma_{i}\) are the Pauli matrices. This Hamiltonian possesses a range of symmetries: spinless time-reversal symmetry (TRS), \(\mathcal{T}=K\mathbb{I}\), two-fold rotation symmetry about each axis, \(C_{2x}=C_{2y}=\Gamma_{1}\Gamma_{2}\), \(C_{2z}=\mathbb{I}\), mirror symmetry about the \(x=y\) and \(x=-y\) planes, \(M_{1,1}=M_{1,-1}=(\Gamma_{3}-\Gamma_{4})\Gamma_{0}/\sqrt{2}\), the product of four-fold rotation and reflection along the \(z\)-axis, \(C_{4z}M_{z}=(\Gamma_{3}+\Gamma_{4})/\sqrt{2}\), and the product of inversion and chiral symmetry, \(P\Xi=\Gamma_{2}\). 
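As a quick numerical illustration (added here, not the authors' code), the Bloch Hamiltonian of Eq. (1) can be assembled directly from the Gamma matrices defined above and diagonalized to reproduce spectra such as those in Figs. 1a and 1b:

```python
# Sketch: Eq. (1) as a 4x4 Bloch Hamiltonian in NumPy.
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Gamma matrices in the basis quoted in the text
G0, G1, G2, G3, G4 = (np.kron(sy, s0), np.kron(sx, sx), np.kron(sx, sy),
                      np.kron(sx, sz), np.kron(sz, s0))

def bloch_h(kx, ky, kz, m=-0.3, beta=-0.7, gamma=0.5):
    """H(k) of Eq. (1); the default parameters are those used for Fig. 1b."""
    return (np.sin(kx) * np.sin(ky) * G1
            + np.sin(kz) * G2
            + (m + np.cos(kx) + beta * np.cos(kz)) * G3
            + (m + np.cos(ky) + beta * np.cos(kz)) * G4
            + 1j * gamma * G1 @ G2)

# e.g. the four bands at a generic point in the k_z = 0 plane
print(np.linalg.eigvalsh(bloch_h(0.4, 0.0, 0.0)))
```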
We first consider the bulk energy spectrum of \(H(\mathbf{k})\) in the special case \(m+\beta=-1\) and \(\gamma=0\), for which a quadratic band crossing (QBC) appears at \(\Gamma\), as shown in Fig. 1a. While \(m\) and \(\beta\) can be tuned to generate QBCs at other high-symmetry points of the BZ, we only consider parameter ranges that place the QBC at \(\Gamma\). Departing from this starting point by tuning \(\gamma\) away from zero splits the QBC into four Weyl nodes that move apart along the \(k_{x}\) and \(k_{y}\) axes. We show the finite-\(\gamma\) spectrum in Fig. 1b which clearly depicts the Weyl nodes on the \(\Gamma X\) and \(\Gamma Y\) lines. To identify the bulk topology, we recall that Weyl nodes act as quantized sources of Berry curvature. As such, the Chern number of any surface in momentum space that encloses a single Weyl node is \(C=\pm 1\), where the sign is determined by the chirality \(\chi\) of the node. Consequently, we can foliate the Brillouin zone into families of fixed momentum planes, and planes that are separated by a Weyl node must have Chern numbers differing by \(\chi\). This planar family picture is very convenient and we denote the Hamiltonian restricted to two-dimensional momentum planes normal to the \(k_{i}\) axis as \(H(\mathbf{k};k_{i})\). In Fig. 1c we plot the Chern numbers of \(H(\mathbf{k},k_{i})\) for \(i=x,y,z\) as functions of \(k_{i}\) with \(m=-0.3\), \(\beta=-0.7\) and \(\gamma=0.5\). The discrete jumps in Chern number at the Weyl nodes indicate that the chiralities of the nodes on the \(k_{x}\) and \(k_{y}\) axes are negative and positive, respectively. The fixed-momentum planes having non-vanishing Chern number generate chiral edge modes along open boundaries. The collection of these edge states comprise the surface Fermi arcs that connect projections of the Weyl nodes in the surface BZ. In Fig. 2a we plot the surface spectrum of \(H(\mathbf{k})\) with open boundary conditions along the \(z\)-direction. At zero energy there are a pair of intersecting Fermi arcs, which we depict in blue, on the surface normal to the \(z\)-direction, with one nodal arc on the \(k_{x}\)-axis and another arc on the \(k_{y}\)-axis. At energies above or below \(E=0\) the Fermi arcs form portions of a hyperbola that originate at the positive chirality nodes, nearly meet at the origin, and then turn in opposite directions to eventually terminate at the negative chirality nodes. Indeed, the dispersion around \(\Gamma\) is that of a saddle point \(E=k_{x}k_{y}\), hence this model is another realization of a surface rank-2 chiral fermion [47]. For comparison, in Fig. 2b we plot the surface spectrum with open boundaries in the \(x\)-direction with \(m=-0.3\), \(\beta=-0.7\), and \(\gamma=0.5\). We find that the Fermi arcs that appear on the \(x\)-normal surface originate at \(\Gamma\) and terminate at the positive- and negative-momentum projections of the Weyl nodes on the \(k_{y}\)-axis. On a \(y\)-normal surface the relative chirality of the nodes switches, but the Fermi arcs are identical because of the mirror and rotational symmetries of \(H(\mathbf{k})\). We can characterize arrangements of Weyl nodes by calculating the momentum-space multipole moments of the nodes weighted by the node chiralities. In particular, we define the Weyl dipole \(P_{i}\) and Weyl quadrupole \(Q_{ij}\) moments as \[P_{a}=\sum_{n}\chi^{n}k_{a}^{n},\quad Q_{ab}=\sum_{n}\chi^{n}k_{a}^{n}k_{b}^{n}, \tag{2}\] where \(n\) indexes the nodes. 
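The plane-resolved Chern numbers plotted in Fig. 1c, which fix the chiralities entering Eq. (2), can be reproduced with a standard lattice link-variable (Fukui-Hatsugai-style) calculation. The sketch below is illustrative and reuses the `bloch_h` function from the snippet above; the grid size and half-filling choice are assumptions of the sketch:

```python
# Sketch: Chern number of a fixed-k_x plane of H(k) at half filling (2 occupied bands).
import numpy as np

def plane_chern(h2d, n_occ=2, n_grid=40):
    """Lattice field-strength Chern number of a gapped 2D Bloch Hamiltonian h2d(k1, k2)."""
    ks = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    occ = [[np.linalg.eigh(h2d(k1, k2))[1][:, :n_occ] for k2 in ks] for k1 in ks]
    c = 0.0
    for i in range(n_grid):
        for j in range(n_grid):
            u00, u10 = occ[i][j], occ[(i + 1) % n_grid][j]
            u11, u01 = occ[(i + 1) % n_grid][(j + 1) % n_grid], occ[i][(j + 1) % n_grid]
            link = (np.linalg.det(u00.conj().T @ u10)
                    * np.linalg.det(u10.conj().T @ u11)
                    * np.linalg.det(u11.conj().T @ u01)
                    * np.linalg.det(u01.conj().T @ u00))
            c += np.angle(link)   # accumulate the plaquette field strength
    return c / (2.0 * np.pi)

# C(k_x) for a plane between Gamma and the Weyl node on the k_x axis (cf. Fig. 1c)
print(plane_chern(lambda ky, kz: bloch_h(0.2, ky, kz)))
```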
The Weyl dipole moment of \(H(\mathbf{k})\), which is proportional to the anomalous Hall conductivity, vanishes as required by TRS. In contrast, we find that the diagonal quadrupole moments \(Q_{xx}\) and \(Q_{yy}\) are non-vanishing, and the mirror symmetries along the \(x=y\) and \(x=-y\) axes require them to have the same magnitude and opposite sign. We consider these moments because, as mentioned above, the dipole moment is directly related to the anomalous Hall coefficient, and recent works have shown that the quadrupole moment characterizes mixed crystalline-electromagnetic responses, e.g., screw dislocations bind electric charge, and magnetic flux binds crystal momentum [48; 47]. Below we show that the Weyl nodes in our model are, in fact, higher order Weyl nodes, and investigate the mixed crystalline-electromagnetic responses that arise from quadrupole arrangements of higher order Weyl nodes. We have mentioned that first order Weyl nodes represent a transition (as a function of momentum) between insulator phases on planes of the foliated BZ where the Chern number differs by the Weyl chirality. In contrast, higher order Weyl nodes separate insulators that differ by both a Chern number and some type of 2D higher order topology. In our case the higher order topology is that of a quadrupole insulator (QI) [55; 56; 57; 58; 59; 60; 61; 62; 63; 64]. Depending on the symmetry, such QI phases can be either bulk obstructed or boundary obstructed [65; 66; 67; 68; 69], and they are characterized by a quantized bulk electric quadrupole moment \(q_{xy}=e/2\) and a quantized, vanishing bulk charge polarization. The bulk electric quadrupole moment is defined as \(q_{xy}=p_{x}^{0}+p_{y}^{0}-Q_{corner}\mod 1\), where \(p_{x}^{0}\) and \(p_{y}^{0}\) are the electric polarizations on \(\hat{y}\)- and \(\hat{x}\)-normal surfaces, respectively, and \(Q_{corner}\) is the charge localized on a corner where two such surfaces meet. One typical manifestation of a bulk electric quadrupole moment \(q_{xy}\) is a set of corner charges in systems with open boundary conditions in both the \(x\)- and \(y\)-directions. For our model these corner charges are accompanied by a set of four mid-gap corner modes, the occupation of which determines the pattern of signs of the corner charges. To show that the Weyl nodes in our model are higher order we need a procedure to diagnose the QI topology. One approach is to study the pair of Berry phases \((p_{x}^{\nu_{y}},p_{y}^{\nu_{x}})\) of the hybrid Wannier bands \(\nu_{y}(k_{x})\) and \(\nu_{x}(k_{y})\)[55; 56]. These Berry phases, which are referred to as nested Wilson loops, indicate the QI phase with non-vanishing \(q_{xy}\) when they are both non-trivial, i.e., when \((p_{x}^{\nu_{y}},p_{y}^{\nu_{x}})=(1/2,1/2)\). The symmetry restrictions required to quantize the nested Wilson loops are more stringent than those required to enforce a non-vanishing, quantized quadrupole moment, so this approach can be applied only in a reduced parameter region of our model. Typically, a pair of mirror symmetries is needed to quantize the nested Wilson loops, but our putative QI insulator Hamiltonians \(H(\mathbf{k},k_{x})\) and \(H(\mathbf{k},k_{y})\) instead possess pairs of mirror _times time-reversal_ symmetries. These symmetries are represented by \(M_{x/y}\mathcal{T}=\mathbb{I}_{4\times 4}\) and \(M_{z}\mathcal{T}=\Gamma_{1}\Gamma_{2}\), and descend from the \(C_{2x/y}\), \(C_{2z}\), and \(\mathcal{T}\) symmetries of \(H(\mathbf{k})\).
These mirror times time-reversal symmetries quantize the bulk quadrupole moment but do not quantize the nested Wilson loops. We can make progress by noting that these symmetries are elevated to conventional mirror symmetries in the limit \(\gamma=0\).

Figure 1: (a) The band structure of \(H(\mathbf{k})\) along high-symmetry lines in the \(k_{z}=0\) plane with \(m=-0.3\), \(\beta=-0.7\), and \(\gamma=0\). The band structure possesses a QBC at \(\Gamma\) and is otherwise gapped. (b) The band structure of \(H(\mathbf{k})\) with \(m=-0.3\), \(\beta=-0.7\), and \(\gamma=0.5\). The finite value of \(\gamma\) splits the QBC into four Weyl nodes, two on the \(k_{x}\) axis and two on the \(k_{y}\) axis. (c) The Chern number of \(H(\mathbf{k};k_{x})\) (solid blue), \(H(\mathbf{k};k_{y})\) (dashed red), and \(H(\mathbf{k};k_{z})\) (dot-dashed green) as functions of the perpendicular momentum with \(m=-0.3\), \(\beta=-0.7\), and \(\gamma=0.5\). The changes in the Chern number as the perpendicular momenta are tuned through Weyl nodes indicate that the nodes along \(k_{x}\) and \(k_{y}\) are of negative and positive chirality, respectively. (d) The nested Wilson loops \(p_{x}^{\nu_{y}}\) (blue crosses) and \(p_{y}^{\nu_{x}}\) (blue circles), bulk gap (solid red line), and surface gap (dashed red line) of \(H(\mathbf{k};k_{x})\) with \(m=-0.3\), \(\beta=-0.7\), and \(\gamma=0.0\). (e) The finite-\(\gamma\) phase diagram of \(H(\mathbf{k};k_{x})\) with \(m=-0.3\) and \(\beta=-0.7\). The solid and dashed black lines indicate bulk and surface gap closings of \(H(\mathbf{k};k_{x})\), respectively. The light green region is adiabatically connected to the \(\gamma=0\) QI phase and therefore has \(C=0\) and \(q_{xy}=e/2\). The red and blue regions are Chern insulator phases with \(C=\pm 1\), and the white regions are trivial.

Hence the \(\gamma=0\) limit permits the computation of the bulk quadrupole moment via the nested Wilson loops. We present the results of this computation in Fig. 1d, where we plot the bulk gap, surface gap, and nested Wilson loops of \(H(\mathbf{k},k_{x})\) with \(m=-0.3\), \(\beta=-0.7\), and \(\gamma=0\) as a function of \(k_{x}\). We find that the bulk gap closes at \(k_{x}=0\), corresponding to the QBC at \(\Gamma\). Interestingly, the surface gap closes at a pair of momenta \(k_{x}=\pm k_{0}\), far away from the location of the bulk gap closing. For \(0<|k_{x}|<|k_{0}|\), both nested Wilson loops are quantized to \(1/2\), confirming the presence of a non-trivial QI phase for each fixed-\(k_{x}\) plane in this interval. One of the two nested Wilson loops changes values at the surface gap closing at \(|k_{x}|=k_{0}\), leaving the region \(|k_{x}|>k_{0}\) with only a single non-trivial nested Wilson loop, indicating a phase with vanishing quadrupole moment for all fixed-\(k_{x}\) planes in this interval. While we can only calculate the quantized nested Wilson loops for \(\gamma=0\), we can extend the results to the \(\gamma\neq 0\) case by using an adiabatic argument. As long as the crystal symmetries that quantize \(q_{xy}\) and the \(x,y\) components of the (bulk) polarization are maintained, the bulk quadrupole moment can change only at bulk or surface gap closing points. Thus, knowing the results for \(\gamma=0\), we can determine the bulk quadrupole moment at finite \(\gamma\) via a straightforward adiabatic argument.
At any momentum \(k_{x}\) for which \(H(\mathbf{k},k_{x})\) realizes the \(\gamma=0\) QI phase, the Hamiltonian will remain in the QI phase at finite \(\gamma\) as long as there are no intervening bulk or surface gap closings, and the quantizing symmetry is maintained. We plot the locations of the bulk and surface gap closings of \(H(\mathbf{k},k_{x})\) in Fig. 1e as a function of \(k_{x}\) and \(\gamma\) with \(m=-0.3\) and \(\beta=-0.7\). The splitting of the QBC into Weyl nodes nucleates a pair of \(C=-1\) and \(C=+1\) Chern insulator phases on opposite sides of \(k_{x}=0\), indicated in blue and red, respectively. The locations of the surface gap closings do not depend on \(\gamma\), so the QI remains intact for \(k_{\text{Weyl}}<|k_{x}|<k_{0}\), where \(k_{\text{Weyl}}\) is the location of the Weyl node on the \(k_{x}\) axis. Similar results obtain when we consider the 2D, fixed-momentum phases as a function of \(k_{y}\) instead of \(k_{x}.\) This confirms that the Weyl nodes in this system separate Chern insulator phases from QI phases and are higher order Weyl nodes. The surface gap closings that bound the QI phases of \(H(\mathbf{k},k_{x})\) appear as a pair of surface Dirac cones at opposite values of \(k_{x}\) on the \(k_{z}=\pi\) boundary of the \(y\)-normal surface BZ. An analogous pair of surface Dirac cones appears on \(x\)-normal surface BZs owing to the rotation and mirror symmetries of \(H(\mathbf{k})\). We plot the \(x\)-normal surface spectrum with \(m=-0.3\), \(\beta=-0.7\), and \(\gamma=-0.5\) in Fig. 2b, in which the surface Dirac cone at positive \(k_{y}\) is visible and depicted in red. With open boundary conditions along both the \(y\)- and \(z\)-directions, the hinge spectrum of \(H(\mathbf{k})\), shown in Fig. 2c, exhibits a pair of mid-gap flat bands in the hinge BZ spanning between the projections of the bulk Weyl nodes and the surface Dirac nodes. These mid-gap hinge arcs originate from the mid-gap corner modes of the QI phase. We find identical results for hinges parallel to \(\hat{y}\) as ensured by the mirror and rotation symmetries of \(H(\mathbf{k})\). In the next section we study the mixed crystalline-electromagnetic responses that arise from such quadrupole arrangements of higher order Weyl nodes, with \(H(\mathbf{k})\) serving as an explicit realization. Additionally, in Sec. IV we study some further consequences of the mid-gap hinge states. ## III Mixed Charge-Momentum Responses It was recently shown that semimetals hosting a quadrupole configuration of Weyl nodes exhibit a mixed charge-momentum response that binds crystal momentum to magnetic flux and electric charge to screw dislocations [47; 48]. Here we confirm that the Hamiltonian Eq. (1) also exhibits this response. Furthermore, we show that the higher order nature of our model's Weyl nodes leads to an additional _surface_ mixed charge-momentum response. This surface response manifests as crystal momentum bound to magnetic flux and electric charge bound to dislocations. The mixed charge-momentum response of topological semimetals hosting a Weyl quadrupole is captured by the effective action \[S[A,\mathfrak{e}]=-\frac{e}{8\pi^{2}}\int d^{4}x\,\epsilon^{\mu\nu\rho\sigma}Q _{ab}\mathfrak{e}_{\mu}^{a}A_{\nu}\partial_{\rho}\mathfrak{e}_{\sigma}^{b}, \tag{3}\] where \(Q_{ab}\) is the quadrupole moment of the Weyl nodes, \(A_{\mu}\) is the electromagnetic gauge field, and \(\mathfrak{e}_{\mu}\) are the translation gauge fields [70; 71; 72; 73; 50; 74]. 
For our model we simplify this action by noting that only the diagonal elements of the quadrupole moment \(Q_{ab}\) are non-vanishing, with \(Q_{xx}=-Q_{yy}\equiv\bar{Q}\). One response encoded by this action is the binding of momentum density to magnetic flux that points along the \(x\)- or \(y\)-directions, \[\mathcal{J}_{a}^{0}=\frac{e\bar{Q}}{8\pi^{2}}B_{a}\left(\delta_{ax}-\delta_{ay}\right), \tag{4}\] where the bound momentum points along the magnetic field and the momentum density of the electrons is defined as \[\mathcal{J}_{a}^{0}=\frac{1}{e}\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\,k_{a}j^{0}(\mathbf{k}). \tag{5}\] There is also a conjugate response wherein charge is bound to screw dislocations that have Burgers vectors in the \(xy\)-plane, \[j^{0}=\frac{e\bar{Q}}{8\pi^{2}}\left(\mathcal{B}_{x}^{x}-\mathcal{B}_{y}^{y}\right). \tag{6}\] In these two response equations \(B_{a}\) are the components of the magnetic field, \(\mathcal{B}_{i}^{i}=\epsilon^{ijk}\partial_{j}\mathfrak{e}_{k}^{i}\) is the torsional magnetic field induced by a screw dislocation along the \(i\)-axis, and we set the diagonal components of the translation gauge field equal to their background values, i.e., \(\mathfrak{e}_{x}^{x}=\mathfrak{e}_{y}^{y}=\mathfrak{e}_{z}^{z}=1\), which encode the existence of the discrete Bravais lattice. This mixed charge-momentum response can be straightforwardly understood as a consequence of the arrangement of non-trivial Chern insulator phases on planes in the foliated BZ. For simplicity, let us first consider planes normal to \(k_{x}\) and denote the locations of the Weyl nodes away from \(k_{x}=0\) on the \(k_{x}\)-axis as \(\pm k_{0}\). As shown in Fig. 1c, the Chern number of \(H(\mathbf{k};k_{x})\) is \(C=-1\) for \(-k_{0}<k_{x}<0\), \(C=1\) for \(0<k_{x}<k_{0}\), and \(C=0\) elsewhere. Consider inserting a magnetic flux \(\Phi\) in the \(yz\)-plane. Let us assume that this flux line preserves translation symmetry along \(\hat{x}\). Then the net response of the system to the magnetic flux is the response of \(H(\mathbf{k},k_{x})\) summed over \(k_{x}\). The trivial phases of \(H(\mathbf{k},k_{x})\) are inert to the flux, but the Chern insulator phases bind charge \(q=C\Phi/\Phi_{0}\) to the flux, where \(\Phi_{0}\) is the quantum of magnetic flux [75]. The charge densities bound to the flux by the \(C=1\) and \(C=-1\) phases of \(H(\mathbf{k},k_{x})\) are opposite, so no net charge is accumulated. However, the crystal momentum-per-length bound to the flux is non-vanishing: \[\int dydz\,\mathcal{J}_{x}^{0}=\frac{e\bar{Q}}{8\pi^{2}}\Phi. \tag{7}\] The dual response of charge bound to a screw dislocation along the \(\hat{x}\)-direction can be understood through similar reasoning. As with the Aharonov-Bohm effect for electrons near a magnetic flux line, electrons encircling a screw dislocation acquire a phase. In the magnetic flux case the phase is proportional to a product of the charge and flux, \(\varphi\propto e\Phi.\) In the translation flux case, the phase is the dot product of the crystal momentum of the electron (translation charge) and the Burgers vector of the dislocation (translation flux), \(\varphi=\mathbf{k}\cdot\mathbf{b}\), where \(\mathbf{b}=(b_{x},0,0)\) in this case. Since the phase acquired upon encircling the screw dislocation is proportional to \(k_{x}\), the \(C=1\) and \(C=-1\) phases of \(H(\mathbf{k},k_{x})\) bind _equal_ charge (in both sign and magnitude) to the defect, yielding no bound crystal momentum density.
However, there is a non-vanishing bound charge-per-length: \[\int dydz\,j^{0}(\mathbf{r})=\frac{eQ_{xx}}{8\pi^{2}}b_{x}. \tag{8}\] The response to threading magnetic flux or screw dislocations along other directions can be interpreted similarly. That is, one can determine the arrangement of the Chern insulator phases perpendicular to the chosen direction \(\hat{n}\) by projecting the Weyl nodes onto that axis in momentum space. Then one can apply the flux insertion method above to determine the response. As an additional example, this model has the interesting characteristic that for \(\hat{n}=\hat{x}\pm\hat{y}\) and \(\hat{n}=\hat{z}\), the response is zero because the Weyl nodes project onto the given axes in opposite-chirality pairs, yielding \(C=0\) for all momenta.

Figure 2: The (a) \(z\)- and (b) \(x\)-normal surface band structures of Eq. 1 along high-symmetry lines with \(m=-0.3\), \(\beta=-0.7\) and \(\gamma=0.5\), using 30 lattice sites in the open direction. The \(z\)-normal surface has a cross of Fermi arcs connecting the projections of the Weyl nodes on both the \(k_{x}\) and \(k_{y}\) axes. The \(x\)-normal surface possesses Fermi arcs between the projections of the Weyl nodes on the \(\Gamma-Y\) line and a pair of Dirac cones on the BZ boundary. Bands containing Fermi arcs are drawn in blue and the surface Dirac cones are indicated with red. The spectrum of the \(y\)-normal surface is identical to the \(x\)-normal surface. (c) The spectrum of \(H(\mathbf{k})\) with open boundary conditions along the \(y\)- and \(z\)-directions, 25 lattice sites along each open direction, \(m=-0.3\), \(\beta=-0.7\), and \(\gamma=0.5\). The zero-energy modes arise from the quadrupole phases of \(H(\mathbf{k};k_{x})\) and are localized to the hinges. The dashed red lines indicate the bounds of the zero-energy hinge modes. The hinge spectrum along the \(y\)-direction is identical.

Since our model \(H(\mathbf{k})\) has Weyl nodes arranged in a quadrupolar pattern, we expect it to exhibit the responses encoded by Eq. 3. Here we verify that \(H(\mathbf{k})\) exhibits the mixed charge-momentum response described above by numerically calculating both the electric charge density bound to screw dislocations and the momentum density bound to magnetic fluxes. We consider a system with periodic boundary conditions in all directions and choose a configuration to preserve translation symmetry along \(\hat{x}\), which is necessary to permit calculation of the crystal momentum density along \(\hat{x}\). As such, we treat the \(x\)-direction in momentum space with \(N_{k}=40\), and use a lattice of dimension \(N_{y}\times N_{z}=40\times 40\) in the \(y\)- and \(z\)-directions. We insert oppositely-signed flux lines, either electromagnetic or translational, along \(\hat{x}\) at sites \((y,z)=(20,10)\) and \((20,30)\). To generate the fluxes we include the magnetic flux \(\Phi\) via a Peierls phase, i.e., multiplying all hopping terms that cross the line connecting the two flux lines by the phase \(\exp{(2\pi i\Phi/\Phi_{0})}\). Because the translation gauge fields couple to momentum rather than charge, the translational magnetic field of a screw dislocation is accounted for by modifying the Peierls phases used for the magnetic flux to be a product of the crystal momentum along \(\hat{x}\) and the translational flux of the dislocation, \(k_{x}\Phi^{T}\)[70, 76, 77].
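To make this flux-threading procedure concrete, the sketch below (an illustration added here, not the authors' code) builds the mixed-space Hamiltonian for a single \(k_{x}\) sector: Eq. (1) is split into an on-site term and hoppings along \(y\) and \(z\), and the \(y\)-hoppings that cross the cut joining the two flux lines acquire either the magnetic Peierls phase \(\exp(2\pi i\Phi/\Phi_{0})\) or the momentum-dependent translational phase \(\exp(ik_{x}\Phi^{T})\). The placement of the cut and the site indexing are illustrative choices.

```python
# Sketch: one k_x sector of the flux-threading setup (magnetic or dislocation flux).
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
G1, G2, G3, G4 = np.kron(sx, sx), np.kron(sx, sy), np.kron(sx, sz), np.kron(sz, s0)

def slab_h(kx, Ny=40, Nz=40, m=-0.3, beta=-0.7, gamma=0.5, phi_em=0.0, phi_T=0.0):
    """H(k_x) on an Ny x Nz torus; phi_em in units of Phi_0, phi_T = screw Burgers vector."""
    onsite = (m + np.cos(kx)) * G3 + m * G4 + 1j * gamma * G1 @ G2
    Ty = np.sin(kx) * G1 / 2j + G4 / 2          # hopping y -> y + 1
    Tz = G2 / 2j + beta * (G3 + G4) / 2         # hopping z -> z + 1
    idx = lambda y, z: 4 * (z * Ny + y)
    H = np.zeros((4 * Ny * Nz, 4 * Ny * Nz), dtype=complex)
    for y in range(Ny):
        for z in range(Nz):
            i = idx(y, z)
            H[i:i + 4, i:i + 4] += onsite
            # y-hopping; bonds crossing the cut between the two flux lines get the phase
            phase = 1.0
            if y == 19 and 10 <= z < 30:
                phase = np.exp(2j * np.pi * phi_em) * np.exp(1j * kx * phi_T)
            j = idx((y + 1) % Ny, z)
            H[j:j + 4, i:i + 4] += phase * Ty
            H[i:i + 4, j:j + 4] += np.conj(phase) * Ty.conj().T
            # z-hopping (no flux attached)
            j = idx(y, (z + 1) % Nz)
            H[j:j + 4, i:i + 4] += Tz
            H[i:i + 4, j:j + 4] += Tz.conj().T
    return H
```

Diagonalizing `slab_h(kx, phi_em=...)` or `slab_h(kx, phi_T=...)` for each \(k_{x}\) and summing the occupied-state charge and \(k_{x}\)-weighted densities near the flux lines should reproduce the linear relations discussed below.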
The modified Peierls phase captures the phase acquired by an electron having crystal momentum \(k_{x}\) encircling the screw dislocation and translating by \(\Phi^{T}\) sites in the \(\hat{x}\)-direction. Using this setup, we calculate the momentum density bound to magnetic flux as a function of the flux as shown in Fig. 3a. Similarly, in Fig. 3d we plot the charge bound to a screw dislocation as a function of the translational magnetic flux. Both plots demonstrate the expected linear relationship between charge/momentum and flux with slope \(e\bar{Q}/8\pi^{2}\), corroborating that the Hamiltonian \(H(\mathbf{k})\) possesses the response predicted by the effective action in Eq. (3). Let us comment about the data points represented by open circles in Fig. 3d. In order to be commensurate with the lattice, the torsional flux \(\Phi^{T}\) should take integer values and is equivalent to the Burgers vector of the screw dislocation. The open circle data points are non-integer translation fluxes that can be inserted into our momentum-dependent Peierls factors, but the interpretation in terms of an elastic lattice defect is less clear. Now that we have confirmed the expected bulk responses we can move on to identify the surface responses. Indeed, as a consequence of our Weyl nodes being higher order, we expect that even regions of the surface BZ that do not harbor gapless surface states may contribute to surface responses. Indeed, since the \(\hat{x}\)- and \(\hat{y}\)-normal surfaces of the model Hamiltonian host a pair of Dirac nodes we expect to find a 2D surface response analogous to a 2D Dirac semimetal. As such, these surfaces possess a mixed charge-momentum response similar to that of the bulk described by the effective action [12]: \[S[A,\mathfrak{e}]=\frac{e\mathcal{P}_{a}}{4\pi}\int d^{4}x\epsilon^{\mu\nu\rho }\,\mathfrak{e}_{\mu}^{a}\partial_{\nu}A_{\rho}. \tag{9}\] Here the response coefficient is the Berry curvature dipole moment, given by [12, 53, 73] \[\mathcal{P}_{a}=\frac{1}{\pi}\int_{\mathrm{BZ}}d^{2}\mathbf{k}k_{a}\mathcal{F }(\mathbf{k}), \tag{10}\] where \(\mathcal{F}\) is the Berry curvature and the integration is restricted to the surface BZ. When the surface has time-reversal and inversion symmetry, this action implies that the system has a charge polarization \(p^{a}=\frac{e}{4\pi}\epsilon^{ab}\mathcal{P}_{b}\)[12] (which resides on the surface of our 3D system). To illustrate a particular response let us focus on the response of the \(y\)-normal surface, for which \(\mathcal{P}_{x}\neq 0\) and \(\mathcal{P}_{z}=0\) (the \(x\)-normal surface has an analogous response by symmetry). The mixed charge-momentum response captured by the effective action Eq. (9) binds momentum density to magnetic flux, \[\mathcal{J}_{x}^{0}=-\frac{e}{4\pi}\mathcal{P}_{x}B_{y}, \tag{11}\] and binds electric charge to dislocations, \[j^{0}=-\frac{e}{4\pi}\mathcal{P}_{x}\left(\partial_{x}\mathfrak{e}_{z}^{x}- \partial_{z}\mathfrak{e}_{x}^{x}\right). \tag{12}\] Here we verify that the surfaces of our model have these responses via direct numerical calculation. We consider a system with open boundary conditions in the \(\hat{y}\)-direction and periodic boundary conditions in the \(\hat{x}\)- and \(\hat{z}\)-directions. We treat the \(\hat{x}\)-direction in momentum space and calculate the momentum density bound to magnetic flux (using \(N_{k}=40\) momentum points), and the charge bound to dislocations (using \(N_{k}=100\) momentum points). 
The other two directions we leave in position space and use a lattice of dimension \(N_{y}\times N_{z}=30\times 30\) and \(N_{y}\times N_{z}=40\times 40\) for each of the calculations respectively. To avoid difficulties arising from the divergent Berry curvature distribution of Dirac nodes, we also include an inversion-breaking perturbation \(H^{\prime}=-\nu\frac{i}{2}\Gamma_{2}\Gamma_{3}\) with \(\nu=0.5\). To calculate the \(k_{x}\) momentum density on a \(\hat{y}\)-normal surface it is necessary to maintain translation symmetry along \(\hat{x}\). To do so we introduce the magnetic field via two strips of magnetic flux lines extending in the \(x\)-direction, each with opposite field orientations \(\pm B_{y}\hat{y}\). For the \(N_{y}\times N_{z}=40\times 40\) lattice these strips are located at \(z_{1}=10\) and \(z_{2}=30\), while for the \(N_{y}\times N_{z}=30\times 30\) lattice they are located at \(z_{1}=8\) and \(z_{2}=23\). The geometry of this magnetic field configuration is depicted in Fig. 3c. We include dislocation flux, \(\Phi_{D}\), in an analogous translation symmetry-preserving manner by using the generalized, momentum-dependent Peierls factors mentioned above for the bulk response. We can think of this translation magnetic field as a non-vanishing strain configuration between the \(z_{1}\) and \(z_{2}\) planes that immediately relaxes back to the unstrained lattice outside this interval. This configuration induces opposite dislocation flux densities at the boundaries of the strained region. We schematically depict this geometry in Fig. 3f, where the red (blue) plaquettes contain positive (negative) dislocation fluxes, and the black arrows indicate hopping terms to which we apply the momentum-dependent Peierls phases. Because we are interested in a surface response, we must use a layer-resolved Berry curvature to calculate the response coefficient \(\mathcal{P}_{\alpha}\) for just the top (or bottom) surface of the system. The layer-resolved Berry curvature can be obtained by combining the projector onto the occupied subspace, defined as \[P(\mathbf{k})=\sum_{\epsilon_{i}(\mathbf{k})<0}\left|u_{i}(\mathbf{k})\right\rangle \left\langle u_{i}(\mathbf{k})\right| \tag{13}\] where \(H(\mathbf{k})\left|u_{i}(\mathbf{k})\right\rangle=\epsilon_{i}(\mathbf{k}) \left|u_{i}(\mathbf{k})\right\rangle\), and the projector onto the \(n^{\text{th}}\) layer of the lattice, \(P_{n}\), via the formula [78] \[\mathcal{F}_{n}^{ab}(\mathbf{k})=\text{Tr}\left[P(\mathbf{k})\partial_{k_{i}}P (\mathbf{k})P_{n}\partial_{k_{j}}P(\mathbf{k})\right]. \tag{14}\] Using this formalism, the surface response coefficient is given by the momentum-space dipole moment of the layer-resolved Berry curvature summed over half the sites in the open direction: \[\mathcal{P}_{x}=\frac{1}{\pi}\sum_{n=1}^{N_{y}/2}\int_{\text{BZ}}dk_{x}dk_{z} \,k_{x}\mathcal{F}_{n}^{xz}(k_{x},k_{z}). \tag{15}\] The momentum and charge bound to the surface by dislocations and magnetic flux are calculated in a similar manner, i.e., layer contributions are summed over half of the sites in the open direction, \[j^{0}=\sum_{n=1}^{N_{y}/2}j^{0}(n),\quad\mathcal{J}_{x}^{0}=\sum_{n=1}^{N_{y}/ 2}\mathcal{J}_{x}^{0}(n). \tag{16}\] After carrying out these calculations, we show the \(x\)-momentum density bound to a strip of magnetic flux lines as a function of the magnetic flux in Fig. 3b and plot the charge density bound to a strip of dislocations as a function of translation flux in Fig. 3e. 
The momentum density bound to magnetic flux is linear in the magnetic flux with the correct proportionality constant \(e\mathcal{P}_{x}/4\pi\). Here the value of \(\mathcal{P}_{x}\) is determined by directly calculating Eq. 15, which determines the slopes of the dashed lines in Figs. 3b and 3e. For a small dislocation flux value, \(\Phi^{T}=1\), the charge density bound to dislocations matches the prediction of the effective action, but this relation becomes non-linear at higher values of \(\Phi^{T}\) because of stronger lattice effects. As mentioned above for the bulk calculations, the open circles in Fig. 3e represent non-integral dislocation fluxes that are mathematically obtainable via our momentum-dependent Peierls factors, though their physical interpretation as a lattice defect is not clear.

Figure 3: (a) The crystal momentum bound to magnetic flux by Eq. (1) on a lattice of dimension \(N_{y}\times N_{z}=40\times 40\), and \(N_{k_{x}}=40\). (b) The surface crystal momentum density \(\mathcal{J}_{x}^{0}\) bound to magnetic flux along the \(y\)-direction for a system size of \(N_{y}\times N_{z}=30\times 30\) and \(N_{k_{x}}=40\). (c) The flux geometry we use to calculate the momentum and charge density response on \(x\)- and \(y\)-normal surfaces. Red (blue) coloration indicates either magnetic or dislocation flux pointing along the \(+y\)-direction (\(-y\)-direction). (d) The electric charge bound to a screw dislocation with \(N_{y}\times N_{z}=40\times 40\), and \(N_{k_{x}}=40\). Empty and filled circles indicate fractional and integer torsional fluxes. (e) The surface electric charge density bound to dislocation flux for a system size of \(N_{y}\times N_{z}=40\times 40\) and \(N_{k_{x}}=100\). Here \(\Phi_{\text{tot}}^{D}\) is the total dislocation flux integrated along the \(x\)-direction, corresponding to the difference in system size along the \(x\)-direction between the strained and unstrained regions in units of the unstrained lattice constant. Empty and filled circles indicate fractional and integer dislocation fluxes. (f) The dislocation flux geometry used to calculate the surface charge response. The red (blue) plaquettes correspond to positive (negative) dislocation flux and the black arrows indicate the hoppings that acquire a momentum-dependent Peierls phase due to the strain of the lattice. All results presented here are calculated using the parameters \(m=\beta=-0.5\) and \(\gamma=0.5\). The red dotted lines in (a), (b), (d), and (e) each have a slope of one and indicate the analytic result.

## IV Momentum-weighted quadrupole moment As one final physical phenomenon associated with our system of a quadrupole of higher order Weyl nodes, let us consider what is happening at the hinges. Because some regions of momentum-space harbor higher order topology in our model, we expect to find hinge modes and/or fractional charge per unit length along the hinge. Indeed, the hinge phenomena in our system are associated with the momentum planes that harbor a 2D QI. At half-filling, the sign of the electric quadrupole moment of these planes is ambiguous when the symmetries protecting the topology are enforced, i.e., the value \(q_{xy}=e/2\) is equivalent to \(q_{xy}=-e/2\). In the case of the QI phases of \(H(\mathbf{k};k_{x})\) and \(H(\mathbf{k};k_{y})\), the relevant quantizing symmetries are the pair of mirror times time-reversal symmetries.
To choose the sign of the quadrupole moment we want to weakly break both of these symmetries, but preserve the product, i.e., preserve \(C_{2}\) symmetry so that no electric dipole moment is allowed. Operationally, for a system with open boundaries, the symmetry breaking provides a prescription of how to fill the low-energy hinge states that is consistent with the sign of the quadrupole moment. Interestingly, our model has two distinct possible choices of symmetry breaking that we discuss below. One possible choice of symmetry breaking is the perturbations \(H^{\prime}(\mathbf{k})=\delta\sin(k_{x/y})\Gamma_{0}\), which accomplish the required symmetry breaking for \(H(\mathbf{k};k_{x/y})\) respectively. Since \(\Gamma_{0}\) is odd under time-reversal, this term is time-reversal invariant, but it breaks both mirror symmetries since \(\sin(k_{x/y})\) is odd under mirror \(M_{x/y}\). As a consequence, this perturbation endows the positive- and negative-momentum intervals of the QI phases with the same sign of quadrupole moment. Hence the positive and negative momentum intervals add together to yield a finite bulk electric quadrupole moment (per \(xz\) cross sectional area), \[\begin{split} Q^{\text{bulk}}_{xz}&=L_{y}\int \frac{dk_{y}}{2\pi}q_{xz}(k_{y})\\ &=\pm\frac{eL_{y}}{2\pi}\left(k_{\text{Dirac}}-k_{\text{Weyl}} \right),\end{split} \tag{17}\] where the sign is determined by the sign of \(\delta\). The analogous quantity in the \(yz\) plane is defined as \[\begin{split} Q^{\text{bulk}}_{yz}&=L_{x}\int \frac{dk_{x}}{2\pi}q_{yz}(k_{x})\\ &=\mp\frac{eL_{x}}{2\pi}\left(k_{\text{Dirac}}-k_{\text{Weyl}} \right).\end{split} \tag{18}\] The magnitude of the bulk electric quadrupole is determined solely by the separation between the projections onto the hinge BZ of the bulk Weyl nodes and surface Dirac nodes, \(k_{\text{Weyl}}\) and \(k_{\text{Dirac}}\), as these control the portion of the BZ that is occupied by the QI phase. Next we consider a second possible symmetry breaking perturbation \(H^{\prime\prime}(\mathbf{k})=\delta\Gamma_{0}.\) This term preserves the mirror symmetries but breaks time-reversal symmetry. It has the effect of endowing the positive- and negative-momentum intervals of QI phases with _opposite_ quadrupole moments. The resulting bulk electric quadrupole moment vanishes, since it receives equal and opposite contributions from each momentum interval. Instead the system realizes quadrupole moments of _crystal momentum_ density that have not previously been considered, \[\begin{split} K^{y}_{xz}&=\frac{L_{y}}{e}\int \frac{dk_{y}}{2\pi}k_{y}q_{xz}(k_{y}),\\ K^{x}_{yz}&=\frac{L_{x}}{e}\int\frac{dk_{x}}{2\pi }k_{x}q_{yz}(k_{x}).\end{split} \tag{19}\] The bulk crystal momentum quadrupole moment density manifests as momentum density bound to hinges, as shown in Fig. 4, where the momentum points along the hinges. Similar to the bulk electric quadrupole moment, the magnitude of the bulk crystal-momentum quadrupole moment is determined by the locations of the bulk Weyl and surface Dirac nodes, \[K^{y}_{xz}=\pm\frac{L_{y}}{4\pi}\left(k_{\text{Dirac}}^{2}-k_{\text{Weyl}}^{2 }\right), \tag{20}\] where the overall sign is again determined by the sign of \(\delta\). 
It is interesting to note that this quantity can be concisely expressed in terms of the Weyl quadrupole moment \(\bar{Q}\) and the surface Dirac dipole moment \(\mathcal{P}_{x}\), \[K^{y}_{xz}=\pm\frac{L_{y}}{\pi}\left(\mathcal{P}_{y}^{2}-2\bar{Q}\right), \tag{21}\] and therefore acts as a link between the bulk and surface mixed crystalline-electromagnetic responses. This is analogous to the response of higher order Weyl dipole systems, in which the extent of the Fermi arcs on the surface and the arcs on the hinge must satisfy a sum rule [23]. We note that the bulk quadrupole moment of crystal momentum density is well-defined only when the bulk electric quadrupole moment vanishes, as its value can otherwise be arbitrarily changed by shifts of the BZ origin \(\mathbf{k}\rightarrow\mathbf{k}+\mathbf{k}^{\prime}\). This is exactly what happens when we choose the \(H^{\prime\prime}\) perturbation since the total bulk quadrupole moment vanishes. The invariance of the bulk quadrupole moment of crystal momentum under shifts of the BZ can also be seen from the definition in Eq. (21). The Weyl quadrupole moments \(Q_{xx}\) and \(Q_{yy}\) are invariant under such shifts because the \(C_{2z}\) symmetry of the Hamiltonian forces the Weyl dipole moments in the \(k_{x}\)-\(k_{y}\) plane to vanish, and the surface Dirac dipole moments \(\mathcal{P}_{x/y}\) are invariant because the product of the \(M_{1,\pm 1}\) and \(C_{4}M_{z}\) symmetries form a surface \(M_{z}\) symmetry that forces the surface Chern number to vanish. ## V Conclusion In this work we made the first steps towards understanding the interplay between higher order topology and mixed crystalline-electromagnetic responses. By constructing and analyzing an explicit model, we showed that elevating a quadrupole arrangement of Weyl nodes, which is known to exhibit a bulk mixed crystalline-electromagnetic response, to higher order Weyl nodes produces an additional mixed crystalline-electromagnetic _surface_ response. We further demonstrated that the surface response originates from the higher order QI phases of the Hamiltonian in the foliated BZ. We additionally found that adding symmetry breaking perturbations can produce bulk quadrupole moments of either electric charge or crystal momentum, depending on the particular perturbation chosen. These results motivate a number of different directions for future research. Of primary importance is identifying promising material platforms in which these mixed crystalline-electromagnetic responses can be observed. The response we predict in this work requires the system to possess both a bulk Weyl quadrupole moment and a surface Dirac dipole moment. As for the crystal symmetry ingredients, for the Weyl quadrupole moment to be well defined, the Weyl dipole moments in the plane of the quadrupole must vanish, which can be guaranteed by mirror symmetry or a set of \(C_{2}\) symmetries (time reversal symmetry would also suffice, although that would prevent observation of the momentum quadrupole). The surface Dirac dipole moment similarly requires the surface Chern number to be zero, which can be enforced by the presence of a surface mirror symmetry or a time reversal symmetry (as long as the bulk is not a 3D topological insulator). These symmetries, along with the possible breaking of TRS either by magnetic ordering or an applied magnetic field, are necessary to observe the mixed crystalline-electromagnetic response. 
Combining these symmetry requirements with the tools provided by topological quantum chemistry may provide a route to identifying materials that host this mixed crystalline-electromagnetic response [79; 80]. There are a number of systems that are likely to host similar types of mixed responses and warrant further study. Higher order analogs of two-dimensional Dirac quadrupole semimetals and three-dimensional nodal line semimetals [53] are particularly promising, as are higher order nodal superconductors [29; 30; 31; 32] and higher order non-Hermitian TSMs [33; 34; 35; 36]. Furthermore, there are promising metamaterial platforms in which one could generate our model. Both Weyl points [81] and higher order quadrupole topology [82; 83; 84] have each been demonstrated separately in experiment, so combining the two is plausibly achievable. In these systems it may even be possible to extract information about the crystal momentum, as was recently accomplished in a topoelectric circuit experiment studying higher rank surface states [85]. Interestingly, our model also presents a platform in which to study quantum oscillations, as the combination of surface Fermi arcs and zero-energy hinge arcs may provide unusual circuits for electrons to traverse [86]. The properties of these systems in strong magnetic fields may also be a fruitful line of pursuit as the zeroth Landau level of the bulk Weyl nodes must coordinate with the zeroth Landau level of the surface Dirac fermions. We leave these studies to future work. ## Acknowledgements We thank Oleg Dubinkin and Julian May-Mann for helpful discussions. We acknowledge the NSF-Supported UIUC REU program under award numbers PHY-1950744 and PHY-2244433. TLH and MH thank ARO MURI W911NF2020166 for support.
2308.14843
Robust Activity Recognition for Adaptive Worker-Robot Interaction using Transfer Learning
Human activity recognition (HAR) using machine learning has shown tremendous promise in detecting construction workers' activities. HAR has many applications in human-robot interaction research to enable robots' understanding of human counterparts' activities. However, many existing HAR approaches lack robustness, generalizability, and adaptability. This paper proposes a transfer learning methodology for activity recognition of construction workers that requires orders of magnitude less data and compute time for comparable or better classification accuracy. The developed algorithm transfers features from a model pre-trained by the original authors and fine-tunes them for the downstream task of activity recognition in construction. The model was pre-trained on Kinetics-400, a large-scale video-based human activity recognition dataset with 400 distinct classes. The model was fine-tuned and tested using videos captured from manual material handling (MMH) activities found on YouTube. Results indicate that the fine-tuned model can recognize distinct MMH tasks in a robust and adaptive manner which is crucial for the widespread deployment of collaborative robots in construction.
Farid Shahnavaz, Riley Tavassoli, Reza Akhavian
2023-08-28T19:03:46Z
http://arxiv.org/abs/2308.14843v1
**Robust Activity Recognition for Adaptive Worker-Robot Interaction using Transfer Learning** ## Abstract Human activity recognition (HAR) using machine learning has shown tremendous promise in detecting construction workers' activities. HAR has many applications in human-robot interaction research to enable robots' understanding of human counterparts' activities. However, many existing HAR approaches lack robustness, generalizability, and adaptability. This paper proposes a transfer learning methodology for activity recognition of construction workers that requires orders of magnitude less data and compute time for comparable or better classification accuracy. The developed algorithm transfers features from a model pre-trained by the original authors and fine-tunes them for the downstream task of activity recognition in construction. The model was pre-trained on Kinetics-400, a large-scale video-based human activity recognition dataset with 400 distinct classes. The model was fine-tuned and tested using videos captured from manual material handling (MMH) activities found on YouTube. Results indicate that the fine-tuned model can recognize distinct MMH tasks in a robust and adaptive manner which is crucial for the widespread deployment of collaborative robots in construction. ## Introduction Human activity recognition (HAR) has gained significant traction in recent years due to its potential applications in various fields, including healthcare, sports, security, and construction. In the construction industry, the accurate and real-time recognition of workers' activities is imperative for ensuring safety, improving productivity, and optimizing resource allocation. HAR can be achieved using wearable sensors or vision-based methods, with both approaches showing promising results in human-robot interaction (HRI) research (Liu et al., 2022; Zhang et al., 2017). Researchers have employed multiple tools, including wearable sensors, cameras, and other types of sensing devices, to detect human activities for HAR in various domains such as healthcare, sports, and construction. Vision-based methods, which involve analyzing visual data from cameras to detect and identify human activities, are one of the most popular approaches for HAR. In the construction domain, Luo et al. developed a vision-based system that uses a single camera to monitor workers' activities and detect unsafe behavior (Luo et al., 2018). Similarly, Escorcia et al. proposed a system that uses a combination of RGB and depth sensors to recognize construction workers' activities (Escorcia et al., 2012). Another common approach for HAR is using wearable sensors, such as accelerometers and gyroscopes, to capture human movements. Such sensors have been used extensively in construction to monitor workers' activities and detect unsafe behavior. For instance, Kim and Cho developed a wearable sensor-based system that uses machine learning to recognize construction workers' activities and detect unsafe behavior (K. Kim and Cho, 2020). However, existing approaches to HAR often fail to capture the heterogeneity of activity types, environments, and subjects. This is because the models are often trained on limited datasets and may not be able to perform well in different environments or with different subjects. This has resulted in machine learning models that lack robustness, adaptability, generalizability, and reconfigurability when applied in conditions different from those in which they were trained (Zhang et al., 2022). 
This limitation poses a significant challenge for the deployment of collaborative robots in construction, as they require robust and adaptive HAR models. Transfer learning (TL) is a promising approach that can address this challenge, where knowledge gained from a source domain can be transferred to a target domain to improve performance. Within the construction research domain, previous studies have successfully used TL to detect objects such as guardrails, hard hats, and equipment (H. Kim et al., 2018; Kolar et al., 2018; Shen et al., 2021). Nevertheless, the use of TL for HAR and specifically toward deploying it in worker-robot interaction applications has never been investigated before. This paper presents a TL methodology for activity recognition of construction workers interacting with collaborative robots. To achieve TL in video-based activity recognition, we used X-CLIP (Expanding Contrastive Language-Image Pre-training) (Ni et al., 2022), a model developed by Microsoft that extends the functionality of the original CLIP model by OpenAI (Radford et al., 2021). X-CLIP was specifically designed for video recognition tasks and has demonstrated excellent performance in various video and text-based tasks. By leveraging its powerful multimodal learning capabilities, X-CLIP is expected to provide superior performance in the activity recognition task for construction workers interacting with collaborative robots. In this paper, first, we utilize a pre-trained model from a large-scale video-based HAR dataset, Kinetics-400, which has not been used before in the context of construction activity recognition. Second, we fine-tune the pre-trained model on a small number of construction-specific activities, which require minimal annotation efforts and computational resources, making it more feasible for real-world deployment. Third, we demonstrate the effectiveness of our approach in recognizing manual material handling activities in construction, which is crucial for enabling the deployment of collaborative robots in this domain. ## Methodology The proposed methodology for fine-tuning a general activity recognition model for MMH activities using X-CLIP is shown in Figure 1 and described below. **Data Collection**. The first step in developing the model is to collect video data of construction MMH activities involving workers. These videos will be used for training and testing the model. The videos should cover a wide range of scenarios, such as different workers, different types of material, and different environments. This resulted in a diverse dataset of videos covering various scenarios, which is crucial for training and testing the model to develop a construction MMH activity recognition system. We sourced the videos from YouTube, selecting 65 videos covering four distinct MMH tasks. The videos include scenarios with and without collaborative robots. Our preliminary analysis indicated that videos with and without robot collaboration do not have a significant impact on the classification accuracy. To label the videos, we manually annotated each video with the corresponding activity. We defined a set of four MMH activities that are common in construction sites: carrying a load, loading a load, pushing a load, and pulling a load. The X-CLIP model we adopted and advanced further requires 32 frames from the video as input, so we randomly sampled 32 frames from a defined start and end point of the activity in the video. 
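The frame-sampling step can be sketched in a few lines; the clip path, the annotated boundaries, and the use of OpenCV below are illustrative assumptions rather than the authors' released pipeline.

```python
import cv2
import numpy as np

def sample_frames(video_path, start_s, end_s, num_frames=32, seed=0):
    """Randomly sample `num_frames` frames between the annotated start/end of an
    activity; the segment is kept <= 16 s so sampled frames are <= 0.5 s apart."""
    rng = np.random.default_rng(seed)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    start_f, end_f = int(start_s * fps), int(end_s * fps)
    idxs = np.sort(rng.choice(np.arange(start_f, end_f), num_frames, replace=False))
    frames = []
    for i in idxs:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return np.stack(frames)  # (num_frames, H, W, 3) RGB array

# Hypothetical annotation: a "carrying a load" segment from 12 s to 25 s.
# clip = sample_frames("carrying_example.mp4", start_s=12.0, end_s=25.0)
```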
The inclusion of start and end points of activities in videos also allowed us to use the same video for multiple examples for our model, increasing the size of the dataset. The videos had various resolutions and aspect ratios, but on average were 480x360 pixels and had a frame rate of 30 frames per second. We selected the specific portion of the videos where the activity was happening and processed the video into an array of video frames, sampling 32 frames from each video. Because of this sampling technique, we ensure the portion of video selected is no more than 16 seconds long so that the resulting selected frames are no more than half a second apart in real-time. This ensures that the activity is accurately portrayed and that the frames can be interpreted by the model as sequential. **Preprocessing.** The model processes images only of size 224\(\times\)224, so before being fed into the model, the images are cropped to appropriate sizes. During training, the model also augments each frame, resizing images, flipping them, and jittering the color values. This is done to help the model generalize better during training. During validation or inference, the model only crops the video to 224x224 and normalizes color values. Figure 1: **An overview of the developed methodology (Image courtesy of Construction Robotics (Construction Robotics 2022) and used with permission).** **Training the X-CLIP model.** X-CLIP is a state-of-the-art multimodal model that was developed by Microsoft and is trained on a large corpus of text and video pairs from the internet (Ni et al., 2022). It combines the power of natural language processing (NLP) and computer vision to learn joint representations of text and videos. During training, X-CLIP learns to associate a text description with a video, and vice versa, by optimizing a contrastive loss function that maximizes the similarity between positive pairs of text and videos and minimizes the similarity between negative pairs. Figure 2 shows a schematic overview of the model architecture. The X-CLIP model operates on the principle of contrasting label embeddings and video embeddings using a similarity score. Embeddings are n-dimensional vectors that contain the semantic information necessary to sufficiently capture an input. To attain these embeddings, images and text are passed through their respective encoders and the output is the associated embedding vector. A similarity score indicates the likelihood of a label corresponding to a particular video, with higher similarity scores indicating a higher probability of correspondence. The model employs a combination of encoders and transformers to embed both video and text inputs (Ni et al., 2022). In essence, the model compares video frame embeddings to a set of label embeddings, and uses a transformer to aggregate predictions for each video frame, ultimately outputting the label that most likely corresponds to the video. After collecting and preprocessing the data, we used the X-CLIP model for fine-tuning the activity recognition model on videos of MMH tasks. **Fine-tuning the X-CLIP model**. Fine-tuning helps the model learn the specific features of the construction activities and improve its accuracy in recognizing these activities. To adapt X-CLIP for MMH as opposed to generic activity recognition, we fine-tuned the model on our dataset of construction MMH videos. 
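Before any fine-tuning, this contrastive mechanism can already be exercised zero-shot; a minimal sketch is shown below, assuming the Hugging Face transformers port of X-CLIP (the checkpoint name and API are tooling assumptions, since the paper does not state its implementation).

```python
import torch
from transformers import AutoProcessor, AutoModel

labels = ["carrying a load", "loading a load", "pushing a load", "pulling a load"]
processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32")
model = AutoModel.from_pretrained("microsoft/xclip-base-patch32")

def classify(frames):
    """`frames`: 32 RGB frames of one clip (e.g. from the sampling sketch above).
    Returns label probabilities derived from the video-text similarity scores."""
    inputs = processor(text=labels, videos=list(frames),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_video holds the scaled similarities between the aggregated video
    # embedding and each label embedding; softmax turns them into probabilities.
    probs = out.logits_per_video.softmax(dim=-1).squeeze(0)
    return dict(zip(labels, probs.tolist()))
```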
Fine-tuning involves updating the weights of the pre-trained model marginally, using our MMH training data to learn better representations that are specific to our task. The innovation of this methodology lies in the fact that for any downstream HAR task, a small team can leverage the power of the pretrained model for any specific set of activities. We utilized the pre-trained model to reduce training time and computational costs as well as to leverage the generic activity recognition knowledge the model learns in pre-training. The pre-trained model generalizes to classes it did not see during training due to the model learning a label-agnostic embedding space. This allows us to create our own dataset of videos with labels specific to our use case. For this paper, we analyze common MMH tasks. The model is fine-tuned by using a very low learning rate of \(\alpha=\mathit{10^{-6}}\) and training the model on very few examples (~8 videos per class) for 5 epochs with a batch size of 8. Because the model is pre-trained, it extracts relevant visual and semantic information from the beginning of the fine-tuning process and only requires small updates to the weights to learn a better representation of both the label and accompanying videos without the need for many examples of new activities. Figure 2: X-CLIP architecture (Ni et al., 2022). **Evaluation**. The model's performance was evaluated using a held-out split of our dataset of construction MMH activities, consisting of 60% of the total data. We report accuracy, precision, and recall metrics for each experiment, and train using cross-entropy loss (Haurilet et al., 2019). The dataset includes activities that the model was not pre-trained on to assess the model's ability to recognize new activities after fine-tuning. The pre-trained model was trained on Kinetics-400, a popular human activity recognition benchmark dataset with 400 distinct activities. Because of the model's use of label embeddings rather than strict activity labels, the MMH activities we selected are not exactly present in the Kinetics-400 dataset, but more general labels are present in the dataset that capture the general idea of the activities we tasked the model to classify. An example of this is that in Kinetics-400, there are labels such as deadlifting or lifting a hat, while we used the general label of lifting materials. ## Results and Discussion The model was fine-tuned to recognize four distinct MMH tasks: carrying a load, loading a load, pushing a load, and pulling a load. We labeled each video with the activity that was being performed and compared the predicted activity to the ground truth. The overall accuracy of the model (i.e., the number of correct predictions divided by the total predictions on the test set) in recognizing the four activities improved from 46% to 69% (Table 1) after fine-tuning the model for 5 epochs with a learning rate of \(\alpha=\mathit{10^{-6}}\). In Table 1, we show key model metrics to evaluate the performance of the model before and after fine-tuning. Since the activities of pushing and pulling are mistaken for one another, we also tested the model when combining the activities under one label, "pushing or pulling," and observed an increase in accuracy from 69% to 90%. 
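As a rough illustration of the recipe above (learning rate \(10^{-6}\), 5 epochs, batch size 8, cross-entropy over the label similarities), a fine-tuning loop could look like the sketch below; the data loader, optimizer choice, and batched processor call are our assumptions, since the paper does not include code.

```python
import torch
from torch.optim import AdamW

def finetune(model, processor, labels, train_loader, epochs=5, lr=1e-6):
    """`train_loader` is assumed to yield (clips, targets) batches of size 8, where
    `clips` is a list of videos (each a list of 32 RGB frames) and `targets` are
    integer class indices aligned with `labels`; `model`/`processor` as above."""
    optimizer = AdamW(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for epoch in range(epochs):
        for clips, targets in train_loader:
            inputs = processor(text=labels, videos=clips,
                               return_tensors="pt", padding=True)
            logits = model(**inputs).logits_per_video   # (batch, num_labels)
            loss = loss_fn(logits, targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch + 1}: last-batch loss {loss.item():.3f}")
    return model
```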
These results show three things: first that X-CLIP has a robust understanding of human activities and can achieve 46% accuracy on MMH activities it was not pre-trained on, second that fine-tuning the model for a desired activity increases classification accuracy with very few training examples, and third that though it was pre-trained on 400 activities, there are blind spots and activities the model will perform poorly on after fine-tuning (e.g. pulling). \begin{table} \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & Precision & Recall & F1 Score & Accuracy \\ \hline Pre-trained & 0.37 & 0.46 & 0.41 & 46\% \\ \hline Fine-tuned & 0.74 & 0.69 & 0.72 & 69\% \\ \hline Fine-tuned (pushing and pulling combined) & 0.91 & 0.90 & 0.90 & 90\% \\ \hline \end{tabular} \end{table} Table 1: Model metrics before and after fine-tuning. These results demonstrate the effectiveness of our TL methodology using X-CLIP fine-tuning for activity recognition in the construction industry. The ability to recognize MMH activities in real-time can enable the deployment of collaborative robots to assist workers in such activities, leading to increased efficiency and safety in construction sites. X-CLIP was pretrained on Kinetics-400, and of the 400 activities, there are a few labels similar to our own. Namely, deadlifting, lifting a hat, carrying weight, pushing a cart, and pushing a wheelbarrow. While these labels are not directly related to the MMH activities we analyze in this paper, it can be seen why the model performs better at classifying activities similar to these, since that is what the original model was trained to do on a very large dataset. The only labels related to pulling in Kinetics-400 are "pull ups" and "pulling espresso shot," both of which use the word pull in a different context than we use it in for MMH tasks. Because it was not trained to understand this activity, fine-tuning is not enough to increase the accuracy of pulling classification. An important point to make here is that the textual content of the labels is significant and can affect model accuracy by up to 21% (Table 2). X-CLIP works by learning an embedding vector that contains the meaning of the label and learning a different embedding vector for the video content. These vectors are compared by means of a cosine similarity score, with the model trained to maximize the similarity between the video embedding and the label embedding corresponding to the activity in the video. This means that different labels give different accuracies. In Table 2, we show a few different combinations of labels and the resulting accuracies obtained after fine-tuning the model on the given labels and corresponding videos. For the baseline, we see much lower accuracy, which is related to the model not being familiar with the meaning of the word "load" as it is paired with the accompanying videos. X-CLIP utilizes CLIP's original text encoder which was trained on a vast corpus of image-text pairs to which we do not have access, but we can assume from the millions of pairs, the text encoder built up a sufficient representation of the English language, but there may be gaps such as using "load" as a noun, a more niche meaning of the word. 
Because of this, it is a poor choice of words for our model's labels. \begin{table} \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & Label 1 & Label 2 & Label 3 & Accuracy \\ \hline Baseline & Lifting a load & Carrying a load & Pushing or pulling a load & 72\% \\ \hline Variation 1 & Lifting a box & Carrying a box & Pushing or pulling a box & 82\% \\ \hline Variation 2 & Lifting & Carrying & Pushing or pulling & 69\% \\ \hline Variation 3 & A photo of someone lifting a box up & A photo of someone carrying materials & A photo of someone pushing or pulling a box & **90\%** \\ \hline \end{tabular} \end{table} Table 2: Model accuracy as a function of labels. Using the word "box" in place of "load" is a more likely combination of words to be found in the original CLIP dataset, and the model performs better, despite the word "box" not appearing in the Kinetics-400 dataset. The model extracts roughly the same visual features independently of the labels provided; it is only that the label embeddings are contingent upon the quality of data in CLIP's original image-text pair dataset. The accuracy of 90% achieved in our study for activity recognition of construction workers interacting with collaborative robots is comparable to some previous studies in the field of construction activity recognition using video data. For instance, a review paper by Sherafat et al. reported accuracies ranging from 54% to 96% for different construction activities using video-based methods (Sherafat et al., 2020). Nevertheless, previous studies did not employ TL and as such, required significantly more data and computational resources for training and used less generalizable model architectures. Fine-tuning large, generalizable models enables teams with small budgets to leverage previous investments in large models on a bespoke downstream task. It is also important to note that previous studies primarily focused on recognizing routine daily human activities, which are often more distinguishable from each other than MMH tasks in construction. In contrast, our study targets the recognition of MMH tasks, which are highly similar and require a more nuanced approach to achieve accurate recognition. Despite this added complexity, our TL methodology achieved a promising result of 90%, highlighting its potential for real-world applications in the construction industry. ## Conclusion In this study, we presented a TL methodology for activity recognition of construction workers interacting with collaborative robots using the X-CLIP model. The proposed methodology fine-tunes a pre-trained model to accurately classify activities in a new environment and was tested using videos captured from construction MMH activities involving workers and robots. The results showed that the system could successfully recognize distinct MMH tasks, even though such activities were absent in the dataset the model was pre-trained on. This indicates the potential of the proposed methodology in recognizing a wide range of construction activities with high accuracy, which is imperative for the widespread deployment of collaborative robots in construction. The proposed TL methodology utilizing the X-CLIP model provides a promising approach for activity recognition in the construction industry. However, there are limitations that our team is working to address in future work. 
Further research is needed to evaluate the effectiveness of the proposed methodology in different construction environments and to assess its ability to adapt to new activity types and subjects. Future research can also explore ways of improving the accuracy and robustness of activity recognition models in construction, including using more diverse and larger datasets, more sophisticated feature extraction techniques, and more advanced machine learning algorithms. Finally, different data modalities can help in distinguishing between activities that share similarities in terms of worker body movement. ## Acknowledgment The presented work has been supported by the U.S. National Science Foundation (NSF) CAREER Award through the grant # CMMI 2047138. The authors gratefully acknowledge the support from the NSF. Any opinions, findings, conclusions, and recommendations expressed in this paper are those of the authors and do not necessarily represent those of the NSF.
2307.15300
Pairs Trading: An Optimal Selling Rule with Constraints
The focus of this paper is on identifying the most effective selling strategy for pairs trading of stocks. In pairs trading, a long position is held in one stock while a short position is held in another. The goal is to determine the optimal time to sell the long position and repurchase the short position in order to close the pairs position. The paper presents an optimal pairs-trading selling rule with trading constraints. In particular, the underlying stock prices evolve according to a two dimensional geometric Brownian motion and the trading permission process is given in terms of a two-state {trading allowed, trading not allowed} Markov chain. It is shown that the optimal policy can be determined by a threshold curve which is obtained by solving the associated HJB equations (quasi-variational inequalities). A closed form solution is obtained. A verification theorem is provided. Numerical experiments are also reported to demonstrate the optimal policies and value functions.
Ruyi Liu, Jingzhi Tie, Zhen Wu, Qing Zhang
2023-07-28T04:35:00Z
http://arxiv.org/abs/2307.15300v1
# Pairs Trading: An Optimal Selling Rule with Constraints ###### Abstract The focus of this paper is on identifying the most effective selling strategy for pairs trading of stocks. In pairs trading, a long position is held in one stock while a short position is held in another. The goal is to determine the optimal time to sell the long position and repurchase the short position in order to close the pairs position. The paper presents an optimal pairs-trading selling rule with trading constraints. In particular, the underlying stock prices evolve according to a two dimensional geometric Brownian motion and the trading permission process is given in terms of a two-state {trading allowed, trading not allowed} Markov chain. It is shown that the optimal policy can be determined by a threshold curve which is obtained by solving the associated HJB equations (quasi-variational inequalities). A closed form solution is obtained. A verification theorem is provided. Numerical experiments are also reported to demonstrate the optimal policies and value functions. **Key words:** pairs trading, trading constraints, markov chain, quasi-variational inequalities ## 1 Introduction Pairs trading involves the simultaneous trading of two stocks as a pair. Typically, a pairs position involves taking a long position in one stock and a short position in the other. The focus of this paper is on determining the optimal timing for closing an existing pairs position. Specifically, we aim to identify the most effective strategy for folding the position and exiting the market. Pairs trading is closely related to the timing of the optimal investments studied in McDonald and Siegel [9]. In particular, they considered the optimal timing of investment in an irreversible project. Two main variables in their model are the value of the project and the cost of investing. They demonstrated one should defer the investment until the present value of the benefits from the project exceed the investment cost by a certain margin. They transfer the control problem in selecting the optimal trading time to the one of determining the optimal value in space. Further studies along this line were carried out by Hu and Oksendal [7] to specify precise optimality conditions and to provide new proof of the following variational inequalities among others. Their results can be easily interpreted in terms of the pairs-trade selling rule when treating the project value as the long position and investment cost as the short position. In this paper, we extend these results to incorporate markets with Markov trading constraints. In particular, a sequence of trading windows is imposed. One can only buy/sell stocks when the windows are open. We focus on a simple and easily implementable strategy, and its optimality and the sufficient conditions for a closed-form solution. Mathematical trading rules have been studied extensively in the literature, and various approaches have been proposed to determine optimal trading strategies. Zhang [16] considers a selling rule that involves two threshold levels, a target price, and a stop-loss limit, and aims to find the optimal threshold levels that maximize the expected profit. To achieve this, Zhang solves a set of two-point boundary value problems, which are a type of differential equation with conditions specified at two endpoints. The resulting optimal threshold levels can then be used to construct a profitable trading strategy. 
In [6], Guo and Zhang study the optimal selling rule under a model with switching Geometric Brownian motion, which is a type of stochastic process commonly used to model asset prices. To determine the optimal threshold levels, they use a smooth-fit technique that involves solving a set of algebraic equations. This approach allows them to obtain explicit expressions for the threshold levels, which can be used to design a profitable trading strategy. Dai et al. [1] proposed a trend-following rule based on a conditional probability indicator. They showed that the optimal trading rule can be determined by solving the associated Hamilton-Jacobi-Bellman equations, which are a type of partial differential equation commonly used in finance. Iwarere and Barmish [8] developed a similar idea using a confidence interval approach, while Merhi and Zervos [11] studied an investment capacity expansion/reduction problem using dynamic programming under a geometric Brownian motion market model. In the context of mean reversion trading, Zhang and Zhang [17] obtained a buy-low and sell-high policy by characterizing the 'low' and 'high' levels in terms of mean reversion parameters. Song and Zhang [13] studied pairs trading under a mean reversion model. It is shown that the optimal trading rule can be determined by threshold levels that can be obtained by solving a set of algebraic equations. A set of sufficient conditions is also provided to establish the desired optimality. Deshpande and Barmish [2] developed a control-theoretic approach to pairs trading that relaxes the requirement for spread functions. They demonstrated that their trading algorithm generates positive expected returns. Other pairs trading methods can be found in Elliott et al. [4] and Whistler [15]. More recently, Tie et al. [14] proposed an optimal pairs trading rule aimed at maximizing a discounted payoff function by sequentially initiating and closing positions of the pair. Using a dynamic programming approach under a geometric Brownian motion model, they showed that buying and selling decisions can be determined by two threshold curves in closed form. They also proved the optimality of their trading strategy. Studies of trading rules with trading constraints are important from an application point of view. Market liquidity is one of the causes of limited trading windows. Poor liquidity restricts one's ability to trade freely. Such trading conditions have greater impacts on small cap stocks. They affect pairs trading even more because the buy and the sell legs have to be executed together. Mathematical treatment of trading constraints can be found in Dupuis and Wang [3], in which they used a Poisson process to capture permissible trading moments. They obtained a closed form solution under a one-dimensional geometric Brownian motion. Further studies can be found in Menaldi and Robin [10]. They treated related optimal stopping with a class of general Markov-Feller processes and focused on theoretical characterizations of optimal stopping. In this paper, we consider a pairs selling rule under a two-dimensional geometric Brownian motion with trading constraints. We model the sequence of trading windows in terms of a two-state Markov chain. One can only buy/sell stocks when the trading windows are open. We focus on such constrained optimal stopping. We generalize the results of Hu and Oksendal [7] by imposing trading constraints. We show that the optimal selling rule can be determined by a single threshold curve. 
We also establish sufficient conditions that guarantee the optimality of the selling policy. In addition, we report our numerical experiments to demonstrate our results. This paper is organized as follows. In SS2, we formulate the pairs trading problem under consideration. In SS3, we study the associated HJB equations and their solutions. In SS4, we provide a verification theorem that guarantee the optimality of our selling rule. In SS5, we consider asymptotic properties of the trading switching policies as the trading constraint jump rates go to infinity. Numerical examples are given in SS6. Some concluding remarks are given in SS7. We postpone the proof of lemma 3.1 to the appendix. ## 2 Problem Formulation Our pairs trading involves two stocks: \(\mathbf{S}^{1}\) and \(\mathbf{S}^{2}\). Let \(\{X_{t}^{1},t\geq 0\}\) denote the prices of stock \(\mathbf{S}^{1}\) and \(\{X_{t}^{2},t\geq 0\}\) that of stock \(\mathbf{S}^{2}\). They satisfy the following stochastic differential equation: \[d\left(\begin{matrix}X_{t}^{1}\\ X_{t}^{2}\end{matrix}\right)=\left(\begin{matrix}X_{t}^{1}\\ &X_{t}^{2}\end{matrix}\right)\left[\left(\begin{matrix}\mu_{1}\\ \mu_{2}\end{matrix}\right)dt+\left(\begin{matrix}\sigma_{11}&\sigma_{12}\\ \sigma_{21}&\sigma_{22}\end{matrix}\right)d\left(\begin{matrix}W_{t}^{1}\\ W_{t}^{2}\end{matrix}\right)\right], \tag{1}\] where \(\mu_{i}\), \(i=1,2\), are the return rates, \(\sigma_{ij}\), \(i,j=1,2\), the volatility constants, and \((W_{t}^{1},W_{t}^{2})\) a 2-dimensional standard Brownian motion. The liquidity process \(\alpha_{t}\) is assumed to be a two-state Markov chain with state space \(\mathcal{M}=\{0,1\}\). We impose the following trading constraint: One can only buy/sell stocks when \(\alpha_{t}=1\). Let \(Q\) be the generator of \(\alpha_{t}\) given by \(Q=\left(\begin{array}{cc}-\lambda_{0}&\lambda_{0}\\ \lambda_{1}&-\lambda_{1}\end{array}\right)\), with \(\lambda_{0}>0\) and \(\lambda_{1}>0\). We assume \(\alpha_{t}\) and \((W_{t}^{1},W_{t}^{2})\) are independent. In this paper, we consider a pairs selling rule. We assume the corresponding pair's position consists of a one-share long position in stock \(\mathbf{S}^{1}\) and a one-share short position in stock \(\mathbf{S}^{2}\). This condition can be easily relaxed; see Tie et al. [14] for details. The problem is to determine an optimal stopping time \(\tau\) (subject to trading constraints) to fold the pairs position by selling \(\mathbf{S}^{1}\) and buying back \(\mathbf{S}^{2}\). Let \(K\) denote the transaction cost percentage (e.g., slippage and/or commission) associated with stock transactions. For example, the proceeds to close the pairs position at \(t\) is \((1-K)X_{t}^{1}-(1+K)X_{t}^{2}\). For ease of notation, let \(\beta_{\mathrm{b}}=1+K\) and \(\beta_{\mathrm{s}}=1-K\). Let \(\mathcal{F}_{t}=\sigma\{(X_{r}^{1},X_{r}^{2},\alpha_{r}):\ r\leq t\}\). We consider admissible stopping times \(\mathcal{S}=\{\tau:\mathcal{F}_{t}\) stopping times such that \(\tau<\infty\) only when \(\alpha_{\tau}=1\}\). Given the initial state \((x_{1},x_{2})\), \(\alpha=0,1\), and the admissible selling time \(\tau\), the corresponding reward function \[J(x_{1},x_{2},\alpha,\tau)=E\big{[}e^{-\rho\tau}(\beta_{\mathrm{s}}X_{\tau}^{ 1}-\beta_{\mathrm{b}}X_{\tau}^{2})\big{]}, \tag{2}\] where \(\rho>0\) is a given discount factor. The problem is to find an admissible stopping time \(\tau\) to maximize \(J\). 
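To make the setup concrete, the following sketch simulates the pair \((X_t^1,X_t^2)\) from (1) together with the trading-permission chain \(\alpha_t\), and estimates the reward (2) for a candidate threshold rule by Monte Carlo; all numerical values are placeholders, and the finite horizon is a simulation convenience rather than part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters: drifts, volatility matrix, permission-chain rates,
# discount rate, and transaction cost.
mu = np.array([0.25, 0.21])
sig = np.array([[0.29, 0.07], [0.07, 0.31]])
lam0, lam1 = 10.0, 10.0
rho, K = 0.5, 0.001
beta_s, beta_b = 1 - K, 1 + K

def reward_once(x0, alpha0, k, T=2.0, dt=1e-3):
    """Close the pair the first time X^2/X^1 <= k while alpha = 1 (trading allowed);
    returns the discounted proceeds e^{-rho*tau}(beta_s X^1 - beta_b X^2)."""
    x, alpha = np.array(x0, float), alpha0
    a_diag = np.sum(sig**2, axis=1)             # a_11, a_22
    for n in range(int(T / dt)):
        t = n * dt
        if alpha == 1 and x[1] / x[0] <= k:
            return np.exp(-rho * t) * (beta_s * x[0] - beta_b * x[1])
        dW = rng.normal(scale=np.sqrt(dt), size=2)
        x *= np.exp((mu - 0.5 * a_diag) * dt + sig @ dW)
        rate = lam0 if alpha == 0 else lam1     # two-state chain with generator Q
        if rng.random() < rate * dt:
            alpha = 1 - alpha
    return np.exp(-rho * T) * (beta_s * x[0] - beta_b * x[1])  # truncation at T

est = np.mean([reward_once((1.0, 0.9), 1, k=0.7) for _ in range(500)])
print(f"Monte Carlo estimate of J for this rule: {est:.4f}")
```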
Let \(V_{\alpha}(x_{1},x_{2})\) denote the corresponding value function: \[V_{\alpha}(x_{1},x_{2})=\sup_{\tau}J(x_{1},x_{2},\alpha,\tau). \tag{3}\] We impose the following conditions throughout this paper. **(A1)**\(\rho>\mu_{1}\) and \(\rho>\mu_{2}\). Under these conditions, we have the lower and upper bounds for \(V\): \[-\beta_{\rm b}x_{2}\leq V_{\alpha}(x_{1},x_{2})\leq\beta_{\rm s}x_{1}. \tag{4}\] Actually, for any \(\tau\in{\cal S}\), we have \(-\beta_{\rm b}E\big{[}e^{-\rho\tau}X_{\tau}^{2}\big{]}\leq J(x_{1},x_{2},\alpha,\tau)\leq\beta_{\rm s}E\big{[}e^{-\rho\tau}X_{\tau}^{1}\big{]}\). The rest follows from Dynkin's formula \[E[e^{-\rho\tau}X_{\tau}^{j}]=\left(x_{j}+E\int_{0}^{\tau}e^{-\rho t}X_{t}^{j}(-\rho+\mu_{j})dt\right)\leq x_{j},\ {\rm for}\ j=1,2.\] In addition, in view of the value function definition, we have \[V_{1}(x_{1},x_{2})\geq J(x_{1},x_{2},1,\tau)|_{\tau=0}=\beta_{\rm s}x_{1}-\beta_{\rm b}x_{2}.\] ## 3 HJB Equations In this paper, we follow the dynamic programming approach and study the associated HJB equations. Let \[{\cal A}=\frac{1}{2}\left\{a_{11}x_{1}^{2}\frac{\partial^{2}}{\partial x_{1}^{2}}+2a_{12}x_{1}x_{2}\frac{\partial^{2}}{\partial x_{1}\partial x_{2}}+a_{22}x_{2}^{2}\frac{\partial^{2}}{\partial x_{2}^{2}}\right\}+\mu_{1}x_{1}\frac{\partial}{\partial x_{1}}+\mu_{2}x_{2}\frac{\partial}{\partial x_{2}},\] where \(a_{11}=\sigma_{11}^{2}+\sigma_{12}^{2},\ a_{12}=\sigma_{11}\sigma_{21}+\sigma_{12}\sigma_{22}\), and \(a_{22}=\sigma_{21}^{2}+\sigma_{22}^{2}\). The associated HJB equations have the form: For \(x_{1},x_{2}>0\), \[\left\{\begin{array}{l}[(\rho+\lambda_{0})-{\cal A}]v_{0}(x_{1},x_{2})=\lambda_{0}v_{1}(x_{1},x_{2}),\\ \min\left\{[(\rho+\lambda_{1})-{\cal A}]v_{1}(x_{1},x_{2})-\lambda_{1}v_{0}(x_{1},x_{2}),\ v_{1}(x_{1},x_{2})-\beta_{\rm s}x_{1}+\beta_{\rm b}x_{2}\right\}=0.\end{array}\right. \tag{5}\] To solve the above HJB equations, we first convert them into single variable equations. Let \(y=x_{2}/x_{1}\) and \(v_{i}(x_{1},x_{2})=x_{1}w_{i}(x_{2}/x_{1})\), for some function \(w_{i}(y)\) and \(i=0,1\). Then we have by direct calculation that \[\begin{array}{l}\frac{\partial v_{i}}{\partial x_{1}}=w_{i}(y)-yw_{i}^{\prime}(y),\ \frac{\partial v_{i}}{\partial x_{2}}=w_{i}^{\prime}(y),\\ \frac{\partial^{2}v_{i}}{\partial x_{1}^{2}}=\frac{y^{2}w_{i}^{\prime\prime}(y)}{x_{1}},\ \frac{\partial^{2}v_{i}}{\partial x_{2}^{2}}=\frac{w_{i}^{\prime\prime}(y)}{x_{1}},\ {\rm and}\ \frac{\partial^{2}v_{i}}{\partial x_{1}\partial x_{2}}=-\frac{yw_{i}^{\prime\prime}(y)}{x_{1}}.\end{array}\] Write \({\cal A}v_{i}\) in terms of \(w_{i}\) to obtain \[{\cal A}v_{i}=x_{1}\left\{\frac{1}{2}\left[a_{11}-2a_{12}+a_{22}\right]y^{2}w_{i}^{\prime\prime}(y)+(\mu_{2}-\mu_{1})yw_{i}^{\prime}(y)+\mu_{1}w_{i}(y)\right\}.\] Then, the HJB equations can be given in terms of \(y\) and \(w_{i}\) as follows: \[\left\{\begin{array}{l}[(\rho+\lambda_{0})-{\cal L}]w_{0}(y)=\lambda_{0}w_{1}(y),\\ \min\left\{[(\rho+\lambda_{1})-{\cal L}]w_{1}(y)-\lambda_{1}w_{0}(y),\ w_{1}(y)-\beta_{\rm s}+\beta_{\rm b}y\right\}=0,\end{array}\right. \tag{6}\] where \[{\cal L}[w_{i}(y)]=\sigma y^{2}w_{i}^{\prime\prime}(y)+(\mu_{2}-\mu_{1})yw_{i}^{\prime}(y)+\mu_{1}w_{i}(y),\] with \(\sigma=(a_{11}-2a_{12}+a_{22})/2\geq 0\). We only consider the case when \(\sigma\neq 0\). If \(\sigma=0\), the problem reduces to a first-order case and can be treated in a similar way. 
### Solution of the HJB Equation Intuitively, one should close the pairs position when \(X_{t}^{1}\) is large and \(X_{t}^{2}\) is small. Namely, we expect a threshold \(k\) so that the pairs position is to be closed when \(y=x_{2}/x_{1}\leq k\). In view of this, we focus on searching for such \(k\) so that the second equation in (6) has the form \(w_{1}(y)=\beta_{\mathrm{s}}-\beta_{\mathrm{b}}y\), for \(y\in(0,k)\); and \([(\rho+\lambda_{1})-\mathcal{L}]w_{1}(y)=\lambda_{1}w_{0}(y)\), for \(y\in(k,\infty)\), Since \(w_{0}(y)\) satisfies \[[(\rho+\lambda_{0})-\mathcal{L}]w_{0}(y)=\lambda_{0}w_{1}(y)\] for all \(y\), we can combine these equations on the interval \((k,\infty)\) and get a system of \(w_{0}(y)\) and \(w_{1}(y)\): \[[(\rho+\lambda_{1})-\mathcal{L}]w_{1}(y)=\lambda_{1}w_{0}(y)\quad\text{and} \quad[(\rho+\lambda_{0})-\mathcal{L}]w_{0}(y)=\lambda_{0}w_{1}(y) \tag{7}\] We can reduce the system into a single equation about \(w_{1}(y)\) (or \(w_{0}(y)\)); \[\{[(\rho+\lambda_{0})-\mathcal{L}][(\rho+\lambda_{1})-\mathcal{L}]-\lambda_{ 0}\lambda_{1}\}w_{1}(y)=0.\] The above equation can be simplified to \[[(\mathcal{L}-\rho)(\mathcal{L}-\rho-\lambda_{0}-\lambda_{1})]w_{1}(y)=0.\] This equation is a Cauchy-Euler type equation and its solutions are the linear combination of \(y^{\delta}\) with \(\delta\) satisfying the polynomial equation: \[[\sigma\delta(\delta-1)+(\mu_{2}-\mu_{1})\delta+\mu_{1}-\rho][\sigma\delta( \delta-1)+(\mu_{2}-\mu_{1})\delta+\mu_{1}-\rho-\lambda_{0}-\lambda_{1}]=0. \tag{8}\] Simplify further and we can find \(\delta\) explicitly: \[\left[\delta^{2}-\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right)\delta-\frac{ \rho-\mu_{1}}{\sigma}\right]\left[\delta^{2}-\left(1+\frac{\mu_{1}-\mu_{2}}{ \sigma}\right)\delta-\frac{\rho+\lambda_{0}+\lambda_{1}-\mu_{1}}{\sigma} \right]=0.\] There are four real roots \(\delta_{1}\), \(\delta_{2}\), \(\delta_{3}\) and \(\delta_{4}\) (by direct calculation \(\delta_{1}<0,\delta_{3}<0\) and \(\delta_{2}>1,\delta_{4}>1\)) given by \[\begin{split}\delta_{1}&=\frac{1}{2}\Bigg{(}1+ \frac{\mu_{1}-\mu_{2}}{\sigma}-\sqrt{\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma} \right)^{2}+\frac{4(\rho-\mu_{1})}{\sigma}}\Bigg{)},\\ \delta_{2}&=\frac{1}{2}\Bigg{(}1+\frac{\mu_{1}-\mu_{ 2}}{\sigma}+\sqrt{\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2}+\frac{4( \rho-\mu_{1})}{\sigma}}\Bigg{)},\\ \delta_{3}&=\frac{1}{2}\Bigg{(}1+\frac{\mu_{1}-\mu_{ 2}}{\sigma}-\sqrt{\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2}+\frac{4( \rho+\lambda_{0}+\lambda_{1}-\mu_{1})}{\sigma}}\Bigg{)},\\ \delta_{4}&=\frac{1}{2}\Bigg{(}1+\frac{\mu_{1}-\mu_{ 2}}{\sigma}+\sqrt{\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2}+\frac{4( \rho+\lambda_{0}+\lambda_{1}-\mu_{1})}{\sigma}}\Bigg{)}.\end{split} \tag{9}\] Using the inequalities in (4), we have \(-\beta_{\mathrm{b}}y\leq w_{i}(y)\leq\beta_{\mathrm{s}}\). In view of this, the general solution of \(w_{1}(y)\) for \(y\in(k,\infty)\) should be of the form: \[w_{1}(y)=C_{1}y^{\delta_{1}}+C_{3}y^{\delta_{3}}, \tag{10}\] for some constants \(C_{1}\) and \(C_{3}\). 
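Since the four exponents in (9) are explicit, they are immediate to evaluate; a small sketch follows (the parameter values in the sanity check are arbitrary placeholders satisfying (A1)).

```python
import numpy as np

def exponents(mu1, mu2, sigma, rho, lam0, lam1):
    """Roots delta_1..delta_4 of the factored Cauchy-Euler polynomial (8)-(9)."""
    b = 1.0 + (mu1 - mu2) / sigma
    def roots(c):                       # solutions of d^2 - b*d - c = 0
        disc = np.sqrt(b**2 + 4.0 * c)
        return 0.5 * (b - disc), 0.5 * (b + disc)
    d1, d2 = roots((rho - mu1) / sigma)
    d3, d4 = roots((rho + lam0 + lam1 - mu1) / sigma)
    return d1, d2, d3, d4

d1, d2, d3, d4 = exponents(mu1=0.25, mu2=0.21, sigma=0.05, rho=0.5, lam0=10, lam1=10)
assert d1 < 0 and d3 < 0 and d2 > 1 and d4 > 1   # as claimed in the text
```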
Once we find \(w_{1}(y)\), \(w_{0}(y)\) is given by \[w_{0}(y)=\frac{[(\rho+\lambda_{1})-\mathcal{L}]w_{1}(y)}{\lambda_{1}}=w_{1}(y)- \frac{1}{\lambda_{1}}(\mathcal{L}-\rho)w_{1}(y).\] Note that \(w_{1}(y)=C_{1}y^{\delta_{1}}+C_{3}y^{\delta_{3}}\) and \[(\mathcal{L}-\rho)(y^{\delta_{1}})=0\quad\text{and}\quad(\mathcal{L}-\rho- \lambda_{0}-\lambda_{1})y^{\delta_{3}}=0.\] This yields \[w_{0}(y) =w_{1}(y)-\frac{1}{\lambda_{1}}(\mathcal{L}-\rho)w_{1}(y)\] \[=C_{1}y^{\delta_{1}}+C_{3}y^{\delta_{3}}-\frac{1}{\lambda_{1}}( \mathcal{L}-\rho)[C_{1}y^{\delta_{1}}+C_{3}y^{\delta_{3}}]\] \[=C_{1}y^{\delta_{1}}+C_{3}y^{\delta_{3}}-\frac{1}{\lambda_{1}}( \mathcal{L}-\rho)[C_{3}y^{\delta_{3}}]\] \[=C_{1}y^{\delta_{1}}+C_{3}y^{\delta_{3}}-\frac{\lambda_{0}+ \lambda_{1}}{\lambda_{1}}C_{3}y^{\delta_{3}}\] \[=C_{1}y^{\delta_{1}}-\frac{\lambda_{0}}{\lambda_{1}}C_{3}y^{ \delta_{3}}.\] Let \(\eta=(\lambda_{0}/\lambda_{1})\). Then, we have \[w_{0}(y)=C_{1}y^{\delta_{1}}-\eta C_{3}y^{\delta_{3}}\quad\text{and}\quad w_{ 1}(y)=C_{1}y^{\delta_{1}}+C_{3}y^{\delta_{3}} \tag{11}\] on the interval \((k,\infty)\). On the interval \((0,k)\), \(w_{1}(y)=\beta_{\text{s}}-\beta_{\text{b}}y\) and \[[(\rho+\lambda_{0})-\mathcal{L}]w_{0}(y)=\lambda_{0}w_{1}(y)=\lambda_{0}(\beta _{\text{s}}-\beta_{\text{b}}y).\] A particular solution for \[(\rho+\lambda_{0}-\mathcal{L})w_{0}(y)=\lambda_{0}(\beta_{\text{s}}-\beta_{ \text{b}}y)\] can be given by \(w_{0}(y)=a_{0}\beta_{\text{s}}-a_{1}\beta_{\text{b}}y\), with \[a_{0}=\frac{\lambda_{0}}{\rho+\lambda_{0}-\mu_{1}}\quad\text{and}\quad a_{1}= \frac{\lambda_{0}}{\rho+\lambda_{0}-\mu_{2}}. \tag{12}\] To find a general solution to the above non-homogeneous equation, we only need to add the homogeneous equation \((\rho+\lambda_{0}-\mathcal{L})w_{0}=0\) to the above particular solution. This is also of Cauchy-Euler type and its solution is of the form \(y^{\gamma}\). Then \(\gamma\) must be the roots of the quadratic equation \[\sigma\gamma(\gamma-1)+(\mu_{2}-\mu_{1})\gamma-(\rho+\lambda_{0}-\mu_{1})=0.\] and are given by \[\gamma_{1} =\frac{1}{2}\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}-\sqrt{\left(1 +\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2}+\frac{4(\rho+\lambda_{0}-\mu_{1})} {\sigma}}\right)<0, \tag{13}\] \[\gamma_{2} =\frac{1}{2}\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}+\sqrt{\left(1 +\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2}+\frac{4(\rho+\lambda_{0}-\mu_{1})} {\sigma}}\right)>1.\] The general solution for \(w_{0}(y)\) on \((0,k)\) is given by \[w_{0}=C_{2}y^{\gamma_{2}}+a_{0}\beta_{\mathrm{s}}-a_{1}\beta_{\mathrm{b}}y, \tag{14}\] for some constant \(C_{2}\). Here, we skipped the term \(y^{\gamma_{1}}\) because the solution should be bounded at \(y=0\). We summarize the computation so far as follows. The solutions of the HJB equations (6) have the following form: \[w_{0}(y) =\left\{\begin{array}{ll}C_{2}y^{\gamma_{2}}+a_{0}\beta_{\mathrm{s }}-a_{1}\beta_{\mathrm{b}}y&\mbox{ if }0<y<k,\\ C_{1}y^{\delta_{1}}-\eta C_{3}y^{\delta_{3}}&\mbox{ if }y\geq k,\\ \end{array}\right. \tag{15}\] \[w_{1}(y) =\left\{\begin{array}{ll}\beta_{\mathrm{s}}-\beta_{\mathrm{b}}y& \mbox{ if }0<y<k,\\ C_{1}y^{\delta_{1}}+C_{3}y^{\delta_{3}}&\mbox{ if }y\geq k.\end{array}\right.\] In what follows, we use a smooth-fit approach to determine the parameter values \(k\), and \(C_{j}\), \(j=1,2,3\). ### Smooth-fit conditions. Smooth-fit conditions in connection with optimal stopping typically require the value functions to be continuously differentiable. 
First, the continuous differentiability of \(w_{1}\) at \(y=k\) yields \[\beta_{\mathrm{s}}-\beta_{\mathrm{b}}k= C_{1}k^{\delta_{1}}+C_{3}k^{\delta_{3}}, \tag{16}\] \[-\beta_{\mathrm{b}}k= C_{1}\delta_{1}k^{\delta_{1}}+C_{3}\delta_{3}k^{\delta_{3}}.\] Similarly, continuous differentiability of \(w_{0}\) at \(y=k\) leads to \[C_{2}k^{\gamma_{2}}+a_{0}\beta_{\mathrm{s}}-a_{1}\beta_{\mathrm{b}}k= C_{1}k^{\delta_{1}}-\eta C_{3}k^{\delta_{3}}, \tag{17}\] \[C_{2}\gamma_{2}k^{\gamma_{2}}-a_{1}\beta_{\mathrm{b}}k= \delta_{1}C_{1}k^{\delta_{1}}-\eta\delta_{3}C_{3}k^{\delta_{3}}.\] Here we have four equations with four unknown parameters \(k\), \(C_{j}\) for \(j=1,2,3\). Let \[\Phi(k;\delta_{1},\delta_{3})=\begin{pmatrix}k^{\delta_{1}}&k^{\delta_{3}}\\ \delta_{1}k^{\delta_{1}}&\delta_{3}k^{\delta_{3}}\end{pmatrix}.\] Its inverse is given by \[\Phi^{-1}(k;\delta_{1},\delta_{3})=\frac{1}{\delta_{3}-\delta_{1}}\begin{pmatrix} \delta_{3}k^{-\delta_{1}}&-k^{-\delta_{1}}\\ -\delta_{1}k^{-\delta_{3}}&k^{-\delta_{3}}\end{pmatrix}.\] Using \(\Phi\), we rewrite (16) and (17): \[\Phi(k;\delta_{1},\delta_{3})\begin{pmatrix}C_{1}\\ C_{3}\end{pmatrix}=\begin{pmatrix}\beta_{\mathrm{s}}-\beta_{\mathrm{b}}k\\ -\beta_{\mathrm{b}}k\end{pmatrix}\mbox{ and }\Phi(k;\delta_{1},\delta_{3}) \begin{pmatrix}C_{1}\\ -\eta C_{3}\end{pmatrix}=\begin{pmatrix}C_{2}k^{\gamma_{2}}+a_{0}\beta_{ \mathrm{s}}-a_{1}\beta_{\mathrm{b}}k\\ C_{2}\gamma_{2}k^{\gamma_{2}}-a_{1}\beta_{\mathrm{b}}k\end{pmatrix}.\] It follows that \[\begin{pmatrix}C_{1}\\ C_{3}\end{pmatrix}=\Phi^{-1}(k;\delta_{1},\delta_{3})\begin{pmatrix}\beta_{ \mathrm{s}}-\beta_{\mathrm{b}}k\\ -\beta_{\mathrm{b}}k\end{pmatrix}, \tag{18}\] and \[\begin{pmatrix}C_{1}\\ C_{3}\end{pmatrix}=\begin{pmatrix}1&0\\ 0&-\eta^{-1}\end{pmatrix}\Phi^{-1}(k;\delta_{1},\delta_{3})\begin{pmatrix}C_{2}k ^{\gamma_{2}}+a_{0}\beta_{\mathrm{s}}-a_{1}\beta_{\mathrm{b}}k\\ C_{2}\gamma_{2}k^{\gamma_{2}}-a_{1}\beta_{\mathrm{b}}k\end{pmatrix}. \tag{19}\] Eliminate \(C_{1}\) and \(C_{3}\) to obtain \[\Phi(k;\delta_{1},\delta_{3})\begin{pmatrix}1&0\\ 0&-\eta\end{pmatrix}\Phi^{-1}(k;\delta_{1},\delta_{3})\begin{pmatrix}\beta_{ \mathrm{s}}-\beta_{\mathrm{b}}k\\ -\beta_{\mathrm{b}}k\end{pmatrix}=\begin{pmatrix}C_{2}k^{\gamma_{2}}+a_{0} \beta_{\mathrm{s}}-a_{1}\beta_{\mathrm{b}}k\\ C_{2}\gamma_{2}k^{\gamma_{2}}-a_{1}\beta_{\mathrm{b}}k\end{pmatrix}. \tag{20}\] Some simple calculations yield \[\Phi(k;\delta_{1},\delta_{3})\begin{pmatrix}1&0\\ 0&-\eta\end{pmatrix}\Phi^{-1}(k;\delta_{1},\delta_{3})=\frac{1}{\delta_{3}- \delta_{1}}\begin{pmatrix}\delta_{3}+\eta\delta_{1}&-(1+\eta)\\ \delta_{1}\delta_{3}(1+\eta)&-(\delta_{1}+\eta\delta_{3})\end{pmatrix}.\] We note that this matrix is independent of \(k\). 
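The fact that \(\Phi\,\mathrm{diag}(1,-\eta)\,\Phi^{-1}\) is independent of \(k\) and equals the displayed closed form is easy to confirm numerically; a short check with arbitrary (assumed) values of \(\delta_{1},\delta_{3},\eta\) is given below.

```python
import numpy as np

def Phi(k, d1, d3):
    return np.array([[k**d1, k**d3], [d1 * k**d1, d3 * k**d3]])

d1, d3, eta = -1.5, -3.5, 0.8           # assumed illustrative values
M = np.diag([1.0, -eta])
products = [Phi(k, d1, d3) @ M @ np.linalg.inv(Phi(k, d1, d3)) for k in (0.5, 1.0, 2.0)]

closed_form = np.array([[d3 + eta * d1, -(1 + eta)],
                        [d1 * d3 * (1 + eta), -(d1 + eta * d3)]]) / (d3 - d1)
assert all(np.allclose(p, closed_form) for p in products)   # same matrix for every k
```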
This reduces (20) to \[\frac{1}{\delta_{3}-\delta_{1}}\begin{pmatrix}\delta_{3}+\eta\delta_{1}&-(1+ \eta)\\ \delta_{1}\delta_{3}(1+\eta)&-(\delta_{1}+\eta\delta_{3})\end{pmatrix}\begin{pmatrix} \beta_{\rm s}-\beta_{\rm b}k\\ -\beta_{\rm b}k\end{pmatrix}=\begin{pmatrix}C_{2}k^{\gamma_{2}}+a_{0}\beta_{ \rm s}-a_{1}\beta_{\rm b}k\\ C_{2}\gamma_{2}k^{\gamma_{2}}-a_{1}\beta_{\rm b}k\end{pmatrix}.\] This leads to two equations: \[\frac{(\delta_{3}+\eta\delta_{1})(\beta_{\rm s}-\beta_{\rm b}k)+(1 +\eta)\beta_{\rm b}k}{\delta_{3}-\delta_{1}}=C_{2}k^{\gamma_{2}}+a_{0}\beta_{ \rm s}-a_{1}\beta_{\rm b}k,\] \[\frac{\delta_{1}\delta_{3}(1+\eta)(\beta_{\rm s}-\beta_{\rm b}k)+ (\delta_{1}+\eta\delta_{3})\beta_{\rm b}k}{\delta_{3}-\delta_{1}}=C_{2}\gamma_ {2}k^{\gamma_{2}}-a_{1}\beta_{\rm b}k.\] Eliminating \(C_{2}k^{\gamma_{2}}\), we obtain an equation containing only \(k\): \[\frac{[(\delta_{3}+\eta\delta_{1})(\beta_{\rm s}-\beta_{\rm b}k)+ (1+\eta)\beta_{\rm b}k]\gamma_{2}-[\delta_{1}\delta_{3}(1+\eta)(\beta_{\rm s} -\beta_{\rm b}k)+(\delta_{1}+\eta\delta_{3})\beta_{\rm b}k]}{\delta_{3}- \delta_{1}}\] \[= a_{0}\gamma_{2}\beta_{\rm s}-a_{1}(\gamma_{2}-1)\beta_{\rm b}k.\] This leads to the solution \[k=\frac{\delta_{1}\delta_{3}(1+\eta)-(\delta_{3}+\eta\delta_{1})\gamma_{2}-a_ {0}(\delta_{1}-\delta_{3})\gamma_{2}}{(1+\eta)(\delta_{1}\delta_{3}+\gamma_{2 })-(\delta_{3}+\eta\delta_{1})\gamma_{2}-(\delta_{1}+\eta\delta_{3})-a_{1}( \gamma_{2}-1)(\delta_{1}-\delta_{3})}\cdot\frac{\beta_{\rm s}}{\beta_{\rm b}}. \tag{21}\] Note that \(k>0\) and this can be shown by proving that both the numerator and denominator in (21) are positive. Since \(0<a_{0}<1\) and \(0<a_{1}<1\), we have \[\delta_{1}\delta_{3}(1+\eta)-(\delta_{3}+\eta\delta_{1})\gamma_{2 }-a_{0}(\delta_{1}-\delta_{3})\gamma_{2}\] \[>\delta_{1}\delta_{3}(1+\eta)-(\delta_{3}+\eta\delta_{1})\gamma_{ 2}-(\delta_{1}-\delta_{3})\gamma_{2}\] \[=(1+\eta)(-\delta_{1})(\gamma_{2}-\delta_{3})>0.\] Moreover, \[(1+\eta)(\delta_{1}\delta_{3}+\gamma_{2})-(\delta_{3}+\eta\delta _{1})\gamma_{2}-(\delta_{1}+\eta\delta_{3})-a_{1}(\gamma_{2}-1)(\delta_{1}- \delta_{3})\] \[>(1+\eta)(\delta_{1}\delta_{3}+\gamma_{2})-(\delta_{3}+\eta \delta_{1})\gamma_{2}-(\delta_{1}+\eta\delta_{3})-(\gamma_{2}-1)(\delta_{1}- \delta_{3})\] \[=(1+\eta)(\delta_{1}\delta_{3}+\gamma_{2})-\gamma_{2}(\delta_{3}+ \eta\delta_{1}+\delta_{1}-\delta_{3})-(\delta_{1}+\eta\delta_{3}-\delta_{1}+ \delta_{3})\] \[=(1+\eta)(\delta_{1}\delta_{3}+\gamma_{2}-\gamma_{2}\delta_{1}- \delta_{3})\] \[=(1+\eta)(\gamma_{2}-\delta_{3})(1-\delta_{1})>0.\] Therefore, \(k>0\). Next, we solve for the rest of parameters. From (18), we have \[C_{1}=\frac{-\delta_{3}\beta_{\rm s}+(\delta_{3}-1)\beta_{\rm b}k}{(\delta_{1 }-\delta_{3})k^{\delta_{1}}},\quad\text{and}\quad C_{3}=\frac{\delta_{1}\beta _{\rm s}-(\delta_{1}-1)\beta_{\rm b}k}{(\delta_{1}-\delta_{3})k^{\delta_{3}}}. 
\tag{22}\] Similarly, (19) yields \[C_{1}=\frac{(\gamma_{2}-\delta_{3})C_{2}k^{\gamma_{2}}-a_{0}\delta_{3}\beta_{\rm s}+a_{1}(\delta_{3}-1)\beta_{\rm b}k}{(\delta_{1}-\delta_{3})k^{\delta_{1}}}, \tag{23}\] \[C_{3}=\frac{(\gamma_{2}-\delta_{1})C_{2}k^{\gamma_{2}}-a_{0}\delta_{1}\beta_{\rm s}+a_{1}(\delta_{1}-1)\beta_{\rm b}k}{\eta(\delta_{1}-\delta_{3})k^{\delta_{3}}}.\] Combine (22) and (23) to obtain \[(\gamma_{2}-\delta_{3})C_{2}k^{\gamma_{2}}+(1-a_{1})(1-\delta_{3})\beta_{\rm b}k =(a_{0}-1)\delta_{3}\beta_{\rm s},\] \[(\gamma_{2}-\delta_{1})C_{2}k^{\gamma_{2}}-(\eta+a_{1})(1-\delta_{1})\beta_{\rm b}k =(a_{0}+\eta)\delta_{1}\beta_{\rm s}.\] Eliminating the term linear in \(k\), we have \[C_{2}=\frac{[(1-a_{0})(\eta+a_{1})(1-\delta_{1})(-\delta_{3})+(1-a_{1})(\eta+a_{0})(1-\delta_{3})\delta_{1}]\beta_{\rm s}}{(1+\eta)(\delta_{1}\delta_{3}+\gamma_{2})-\delta_{1}[(a_{1}+\eta)\gamma_{2}+(1-a_{1})]-\delta_{3}[(a_{1}+\eta)+\gamma_{2}(1-a_{1})]k^{\gamma_{2}}}. \tag{24}\] Finally, we give a lemma needed in the proof of a verification theorem to follow. Its proof is technical and lengthy. We provide it in the Appendix. **Lemma 3.1**.: _Under Assumption_ (A1)_, the constants \(C_{1}\), \(C_{2}\), and \(C_{3}\) are positive._ ## 4 A Verification Theorem In this section, we first show that the functions \(w_{0}\) and \(w_{1}\) are solutions of the HJB equations (6). Then, we provide a verification theorem. **Theorem 4.1**.: _Assume_ (A1)_. Then, the following functions \(w_{0}\) and \(w_{1}\) satisfy the HJB equations (6):_ \[w_{0}(y)=\left\{\begin{array}{ll}C_{2}y^{\gamma_{2}}+a_{0}\beta_{\rm s}-a_{1}\beta_{\rm b}y&\mbox{ if }0<y<k,\\ C_{1}y^{\delta_{1}}-\eta C_{3}y^{\delta_{3}}&\mbox{ if }y\geq k,\end{array}\right.\qquad w_{1}(y)=\left\{\begin{array}{ll}\beta_{\rm s}-\beta_{\rm b}y&\mbox{ if }0<y<k,\\ C_{1}y^{\delta_{1}}+C_{3}y^{\delta_{3}}&\mbox{ if }y\geq k.\end{array}\right.\] Proof.: It suffices to show the following variational inequalities hold: \[(0,k): (\rho+\lambda_{1}-\mathcal{L})w_{1}(y)\geq\lambda_{1}w_{0}(y),\] \[(k,\infty): w_{1}(y)\geq\beta_{\rm s}-\beta_{\rm b}y.\] Recall that on the interval \((0,k)\), \[w_{0}(y)=C_{2}y^{\gamma_{2}}+a_{0}\beta_{\rm s}-a_{1}\beta_{\rm b}y\quad\mbox{and}\quad w_{1}(y)=\beta_{\rm s}-\beta_{\rm b}y.\] Then \[(\rho+\lambda_{1}-\mathcal{L})w_{1}(y)=(\rho+\lambda_{1}-\mu_{1})\beta_{\rm s}-(\rho+\lambda_{1}-\mu_{2})\beta_{\rm b}y.\] We let \[\psi(y) =(\rho+\lambda_{1}-\mathcal{L})w_{1}(y)-\lambda_{1}w_{0}(y)\] \[=(\rho+\lambda_{1}-\mu_{1})\beta_{\rm s}-(\rho+\lambda_{1}-\mu_{2})\beta_{\rm b}y-\lambda_{1}(C_{2}y^{\gamma_{2}}+a_{0}\beta_{\rm s}-a_{1}\beta_{\rm b}y)\] \[=[\rho+(1-a_{0})\lambda_{1}-\mu_{1}]\beta_{\rm s}-[\rho+(1-a_{1})\lambda_{1}-\mu_{2}]\beta_{\rm b}y-\lambda_{1}C_{2}y^{\gamma_{2}}.\] We need to show that \(\psi(y)\geq 0\) on the interval \((0,k)\). First, we note that \[\psi(0)=[\rho+(1-a_{0})\lambda_{1}-\mu_{1}]\beta_{\rm s}>0\mbox{ and }\psi^{\prime}(0)=-[\rho+(1-a_{1})\lambda_{1}-\mu_{2}]\beta_{\rm b}<0.\] We also have \[\psi^{\prime\prime}(y)=-C_{2}\lambda_{1}\gamma_{2}(\gamma_{2}-1)y^{\gamma_{2}-2}<0\quad\mbox{since }\gamma_{2}>1\mbox{ and }C_{2}>0.\] Hence \(\psi^{\prime}(y)\) is decreasing and \(\psi^{\prime}(y)<0\) on the interval \((0,k)\). It suffices to show that \(\psi(k)\geq 0\), which implies \(\psi(y)\geq 0\) for \(0\leq y\leq k\). 
Introduce new functions \(w_{j}^{+}\) and \(w_{j}^{-}\) such that \[w_{j}(y)=\begin{cases}w_{j}^{-}(y)&0\leq y<k,\\ w_{j}^{+}(y)&y\geq k,\end{cases}\quad\text{for }j=0,1.\] Then, following from the smooth-fit conditions, we have, for \(j=0,1\), \[w_{j}^{-}(k)=w_{j}^{+}(k)\quad\text{and}\quad[w_{j}^{-}]^{\prime}(k)=[w_{j}^{+}]^{\prime}(k).\] Moreover, \(\psi(k)\geq 0\) is equivalent to \[(\rho+\lambda_{1}-\mathcal{L})w_{1}^{-}(y)|_{y=k}\geq\lambda_{1}w_{0}^{-}(k).\] Note that \[(\rho+\lambda_{1}-\mathcal{L})w_{1}^{+}(y)|_{y=k}=\lambda_{1}w_{0}^{+}(y)|_{y=k}=\lambda_{1}w_{0}^{-}(y)|_{y=k}.\] This reduces the proof of \(\psi(k)\geq 0\) to \[(\rho+\lambda_{1}-\mathcal{L})w_{1}^{-}(y)|_{y=k}\geq(\rho+\lambda_{1}-\mathcal{L})w_{1}^{+}(y)|_{y=k}.\] Then we use \[w_{j}^{-}(k):=\lim_{y\uparrow k}w_{j}^{-}(y)=w_{j}^{+}(k)\quad\text{and}\quad[w_{j}^{-}]^{\prime}(k):=\lim_{y\uparrow k}[w_{j}^{-}]^{\prime}(y)=[w_{j}^{+}]^{\prime}(k),\] for \(j=0,1\), to the above to get \[-\sigma k^{2}[w_{1}^{-}]^{\prime\prime}(y)|_{y=k}\geq-\sigma k^{2}[w_{1}^{+}]^{\prime\prime}(y)|_{y=k},\] which is equivalent to \[[w_{1}^{-}]^{\prime\prime}(y)|_{y=k}\leq[w_{1}^{+}]^{\prime\prime}(y)|_{y=k}.\] Recall Lemma 3.1. The latter holds because \[[w_{1}^{-}]^{\prime\prime}(y)|_{y=k}=0\quad\text{and}\quad[w_{1}^{+}]^{\prime\prime}(y)|_{y=k}=C_{1}\delta_{1}(\delta_{1}-1)k^{\delta_{1}-2}+C_{3}\delta_{3}(\delta_{3}-1)k^{\delta_{3}-2}>0.\] Therefore, \(\psi(k)\geq 0\) and hence \(\psi(y)\geq 0\) on \((0,k)\). On the interval \((k,\infty)\) we need to show that \(w_{1}(y)\geq\beta_{\text{s}}-\beta_{\text{b}}y\) with \(w_{1}(y)=C_{1}y^{\delta_{1}}+C_{3}y^{\delta_{3}}\). Let \(\phi(y)=C_{1}y^{\delta_{1}}+C_{3}y^{\delta_{3}}-\beta_{\text{s}}+\beta_{\text{b}}y\). Then the smooth-fitting conditions imply \(\phi(k)=\phi^{\prime}(k)=0\). Moreover, \[\phi^{\prime}(y) =C_{1}\delta_{1}y^{\delta_{1}-1}+C_{3}\delta_{3}y^{\delta_{3}-1}+\beta_{\text{b}},\] \[\phi^{\prime\prime}(y) =C_{1}\delta_{1}(\delta_{1}-1)y^{\delta_{1}-2}+C_{3}\delta_{3}(\delta_{3}-1)y^{\delta_{3}-2}.\] Since \(\delta_{1}<0\) and \(\delta_{3}<0\), \(\phi^{\prime\prime}(y)>0\) on the interval \([k,\infty)\). This implies \(\phi^{\prime}(y)\) is increasing, and \(\phi^{\prime}(k)=0\) implies \(\phi^{\prime}(y)>0\) for \(y>k\). Hence \(\phi(y)\) is increasing, and \(\phi(k)=0\) implies \(\phi(y)>0\) for \(y>k\). Next, we provide a verification theorem. **Theorem 4.2**.: _Assume (A1). Then, \(v_{\alpha}(x_{1},x_{2})=x_{1}w_{\alpha}(x_{2}/x_{1})=V_{\alpha}(x_{1},x_{2})\), \(\alpha=0,1\). Let \(D=\{(x_{1},x_{2},1):\ x_{2}>kx_{1}\}\). Let \(\tau^{*}=\inf\{t:\ (X_{t}^{1},X_{t}^{2},\alpha_{t})\not\in D\}\). Then \(\tau^{*}\) is optimal._ Proof.: The proof is similar to that of [6, Theorem 2]. We only sketch the main steps for the sake of completeness. First, for any admissible stopping time \(\tau\), following Dynkin's formula, we have \[v_{\alpha}(x_{1},x_{2})\geq Ee^{-\rho\tau}v_{\alpha_{\tau}}(X_{\tau}^{1},X_{\tau}^{2})\geq Ee^{-\rho\tau}(\beta_{\text{s}}X_{\tau}^{1}-\beta_{\text{b}}X_{\tau}^{2})=J(x_{1},x_{2},\alpha,\tau).\] So, \(v_{\alpha}(x_{1},x_{2})\geq V_{\alpha}(x_{1},x_{2})\). The equality holds when \(\tau=\tau^{*}\). Hence, \(v_{\alpha}(x_{1},x_{2})=J(x_{1},x_{2},\alpha,\tau^{*})=V_{\alpha}(x_{1},x_{2})\). ## 5 Asymptotics of \(k=k(\lambda_{0},\lambda_{1})\) In this section, we consider the asymptotic behavior of \(k=k(\lambda_{0},\lambda_{1})\) as one of \(\lambda_{0}\) and \(\lambda_{1}\) goes to \(\infty\) with the other fixed. 
To facilitate the subsequent calculation, we regroup the terms in (21) for \(k\) as follows: \[k=\frac{(1+\eta)\delta_{1}\delta_{3}+\gamma_{2}[(a_{0}+\eta)(-\delta_{1})+(1-a_{0})(-\delta_{3})]}{(1+\eta)(\delta_{1}\delta_{3}+\gamma_{2})-\delta_{1}[(a_{1}+\eta)\gamma_{2}+(1-a_{1})]-\delta_{3}[(a_{1}+\eta)+\gamma_{2}(1-a_{1})]}\cdot\frac{\beta_{\rm s}}{\beta_{\rm b}}.\] #### Asymptotics of \(k\) as \(\lambda_{0}\to\infty\) We first consider the limit of \(k=k(\lambda_{0},\lambda_{1})\) as \(\lambda_{0}\to\infty\) with \(\lambda_{1}\) fixed. In this case, the mean time \(\alpha_{t}\) spends in state \(0\) is \(1/\lambda_{0}\), which goes to \(0\). In view of this, the limit of \(k\) should correspond to the threshold of unconstrained pairs selling. To validate this observation, we list all the terms in (21) that depend on \(\lambda_{0}\): \[\eta=\frac{\lambda_{0}}{\lambda_{1}},\ 1-a_{0}=\frac{\rho-\mu_{1}}{\rho-\mu_{1}+\lambda_{0}},\ 1-a_{1}=\frac{\rho-\mu_{2}}{\rho-\mu_{2}+\lambda_{0}},\ \gamma_{2}\approx\frac{\sqrt{\lambda_{0}}}{\sqrt{\sigma}},\ \mbox{and}\ \delta_{3}\approx-\frac{\sqrt{\lambda_{0}}}{\sqrt{\sigma}}.\] Therefore, we have \[\lim_{\lambda_{0}\to\infty}k=\lim_{\lambda_{0}\to\infty}\frac{-2\delta_{1}\lambda_{0}^{3/2}/(\lambda_{1}\sqrt{\sigma})+\mbox{lower order terms}}{2(1-\delta_{1})\lambda_{0}^{3/2}/(\lambda_{1}\sqrt{\sigma})+\mbox{lower order terms}}\cdot\frac{\beta_{\rm s}}{\beta_{\rm b}}=\frac{-\delta_{1}}{1-\delta_{1}}\cdot\frac{\beta_{\rm s}}{\beta_{\rm b}}=:k_{0}.\] To see the connection with the selling rule without constraints, we note that the associated (unconstrained) HJB equation has the form: \[\min\Big{\{}\rho w(y)-\mathcal{L}w(y),\ w(y)-\beta_{\rm s}+\beta_{\rm b}y\Big{\}}=0.\] Repeating our previous smooth-fit calculation yields exactly \(k=k_{0}\) obtained above. #### Asymptotics of \(k\) as \(\lambda_{1}\to\infty\) Similarly, we can consider the limit \(\lambda_{1}\to\infty\) with fixed \(\lambda_{0}\). Note that \(\delta_{3}\approx-\sqrt{\lambda_{1}}/\sqrt{\sigma}\) and \(\eta=\lambda_{0}/\lambda_{1}\) are the only two terms depending on \(\lambda_{1}\). It follows that \[\lim_{\lambda_{1}\to\infty}k=\frac{\delta_{1}-\gamma_{2}(1-a_{0})}{\delta_{1}-[a_{1}+\gamma_{2}(1-a_{1})]}\cdot\frac{\beta_{\rm s}}{\beta_{\rm b}}=\frac{-\delta_{1}+\gamma_{2}(1-a_{0})}{1-\delta_{1}+(\gamma_{2}-1)(1-a_{1})}\cdot\frac{\beta_{\rm s}}{\beta_{\rm b}}=:k_{1}.\] It is not difficult to show \(k_{1}>k_{0}\). (Actually, it is equivalent to the inequality in (28) that is proved in the Appendix.) Intuitively, this makes sense because one has to make trading easier (with larger \(k\)) when trading constraints are present. ## 6 Numerical Examples In this section, we consider the trading pair of Target Corp. (TGT) and Walmart Stores Inc. (WMT). The model is calibrated using the daily closing prices from 1985-1999. Let \(\mathbf{S}^{1}\)=WMT and \(\mathbf{S}^{2}\)=TGT. Using the traditional least squares method, we have \(\mu_{1}=0.2459\), \(\mu_{2}=0.2059\), \(\sigma_{11}=0.2943\), \(\sigma_{12}=0.0729\), \(\sigma_{21}=0.0729\), and \(\sigma_{22}=0.3112\). We take \(\rho=0.5\), \(\lambda_{0}=\lambda_{1}=10\), and \(K=0.001\). Using (21), we obtain \(k=0.7036\). We plot the corresponding \(w_{0}(y)\) and \(w_{1}(y)\) in Figure 1 and \(v_{i}(x_{1},x_{2})=x_{1}w_{i}(x_{2}/x_{1}),i=0,1\) in Figure 2. 

Figure 1: Functions \(w_{0}\) and \(w_{1}\)

Figure 2: Functions \(v_{0}\) and \(v_{1}\)
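The closed-form expression makes such computations immediate. As a quick illustration (not a routine from the paper), the following sketch evaluates \(k\) from the regrouped formula above together with the two limiting thresholds \(k_{0}\) and \(k_{1}\); the inputs \(\delta_{1},\delta_{3},\gamma_{2},a_{0},a_{1},\eta,\beta_{\rm s},\beta_{\rm b}\) are assumed to have been computed from the model parameters as defined earlier in the paper, and the numerical values below are purely illustrative (they are not the calibrated TGT/WMT quantities).

```python
def threshold_k(delta1, delta3, gamma2, a0, a1, eta, beta_s, beta_b):
    # regrouped form of (21): k = num/den * beta_s/beta_b
    num = (1 + eta)*delta1*delta3 + gamma2*((a0 + eta)*(-delta1) + (1 - a0)*(-delta3))
    den = ((1 + eta)*(delta1*delta3 + gamma2)
           - delta1*((a1 + eta)*gamma2 + (1 - a1))
           - delta3*((a1 + eta) + gamma2*(1 - a1)))
    return num/den * beta_s/beta_b

def k_limits(delta1, gamma2, a0, a1, beta_s, beta_b):
    k0 = -delta1/(1 - delta1) * beta_s/beta_b                      # limit as lambda_0 -> infinity
    k1 = ((-delta1 + gamma2*(1 - a0))
          / (1 - delta1 + (gamma2 - 1)*(1 - a1)) * beta_s/beta_b)  # limit as lambda_1 -> infinity
    return k0, k1

# illustrative inputs only (hypothetical values, not derived from the calibrated data)
delta1, delta3, gamma2 = -1.2, -15.0, 12.0
a0, a1, eta = 0.97, 0.97, 1.0
beta_s, beta_b = 0.999, 1.001

k = threshold_k(delta1, delta3, gamma2, a0, a1, eta, beta_s, beta_b)
k0, k1 = k_limits(delta1, gamma2, a0, a1, beta_s, beta_b)
print(k0, k, k1)   # for these illustrative inputs the output respects k0 < k < k1,
                   # in line with inequality (25) and the limits of Section 5
```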
Asymptotics of \(k(\lambda_{0},\lambda_{1})\). Next, we examine the asymptotics of \(k=k(\lambda_{0},\lambda_{1})\) as one of \(\lambda_{0}\) and \(\lambda_{1}\) goes to \(\infty\) with the other fixed (at 10). The corresponding graphs are plotted in Figure 3 along with \(y=k_{0}\) and \(y=k_{1}\). In addition, we give the 2D graph of \(k(\lambda_{0},\lambda_{1})\) in Figure 4. Dependence of \(k\) on various parameters. Here, we fix \(\lambda_{0}=\lambda_{1}=10\). We vary one parameter at a time and examine the dependence of \(k\) on \(\mu_{i},\sigma_{ij},i,j=1,2\), \(K\), and \(\rho\), respectively. First, we vary \(\mu_{i},i=1,2\). Recall that we close the position by selling \(X^{1}\) and buying back \(X^{2}\) on \([0,k]\) in the state \(\alpha=1\). A larger \(\mu_{i}\) implies greater growth potential in \(X^{i}\). It can be seen in Table 1 that \(k\) decreases in \(\mu_{1}\), leading to fewer selling opportunities, and \(k\) increases in \(\mu_{2}\), leading to more selling opportunities. This is because a larger \(\mu_{2}\) and a smaller \(\mu_{1}\) encourage an early exit. Next, we vary \(\sigma_{11}\) and \(\sigma_{22}\). Larger volatility leads to a bigger room for the prices to move. This is associated with a smaller selling zone and therefore smaller \(k\). Then, we vary \(\sigma_{12}\) (\(=\sigma_{21}\)). Increasing these parameters makes the two stocks more related to each other, which leads to a larger selling zone as shown in Table 3. Finally, we vary the discount rate \(\rho\) and the transaction fee \(K\). Larger \(\rho\) encourages selling early, which translates to a bigger selling zone as can be seen in Table 4. Additionally, larger \(K\) sets up a higher barrier for selling, which leads to a smaller selling zone. ## 7 Conclusion The focus of this paper is on pairs selling with limited opportunities, and its main goal is to derive an optimal policy in closed form, which is desirable for practical applications. It would be intriguing to extend the results of this study to models that incorporate more practical considerations, such as large block selling, where intensive trading may affect trading windows. Overall, this paper contributes to the understanding of pairs selling and provides insights into developing optimal policies that can be applied in real-world scenarios. There is potential for future research to expand on these findings and explore more complex models that better capture real-world dynamics. ## Appendix In this appendix, we provide the proof of Lemma 3.1. _Proof of Lemma 3.1_. We first work on \(C_{1}\) and \(C_{3}\). In view of (22), \(C_{1}>0\) and \(C_{3}>0\) are equivalent to \[\frac{-\delta_{1}}{1-\delta_{1}}\cdot\frac{\beta_{s}}{\beta_{b}}<k<\frac{-\delta_{3}}{1-\delta_{3}}\cdot\frac{\beta_{s}}{\beta_{b}}. \tag{25}\] To simplify notation, let \(b_{0}=1-a_{0}\) and \(b_{1}=1-a_{1}\). Note that \(0<b_{0}<1\) and \(0<b_{1}<1\). Using this notation, we rewrite \(k\) as \[k=\frac{(1+\eta)(-\delta_{1})(\gamma_{2}-\delta_{3})+b_{0}(\delta_{1}-\delta_{3})\gamma_{2}}{(1+\eta)(\gamma_{2}-\delta_{3})(1-\delta_{1})+b_{1}(\gamma_{2}-1)(\delta_{1}-\delta_{3})}\cdot\frac{\beta_{\rm s}}{\beta_{\rm b}}. 
\tag{26}\] The inequalities are equivalent to \[\frac{-\delta_{1}}{1-\delta_{1}}<\frac{(1+\eta)(-\delta_{1})(\gamma_{2}- \delta_{3})+b_{0}(\delta_{1}-\delta_{3})\gamma_{2}}{(1+\eta)(\gamma_{2}- \delta_{3})(1-\delta_{1})+b_{1}(\gamma_{2}-1)(\delta_{1}-\delta_{3})}<\frac{- \delta_{3}}{1-\delta_{3}}. \tag{27}\] ### First inequality of (27). The first inequality in (27) is equivalent to \[b_{1}(\gamma_{2}-1)(\delta_{1}-\delta_{3})(-\delta_{1})<b_{0}\gamma_{2}( \delta_{1}-\delta_{3})(1-\delta_{1})\iff b_{1}(\gamma_{2}-1)(-\delta_{1})<b_ {0}\gamma_{2}(1-\delta_{1}).\] The last inequality is equivalent to \[\frac{b_{1}}{b_{0}}<\frac{\gamma_{2}}{\gamma_{2}-1}\cdot\left(1-\frac{1}{\delta_{ 1}}\right)=\left(1+\frac{1}{\gamma_{2}-1}\right)\left(1-\frac{1}{\delta_{1}} \right). \tag{28}\] Note that \[b_{0}=\frac{\rho-\mu_{1}}{\rho+\lambda_{0}-\mu_{1}}\text{ and }b_{1}=\frac{\rho-\mu_ {2}}{\rho+\lambda_{0}-\mu_{2}}.\] We consider two cases: Case I (\(\mu_{1}\leq\mu_{2}\)) and Case II (\(\mu_{1}>\mu_{2}\)). **Case I**: If \(\mu_{1}\leq\mu_{2}\), then \[\frac{b_{1}}{b_{0}}=\frac{1+\frac{\lambda_{0}}{\rho-\mu_{1}}}{1+\frac{\lambda_ {0}}{\rho-\mu_{2}}}\leq 1;\] and the right hand side (28) is bigger than \(1\) since \(\gamma_{2}>1\) and \(\delta_{1}<0\). So the first inequality in (27) follows. **Case II**: If \(\mu_{1}>\mu_{2}\), then \[\frac{b_{1}}{b_{0}}=\frac{1+\frac{\lambda_{0}}{\rho-\mu_{1}}}{1+\frac{\lambda_ {0}}{\rho-\mu_{2}}}=\frac{(\rho+\lambda_{0}-\mu_{1})(\rho-\mu_{2})}{(\rho+ \lambda_{0}-\mu_{2})(\rho-\mu_{1})}>1.\] The previous simple argument no longer works. We need to elaborate the value of \(\left(1+\frac{1}{\gamma_{2}-1}\right)\left(1-\frac{1}{\delta_{1}}\right)\). To this end, note that \[\frac{\gamma_{2}}{\gamma_{2}-1} =\frac{1+\frac{\mu_{1}-\mu_{2}}{\sigma}+\sqrt{\left(1+\frac{\mu_ {1}-\mu_{2}}{\sigma}\right)^{2}+\frac{4(\rho+\lambda_{0}-\mu_{1})}{\sigma}}}{- 1+\frac{\mu_{1}-\mu_{2}}{\sigma}+\sqrt{\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma} \right)^{2}+\frac{4(\rho+\lambda_{0}-\mu_{1})}{\sigma}}}\] \[=\frac{\rho+\lambda_{0}+\frac{\sigma+\sigma\sqrt{B_{1}}-\mu_{1}- \mu_{2}}{2}}{\rho+\lambda_{0}-\mu_{2}},\] where \[B_{1}=\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2}+\frac{4(\rho+\lambda_ {0}-\mu_{1})}{\sigma}.\] Next we compute \[\frac{\delta_{1}-1}{\delta_{1}} =\frac{-1+\frac{\mu_{1}-\mu_{2}}{\sigma}-\sqrt{\left(1+\frac{\mu _{1}-\mu_{2}}{\sigma}\right)^{2}+\frac{4(\rho-\mu_{1})}{\sigma}}}{1+\frac{\mu _{1}-\mu_{2}}{\sigma}-\sqrt{\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2} +\frac{4(\rho-\mu_{1})}{\sigma}}}\] \[=\frac{1-\frac{\mu_{1}-\mu_{2}}{\sigma}+\sqrt{\left(1+\frac{\mu _{1}-\mu_{2}}{\sigma}\right)^{2}+\frac{4(\rho-\mu_{1})}{\sigma}}}{-1-\frac{\mu _{1}-\mu_{2}}{\sigma}+\sqrt{\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2} +\frac{4(\rho-\mu_{1})}{\sigma}}}\] \[=\frac{\rho+\frac{\sigma+\sigma\sqrt{B_{2}}-\mu_{1}-\mu_{2}}{2}}{ \rho-\mu_{1}},\] \begin{table} \begin{tabular}{l|l l l l l} \hline \(\rho\) & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 \\ \hline \(k\) & 0.5590 & 0.6541 & 0.7036 & 0.7358 & 0.7590 \\ \hline \(K\) & 0.0001 & 0.0005 & 0.001 & 0.002 & 0.003 \\ \hline \(k\) & 0.7049 & 0.7043 & 0.7036 & 0.7022 & 0.7008 \\ \hline \end{tabular} \end{table} Table 4: \(k\) **with varying \(\rho\) and \(K\)** where \[B_{2}=\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2}+\frac{4(\rho-\mu_{1})}{ \sigma}.\] Since \(\mu_{1}>\mu_{2}\), we have \[\sqrt{B_{1}}>1+\frac{\mu_{1}-\mu_{2}}{\sigma}\quad\text{and}\quad\sqrt{B_{2}}> 1+\frac{\mu_{1}-\mu_{2}}{\sigma}.\] This implies 
\[\frac{\sigma+\sigma\sqrt{B_{1}}-\mu_{1}-\mu_{2}}{2}>\sigma-\mu_{2},\quad\frac{ \sigma+\sigma\sqrt{B_{2}}-\mu_{1}-\mu_{2}}{2}>\sigma-\mu_{2},\] and \[\left(1+\frac{1}{\gamma_{2}-1}\right)\left(1-\frac{1}{\delta_{1}}\right) >\frac{\rho+\lambda_{0}+\sigma-\mu_{2}}{\rho+\lambda_{0}-\mu_{2}} \cdot\frac{\rho+\sigma-\mu_{2}}{\rho-\mu_{1}}\] \[>\frac{(\rho+\lambda_{0}-\mu_{1})(\rho-\mu_{2})}{(\rho+\lambda_{ 0}-\mu_{2})(\rho-\mu_{1})}.\] This is exactly what we need to show. Here we have used \[\rho+\lambda_{0}+\sigma-\mu_{2}>\rho+\lambda_{0}-\mu_{1}\text{ and }\rho+\sigma-\mu_{2}>\rho-\mu_{2},\] since \(\mu_{1}>\mu_{2}\) and \(\sigma>0\). ### Second inequality of (27). The second inequality in (27) is equivalent to \[(1+\eta)(-\delta_{1})(\gamma_{2}-\delta_{3})(1-\delta_{3})+b_{0} \gamma_{2}(\delta_{1}-\delta_{3})(1-\delta_{3})\] \[< (1+\eta)(\gamma_{2}-\delta_{3})(1-\delta_{1})(-\delta_{3})+b_{1} (\gamma_{2}-1)(\delta_{1}-\delta_{3})(-\delta_{3}).\] This is equivalent to \[b_{0}\gamma_{2}(1-\delta_{3})<(1+\eta)(\gamma_{2}-\delta_{3})+b_{1}(\gamma_{2 }-1)(-\delta_{3}).\] We will let \(a=-\delta_{3}>0\) and \(\gamma_{2}=1+\gamma\). Then, the above inequality is reduced to \[b_{0}(1+\gamma)(1+a)<(1+\eta)(1+\gamma+a)+b_{1}\gamma a.\] Since \(1+\gamma+a=(1+a)(1+\gamma)-a\gamma\), the above inequality is equivalent to \[(1+\eta-b_{1})a\gamma<(1+\eta-b_{0})(1+a)(1+\gamma)\quad\Longleftrightarrow \quad\frac{1+\eta-b_{1}}{1+\eta-b_{0}}<\frac{(1+a)(1+\gamma)}{a\gamma}.\] Then, we have \[\frac{1+\eta-b_{1}}{1+\eta-b_{0}} =\frac{1+\frac{\lambda_{0}}{\lambda_{1}}-\frac{\rho-\mu_{2}}{\rho +\lambda_{0}-\mu_{2}}}{1+\frac{\lambda_{0}}{\lambda_{1}}-\frac{\rho-\mu_{1}}{ \rho+\lambda_{0}-\mu_{1}}}=\frac{\frac{\lambda_{0}}{\lambda_{1}}+\frac{\lambda _{0}}{\rho+\lambda_{0}-\mu_{2}}}{\frac{\lambda_{0}}{\lambda_{1}}+\frac{\lambda _{0}}{\rho+\lambda_{0}-\mu_{1}}}\] \[=\frac{1+\frac{\lambda_{1}}{\rho+\lambda_{0}-\mu_{2}}}{1+\frac{ \lambda_{1}}{\rho+\lambda_{0}-\mu_{1}}}=\frac{(\rho+\lambda_{0}+\lambda_{1}- \mu_{2})(\rho+\lambda_{0}-\mu_{1})}{(\rho+\lambda_{0}+\lambda_{1}-\mu_{1})( \rho+\lambda_{0}-\mu_{2})},\] \[\frac{(1+a)(1+\gamma)}{a\gamma}=\left(1+\frac{1}{a}\right)\left(1+\frac{1}{\gamma} \right).\] Recall that \[a=-\delta_{3} =\frac{1}{2}\left[\sqrt{\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma} \right)^{2}+\frac{4(\rho+\lambda_{0}+\lambda_{1}-\mu_{1})}{\sigma}}-\left(1+ \frac{\mu_{1}-\mu_{2}}{\sigma}\right)\right],\] \[a+1 =\frac{1}{2}\left[\sqrt{\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma} \right)^{2}+\frac{4(\rho+\lambda_{0}+\lambda_{1}-\mu_{1})}{\sigma}}+\left(1- \frac{\mu_{1}-\mu_{2}}{\sigma}\right)\right].\] Simple calculation yields \[\frac{1+a}{a} =\frac{\sqrt{\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2}+ \frac{4(\rho+\lambda_{0}+\lambda_{1}-\mu_{1})}{\sigma}}+\left(1-\frac{\mu_{1 }-\mu_{2}}{\sigma}\right)}{\sqrt{\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma} \right)^{2}+\frac{4(\rho+\lambda_{0}+\lambda_{1}-\mu_{1})}{\sigma}-\left(1+ \frac{\mu_{1}-\mu_{2}}{\sigma}\right)}}\] \[=\frac{\left[\sqrt{\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right) ^{2}+\frac{4(\rho+\lambda_{0}+\lambda_{1}-\mu_{1})}{\sigma}}+1\right]^{2}- \left(\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2}}{\frac{4(\rho+\lambda_{0}+ \lambda_{1}-\mu_{1})}{\sigma}}\] \[=\frac{2+\frac{4(\rho+\lambda_{0}+\lambda_{1}-\mu_{1})+2\mu_{1}-2 \mu_{2}}{\sigma}+2\sqrt{B_{3}}}{\frac{4(\rho+\lambda_{0}+\lambda_{1}-\mu_{1}) }{\sigma}}\] \[=\frac{\rho+\lambda_{0}+\lambda_{1}-\frac{\mu_{1}+\mu_{2}}{2}+ \frac{1+\sqrt{B_{3}}}{2}\sigma}{\rho+\lambda_{0}+\lambda_{1}-\mu_{1}},\] where 
\[B_{3}=\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2}+\frac{4(\rho+\lambda_ {0}+\lambda_{1}-\mu_{1})}{\sigma}.\] Recall \(B_{1}=\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2}+\frac{4(\rho+\lambda_ {0}-\mu_{1})}{\sigma}\). We have \[\frac{\gamma+1}{\gamma} =\frac{\frac{\mu_{1}-\mu_{2}}{\sigma}+1+\sqrt{\left(\frac{\mu_{1} -\mu_{2}}{\sigma}+1\right)^{2}+\frac{4(\rho+\lambda_{0}-\mu_{1})}{\sigma}}}{ \frac{\mu_{1}-\mu_{2}}{\sigma}-1+\sqrt{\left(\frac{\mu_{1}-\mu_{2}}{\sigma}+1 \right)^{2}+\frac{4(\rho+\lambda_{0}-\mu_{1})}{\sigma}}}\] \[=\frac{\rho+\lambda_{0}-\frac{\mu_{1}+\mu_{2}}{2}+\frac{1+\sqrt {B_{1}}}{2}\sigma}{\rho+\lambda_{0}-\mu_{2}}.\] Note that \[\sqrt{B_{3}}>\left|1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right|\quad\text{and}\quad \sqrt{B_{1}}>\left|1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right|.\] This implies \[\frac{(1+\sqrt{B_{3}})\sigma}{2}-\frac{\mu_{1}+\mu_{2}}{2} >\frac{\sigma+|\sigma+\mu_{1}-\mu_{2}|-\mu_{1}-\mu_{2}}{2}\] \[=\begin{cases}\sigma-\mu_{2}&\sigma+\mu_{1}>\mu_{2},\\ -\mu_{1}&\sigma+\mu_{1}\leq\mu_{2}.\end{cases}\] and similarly \[\frac{(1+\sqrt{B_{1}})\sigma}{2}-\frac{\mu_{1}+\mu_{2}}{2}>\begin{cases}\sigma-\mu_ {2}&\sigma+\mu_{1}>\mu_{2},\\ -\mu_{1}&\sigma+\mu_{1}\leq\mu_{2}.\end{cases}\] This implies that if \(\sigma+\mu_{1}>\mu_{2}\) we have \[\frac{(1+a)(1+\gamma)}{a\gamma} >\frac{\rho+\lambda_{0}+\lambda_{1}-\mu_{2}+\sigma}{\rho+\lambda_ {0}+\lambda_{1}-\mu_{1}}\cdot\frac{\rho+\lambda_{0}-\mu_{2}+\sigma}{\rho+ \lambda_{0}-\mu_{2}}\] \[>\frac{(\rho+\lambda_{0}+\lambda_{1}-\mu_{2})(\rho+\lambda_{0}- \mu_{1})}{(\rho+\lambda_{0}+\lambda_{1}-\mu_{1})((\rho+\lambda_{0}-\mu_{2})}\] \[=\frac{1+\eta-b_{1}}{1+\eta-b_{0}}.\] Here we have used \(\sigma+\mu_{1}-\mu_{2}>0\) hence \(\sigma-\mu_{2}>-\mu_{1}\) and \[\rho+\lambda_{0}-\mu_{2}+\sigma>\rho+\lambda_{0}-\mu_{1}.\] If \(\sigma+\mu_{1}\leq\mu_{2}\), then \(\mu_{1}<\sigma+\mu_{1}\leq\mu_{2}\) and \[\rho+\lambda_{0}+\lambda_{1}-\mu_{1}>\rho+\lambda_{0}+\lambda_{1}-\mu_{2}\] and \[\frac{(1+a)(1+\gamma)}{a\gamma} >\frac{\rho+\lambda_{0}+\lambda_{1}-\mu_{1}}{\rho+\lambda_{0}+ \lambda_{1}-\mu_{1}}\cdot\frac{\rho+\lambda_{0}-\mu_{1}}{\rho+\lambda_{0}-\mu _{2}}\] \[>\frac{(\rho+\lambda_{0}+\lambda_{1}-\mu_{2})(\rho+\lambda_{0}- \mu_{1})}{(\rho+\lambda_{0}+\lambda_{1}-\mu_{1})((\rho+\lambda_{0}-\mu_{2})}\] \[=\frac{1+\eta-b_{1}}{1+\eta-b_{0}}.\] This exactly what we need to prove for the second inequality in (27). #### Proof of \(C_{2}>0\) Recall \(C_{2}\) given in (24). In that formula, the denominator is positive since \(\delta_{3}<\delta_{1}<0\), \(\gamma_{2}>1\), and \(0<a_{0},\ a_{1}<1\). \(C_{2}\) is positive if \[(1-a_{0})(\eta+a_{1})(1-\delta_{1})(-\delta_{3})>(1-a_{1})(\eta+a_{0})(1- \delta_{3})(-\delta_{1}).\] Since \(\delta_{1}<0\) and \(\delta_{3}<0\), this is equivalent to \[\frac{\eta+a_{1}}{1-a_{1}}\cdot\frac{-\delta_{1}}{1-\delta_{1}}>\frac{\eta+a_ {0}}{1-a_{0}}\cdot\frac{1-\delta_{3}}{-\delta_{3}}. \tag{29}\] We need to compute both sides and compare them. 
Note that \[\frac{\eta+a_{1}}{1-a_{1}} =\frac{\frac{\lambda_{0}}{\lambda_{1}}+\frac{\lambda_{0}}{\rho+ \lambda_{0}-\mu_{2}}}{\frac{\rho-\mu_{2}}{\rho+\lambda_{0}-\mu_{2}}}=\frac{ \lambda_{0}}{\lambda_{1}}\cdot\frac{\rho+\lambda_{0}+\lambda_{1}-\mu_{2}}{ \rho-\mu_{2}},\] \[\frac{\eta+a_{0}}{1-a_{0}} =\frac{\frac{\lambda_{0}}{\lambda_{1}}+\frac{\lambda_{0}}{\rho+ \lambda_{0}-\mu_{1}}}{\frac{\rho-\mu_{1}}{\rho+\lambda_{0}-\mu_{1}}}=\frac{ \lambda_{0}}{\lambda_{1}}\cdot\frac{\rho+\lambda_{0}+\lambda_{1}-\mu_{1}}{ \rho-\mu_{1}}.\] Then (29) is reduced to \[\frac{\rho+\lambda_{0}+\lambda_{1}-\mu_{2}}{\rho-\mu_{2}}\cdot\frac{1-\delta_{1}}{ -\delta_{1}}>\frac{\rho+\lambda_{0}+\lambda_{1}-\mu_{1}}{\rho-\mu_{1}}\cdot \frac{1-\delta_{3}}{-\delta_{3}}. \tag{30}\] Recall that \[\frac{1-\delta_{1}}{-\delta_{1}}=\frac{\rho-\frac{\mu_{1}+\mu_{2}}{2}+\frac{1+ \sqrt{B_{2}}}{2}\sigma}{\rho-\mu_{1}},\text{ and }\frac{1-\delta_{3}}{-\delta_{3}}=\frac{\rho+ \lambda_{0}+\lambda_{1}-\frac{\mu_{1}+\mu_{2}}{2}+\frac{1+\sqrt{B_{3}}}{2} \sigma}{\rho+\lambda_{0}+\lambda_{1}-\mu_{1}}.\] Then (30) is reduced to \[\frac{\rho+\lambda_{0}+\lambda_{1}-\mu_{2}}{\rho-\mu_{2}}\left(\rho-\frac{\mu _{1}+\mu_{2}}{2}+\frac{1+\sqrt{B_{2}}}{2}\sigma\right)>\rho+\lambda_{0}+ \lambda_{1}-\frac{\mu_{1}+\mu_{2}}{2}+\frac{1+\sqrt{B_{3}}}{2}\sigma.\] This is equivalent to \[(\rho+\lambda_{0}+\lambda_{1}-\mu_{2})\left(\rho-\frac{\mu_{1}+ \mu_{2}-\sigma}{2}\right)-\left(\rho+\lambda_{0}+\lambda_{1}-\frac{\mu_{1}+ \mu_{2}-\sigma}{2}\right)(\rho-\mu_{2})\] \[> (\rho-\mu_{2})\frac{\sqrt{B_{3}}}{2}\sigma-(\rho+\lambda_{0}+ \lambda_{1}-\mu_{2})\frac{\sqrt{B_{2}}}{2}\sigma\] \[= \frac{\sigma}{2}(\rho-\mu_{2})(\sqrt{B_{3}}-\sqrt{B_{2}})-(\lambda _{0}+\lambda_{1})\frac{\sqrt{B_{2}}}{2}\sigma.\] We can simplify the above inequality and it is equivalent to \[(\lambda_{0}+\lambda_{1})(1+\frac{\mu_{2}-\mu_{1}}{\sigma}+\sqrt{B_{2}})>(\rho -\mu_{2})(\sqrt{B_{3}}-\sqrt{B_{2}}). \tag{31}\] Note that \[\sqrt{B_{3}}-\sqrt{B_{2}}=\frac{B_{3}-B_{2}}{\sqrt{B_{3}}+\sqrt{B_{2}}}=\frac {4(\lambda_{0}+\lambda_{1})}{\sigma}\cdot\frac{1}{\sqrt{B_{3}}+\sqrt{B_{2}}}.\] We can reduce (31) to \[\left(1+\frac{\mu_{2}-\mu_{1}}{\sigma}+\sqrt{B_{2}}\right)(\sqrt{B_{3}}+\sqrt{ B_{2}})>\frac{4(\rho-\mu_{2})}{\sigma}.\] This is equivalent to \[\left(1+\frac{\mu_{2}-\mu_{1}}{\sigma}+\sqrt{B_{2}}\right)\sqrt{B_{3}}+\left( 1+\frac{\mu_{2}-\mu_{1}}{\sigma}\right)\sqrt{B_{2}}>\frac{4(\rho-\mu_{2})}{ \sigma}-B_{2}.\] Then we note that \[\frac{4(\rho-\mu_{2})}{\sigma}-B_{2}=\frac{4(\rho-\mu_{2})}{\sigma }-\left(1+\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2}-\frac{4(\rho-\mu_{1})}{\sigma}\] \[= \frac{4(\mu_{1}-\mu_{2})}{\sigma}-\left(1+\frac{\mu_{1}-\mu_{2}}{ \sigma}\right)^{2}=-\left(1-\frac{\mu_{1}-\mu_{2}}{\sigma}\right)^{2}=-\left( 1+\frac{\mu_{2}-\mu_{1}}{\sigma}\right)^{2}.\] The inequality is equivalent to \[\left(1+\frac{\mu_{2}-\mu_{1}}{\sigma}+\sqrt{B_{2}}\right)\sqrt{B_{3}}+\left( 1+\frac{\mu_{2}-\mu_{1}}{\sigma}\right)\sqrt{B_{2}}>-\left(1+\frac{\mu_{2}-\mu _{1}}{\sigma}\right)^{2}.\] Moving the term \(-\left(1+\frac{\mu_{2}-\mu_{1}}{\sigma}\right)^{2}\) to the left-hand side and we obtain \[\left(1+\frac{\mu_{2}-\mu_{1}}{\sigma}+\sqrt{B_{2}}\right)\left(1+\frac{\mu_{2} -\mu_{1}}{\sigma}+\sqrt{B_{3}}\right)>0. 
\tag{32}\] Then we apply \[\sqrt{B_{2}}>|1+\frac{\mu_{1}-\mu_{2}}{\sigma}|\quad\text{and}\quad\sqrt{B_{3}} >|1+\frac{\mu_{1}-\mu_{2}}{\sigma}|\] to obtain \[1+\frac{\mu_{2}-\mu_{1}}{\sigma}+\sqrt{B_{2}}>\begin{cases}2&\text{if }\sigma>\mu_{2}-\mu_{1},\\ \frac{2(\mu_{2}-\mu_{1})}{\sigma}\geq 2&\text{if }\sigma\leq\mu_{2}-\mu_{1}. \end{cases}\] Similarly, \(1+\frac{\mu_{2}-\mu_{1}}{\sigma}+\sqrt{B_{3}}>2\). So (32) follows. Therefore, \(C_{2}>0\). \(\Box\) ## Acknowledgments This work is supported jointly by the Australian Research Council Discovery Project DP200101550, the Natural Science Foundation of China 11831010 and 61961160732, the Natural Science Foundation of Shandong Province ZR2019ZD42 and the Taishan Scholars Climbing Program of Shandong TSPD20210302.
2304.03032
Laplace transform of the $x-y$ symplectic transformation formula in Topological Recursion
The functional relation coming from the $x-y$ symplectic transformation of Topological Recursion has a lot of applications, for instance it is the higher order moment-cumulant relation in free probability or can be used to compute intersection numbers on the moduli space of complex curves. We derive the Laplace transform of this functional relation, which has a very nice and compact form as a formal power series in $\hbar$. We apply the Laplace transformed formula to the Airy curve and the Lambert curve.
Alexander Hock
2023-04-06T12:43:01Z
http://arxiv.org/abs/2304.03032v1
# Laplace transform of the \(x-y\) symplectic transformation formula in topological recursion ###### Abstract. The functional relation coming from the \(x-y\) symplectic transformation of Topological Recursion has a lot of applications, for instance it is the higher order moment-cumulant relation in free probability or can be used to compute intersection numbers on the moduli space of complex curves. We derive the Laplace transform of this functional relation, which has a very nice and compact form as a formal power series in \(\hbar\). We apply the Laplace transformed formula to the Airy curve and the Lambert curve. ## 1. Introduction Topological Recursion (TR) is a universal structure which generates from the so-called spectral curve a family of multi-differentials \(\omega_{g,n}\) on the spectral curve (Riemann surface). TR turns out to be related to seemingly different areas of mathematics and mathematical physics. To give an incomplete list, it is related to volumes of moduli spaces, Hurwitz numbers, intersection numbers of moduli spaces, Gromov-Witten theory, enumerative combinatorics, random matrix theory, quantum field theory on noncommutative spaces, free probability and quantum knot theory [1]. Knowing properties which hold in general for any spectral curve can therefore give new insight into the applications of TR. For instance, the multi-differentials \(\omega_{g,n}\) are symmetric, but are generated via a non-symmetric formula. In almost all applications this symmetry is obvious from the beginning. Transforming the spectral curve under a _symplectic transformation_ can leave the \(\omega_{g,n}\) invariant. From this one can deduce that certain models, for instance in random matrix theory, are equivalent. However, there is a very specific symplectic transformation, the \(x-y\) _symplectic transformation_, which actually leaves the spectral curve invariant but generates completely different multi-differentials. Recently, the relation between these two different families of multi-differentials was found in its simplest representation [10, 1]. The result is a functional relation which has already generalised the higher order moment-cumulant relation in free probability [1]. Equivalently, this functional relation relates fully simple and ordinary maps in enumerative combinatorics [1, 2]. Due to free probability, it can be understood as the quantised version of a moment-cumulant relation. The \(x-y\) symplectic transformation also reproves known results for intersection numbers on the moduli space of complex curves \(\overline{\mathcal{M}}_{g,n}\) and might give new algorithms or closed formulas to compute them. Another important tool in TR is the Laplace transformation. In [12, 13], it was shown that the Laplace transform of the \(\omega_{g,n}\) has a direct interpretation in terms of intersection numbers on \(\overline{\mathcal{M}}_{g,n}\). As an application of TR in Gromov-Witten theory, or more generally, in topological string theory, the Laplace transform has an interpretation as _mirror symmetry_. More precisely, the \(A\)-model and \(B\)-model are two different approaches to studying the geometry of Calabi-Yau manifolds in topological string theory. They are related through mirror symmetry. Based on observations in [1], the \(A\)-model and \(B\)-model are related to TR and the mirror map has the interpretation of the Laplace transform (see also [1, 10] for details). 
Consequently, it is completely natural to bring together the functional relation of the \(x-y\) symplectic transformation and the Laplace transform. Actually, we observe that the functional relation behaves very well under the Laplace transformation. After the Laplace transform, the differential operator in the functional relation becomes a multiplication, which sums up perfectly in terms of formal power series. The Laplace transform of \(\omega_{g,n}\) and of its not-necessarily-connected sibling is given in Corollary 2.9 and Proposition 2.6, respectively. The functional relation is not valid in general for spectral curves with logarithmic singularities. However, we consider the example of the Lambert curve [1], which encodes Hurwitz numbers. After a small transformation of this curve, we successfully apply the Laplace transform formula of the \(x-y\) symplectic transformation to compute Hurwitz numbers. This is a first step towards generalising the \(x-y\) symplectic transformation to spectral curves with logarithmic singularities, which will have applications in topological string theory and quantum knot theory. **Acknowledgement**.: This work was supported through the Walter-Benjamin fellowship1. Footnote 1: “Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 465029630” ## 2. Formula for \(x-y\) Symplectic Transformation in Topological Recursion We start by recapping the theory of TR and its properties under the \(x-y\) symplectic transformation in more detail. TR is an algorithm which computes recursively in the negative Euler characteristic \(-\chi=2g+n-2\) from some initial data, the so-called _spectral curve_, a family of multi-differentials denoted by \(\omega_{g,n}\), which are also commonly called _correlators_. More precisely, the spectral curve is the tuple \((\Sigma,x,y,B)\), where \(\Sigma\) is a compact Riemann surface and \(x,y:\Sigma\to\mathbb{C}\) are meromorphic functions with simple and distinct ramification points on \(\Sigma\). The multi-differentials \(\omega_{g,n}\) live on \(\Sigma^{n}\) with \(\omega_{0,1}(z)=y(z)\,dx(z)\) and \(\omega_{0,2}=B\), where \(B\) is symmetric with double pole on the diagonal and no residue, bi-residue \(1\) and normalised such that the \(A\)-periods vanish. In particular for \(\Sigma=\mathbb{P}^{1}\) the complex projective line (Riemann sphere), the bilinear differential is \(B(z_{1},z_{2})=\frac{dz_{1}\,dz_{2}}{(z_{1}-z_{2})^{2}}\). Then for negative Euler characteristic \(\chi<0\), all \(\omega_{g,n}\) are defined via [1] \[\omega_{g,n+1}(I,z):=\sum_{\beta_{i}}\underset{q\to\beta_{i}}{\mathrm{Res}}\,K_{i}(z,q)\bigg{(}\omega_{g-1,n+2}(I,q,\sigma_{i}(q))+\sum_{\begin{subarray}{c}g_{1}+g_{2}=g\\ I_{1}\uplus I_{2}=I\end{subarray}}^{\prime}\omega_{g_{1},|I_{1}|+1}(I_{1},q)\,\omega_{g_{2},|I_{2}|+1}(I_{2},\sigma_{i}(q))\bigg{)}, \tag{2.1}\] where \(I=\{z_{1},...,z_{n}\}\), the sum runs over the ramification points \(\beta_{i}\) of \(x\), \(\sigma_{i}\) denotes the local Galois involution near \(\beta_{i}\) (i.e. \(x(\sigma_{i}(q))=x(q)\) with \(\sigma_{i}\neq\mathrm{id}\)), \(K_{i}(z,q)\) is the recursion kernel built from \(\omega_{0,1}\) and \(\omega_{0,2}\), and the primed sum excludes all terms containing \(\omega_{0,1}\). 
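To make the recursion concrete, here is a small sketch (not taken from the paper) that carries out the first two steps of (2.1) for the curve \(x(z)=z^{2}/2\), \(y(z)=z\) on \(\mathbb{P}^{1}\) — the Airy curve that reappears in Sec. 3. The explicit kernel below and its overall sign are one common choice of normalisation (conventions in the literature differ); it is fixed here so that \(\langle\psi_{1}\rangle_{1,1}=1/24\) comes out positive.

```python
# First TR steps for the Airy curve x(z)=z^2/2, y(z)=z: the only ramification point
# of x is beta = 0, the local Galois involution is sigma(z) = -z, and we work with
# the coefficient functions W_{g,n}, i.e. omega_{g,n} = W_{g,n} dz_1 ... dz_n.
import sympy as sp

z, q, z1, z2, z3 = sp.symbols('z q z1 z2 z3')

B = lambda a, b: 1/(a - b)**2      # coefficient of omega_{0,2}(a,b) = da db/(a-b)^2
sigma = lambda t: -t               # local Galois involution at beta = 0
dsigma = -1                        # d sigma / dq, picked up when an argument is sigma(q)

# kernel coefficient, one common normalisation:
#   K(z,q) = (int_{sigma(q)}^{q} B(z, .)) / ( 2 (y(sigma(q)) - y(q)) dx(q) )
x, y = q**2/2, q
K = (1/(z - q) - 1/(z + q)) / (2*(sigma(q) - q)*sp.diff(x, q))

# omega_{0,3}: only the "splitting" term of (2.1) contributes
integrand03 = K.subs(z, z1) * (B(q, z2)*dsigma*B(sigma(q), z3)
                               + B(q, z3)*dsigma*B(sigma(q), z2))
print(sp.residue(integrand03, q, 0))   # 1/(z1^2 z2^2 z3^2)

# omega_{1,1}: only the omega_{0,2}(q, sigma(q)) term of (2.1) contributes
integrand11 = K * dsigma * B(q, sigma(q))
print(sp.residue(integrand11, q, 0))   # 1/(8 z^4), i.e. <psi_1>_{1,1} = 1/24
```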
In terms of the variable \(x\), we also introduce the correlators and their primitives, \[W_{g,n}(x_{1}(z_{1}),...,x_{n}(z_{n}))\,dx_{1}(z_{1})...dx_{n}(z_{n}):=\omega_{g,n}(z_{1},...,z_{n}),\qquad W_{n}(x_{1},...,x_{n}):=\sum_{g=0}^{\infty}\hbar^{2g+n-2}W_{g,n}(x_{1},...,x_{n}),\] \[\Phi_{g,n}(x_{1}(z_{1}),...,x_{n}(z_{n})):=\int_{o}^{z_{1}}...\int_{o}^{z_{n}}\omega_{g,n}(z_{1},...,z_{n}),\] together with \[\Phi_{n}(x_{1},...,x_{n}):=\sum_{g=0}^{\infty}\hbar^{2g+n-2}\Phi_{g,n}(x_{1},...,x_{n}). \tag{2.6}\] **Example 2.1**.: _For \(\Sigma=\mathbb{P}^{1}\) and \(x\) unramified, i.e. \(x\) has no ramification point, all \(W_{g,n}=0\) for \(-\chi=2g+n-2>0\). The correlators with non-negative Euler characteristic are_ \[W_{0,1}(x(z))= y(z),\] \[W_{0,2}(x_{1}(z_{1}),x_{2}(z_{2}))= \frac{1}{x_{1}^{\prime}(z_{1})x_{2}^{\prime}(z_{2})(z_{1}-z_{2})^{2}}.\] ### Symplectic Transformation Symplectic transformations play a very important role in the theory of TR. All transformations which leave the symplectic form \[|dx\wedge dy|\] invariant are generated by the three transformations: * \((x,y)\rightarrow(x,y+R(x))\), where \(R(x)\) is a rational function in \(x\) * \((x,y)\rightarrow(\frac{ax+b}{cx+d},\frac{(cx+d)^{2}}{ad-bc}y)\) with \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in SL_{2}(\mathbb{C})\) * \((x,y)\rightarrow(y,x)\). It was proved in [1, 1] that for meromorphic \(x,y\) the **free energies are invariant (up to a known normalisation constant) under all symplectic transformations**. The same is **not true for \(\omega_{g,n}\)**! Note, however, that the first two symplectic transformations listed above do leave \(\omega_{g,n}\) invariant for \(\chi<0\). This happens due to the fact that \(x\) and \(y\) enter the recursion in (2.1) only through the recursion kernel and the ramification points \(\beta_{i}\) of \(x\), which are both invariant. New insight was achieved recently on the third symplectic transformation, the so-called \(x-y\)_symplectic transformation_. In a series of papers [1, 1, 2, 3, 4], Bychkov et al. derived an involved formula for two different sets of connected moments coming from topological partition functions. This functional relation between two families of moments was proved to be the \(x-y\) symplectic transformation for a certain type of matrix models [1]. Due to the relation between random matrix theory and free probability, this functional relation was shown [1] to give the moment-cumulant relation in free probability. A fairly simple version of the same functional relation for genus \(g=0\) was derived in [10] with a completely different technique via the loop insertion operator, and for all genera in [10]. Shortly afterwards, Alexandrov et al. [1] showed that the simplified version from [10] is indeed the symplectic transformation for any meromorphic \(x,y\). To formulate the functional relation, we define for the spectral curve \((\Sigma,y,x,B)\) (now with \(x\) and \(y\) interchanged) the corresponding multi-differentials, denoted by \(\omega_{g,n}^{\vee}\). More precisely, \(\omega_{0,1}^{\vee}(z)=x(z)dy(z)\), \(\omega_{0,2}^{\vee}(z_{1},z_{2})=B(z_{1},z_{2})\), and all \(\omega_{g,n}^{\vee}\) are defined via (2.1) with the roles of \(x\) and \(y\) interchanged. This means that \(\omega_{g,n}^{\vee}\) has poles located only at the ramification points of \(y\) for \(\chi<0\). 
Similarly, we define in this setting \[W_{g,n}^{\vee}(y_{1}(z_{1}),...,y_{n}(z_{n}))dy_{1}(z_{1})...dy_{ n}(z_{n}):= \omega_{g,n}^{\vee}(z_{1},...,z_{n}) \tag{2.7}\] \[W_{n}^{\vee}(y_{1},...,y_{n}):= \sum_{g=0}^{\infty}\hbar^{2g+n-2}W_{g,n}^{\vee}(y_{1},...,y_{n})\] (2.8) \[\Phi_{g,n}^{\vee}(y_{1}(z_{1}),...,y_{n}(z_{n})):= \int_{o}^{z_{1}}...\int_{o}^{z_{n}}\omega_{g,n}^{\vee}(z_{1},..., z_{n})\] (2.9) \[\Phi_{n}^{\vee}(y_{1},...,y_{n}):= \sum_{g=0}^{\infty}\hbar^{2g+n-2}\Phi_{g,n}^{\vee}(y_{1},...,y_{n }). \tag{2.10}\] Next, we need the following graphs describing the structure of the functional relation: **Definition 2.2**.: _Let \(\mathcal{G}_{n}\) be the set of connected bicoloured graph \(\Gamma\) with \(n\)\(\bigcirc\)-vertices and \(\bullet\)-vertices, such that the following holds:_ * _the_ \(\bigcirc\)_-vertices are labelled from_ \(1,...,n\)__ * _edges are only connecting_ \(\bullet\)_-vertices with_ \(\bigcirc\)_-vertices_ * \(\bullet\)_-vertices have valence_ \(\geq 2\)_._ _For a graph \(\Gamma\in\mathcal{G}_{n}\), let \(r_{i}(\Gamma)\) be the valence of the \(i^{\text{th}}\)\(\bigcirc\)-vertex._ _Let \(I\subset\{1,...,n\}\) be the set associated to a \(\bullet\)-vertex, where \(I\) is the set labellings of \(\bigcirc\)-vertices connected to this \(\bullet\)-vertex. Let \(\mathcal{I}(\Gamma)\) be the set of all sets \(I\) for a given graph \(\Gamma\in\mathcal{G}_{n}\)._ The automorphism group \(\operatorname{Aut}(\Gamma)\) consists of permutations of edges which preserve the structure of \(\Gamma\) considering the labellings. A graph \(\Gamma\in\mathcal{G}_{n}\) is up to automorphisms completely characterised by the set \(\mathcal{I}(\Gamma)\). Adapted to the definition and functions above, the functional relation reads in a very compact form: **Theorem 2.3** ([12, 1]).: _Let \(x,y\) be two meromorphic functions on a compact Riemann surface with simple distinct ramification points, which generates via TR (2.1) the multi-differentials \(\omega_{g,n}\) and \(\omega_{g,n}^{\vee}\) as above. Let \(\Phi_{n},\Phi_{n}^{\vee},W_{n},W_{n}^{\vee}\) be as above defined from \(\omega_{g,n}\) and \(\omega_{g,n}^{\vee}\). For \(I=\{i_{1},...,i_{n}\}\), let_ \[\hat{\Phi}_{n}^{\vee}(y_{I};\hbar,u_{I}):= \sum_{(\varepsilon_{i_{1}},...,\varepsilon_{i_{n}})\in\{1,-1\}^{n}} \hskip-14.226378pt(-1)^{\#(\varepsilon_{i}=-1)}\Phi_{n}^{\vee}\bigg{(}y_{i_ {1}}+\varepsilon_{i_{1}}\frac{\hbar u_{i_{1}}}{2},...,y_{i_{n}}+\varepsilon_{ i_{n}}\frac{\hbar u_{i_{n}}}{2}\bigg{)} \tag{2.11}\] _and for \(I=\{i,i\}\) (and genus zero spectral curve) the special case_ \[\hat{\Phi}_{n}^{\vee}(y_{I};\hbar,u_{I}):= \underset{(\varepsilon_{1},\varepsilon_{2})\in\{1,-1\}^{2}}{\sum} (-1)^{\#(\varepsilon_{i}=-1)}\bigg{[}\Phi_{2}^{\vee}\bigg{(}y_{i}+\varepsilon_ {1}\frac{\hbar u_{i}}{2},y_{j}+\varepsilon_{2}\frac{\hbar u_{i}}{2}\bigg{)} \tag{2.12}\] \[-\log\bigg{(}y_{i}+\varepsilon_{1}\frac{\hbar u_{i}}{2}-y_{j}- \varepsilon_{2}\frac{\hbar u_{i}}{2}\bigg{)}\bigg{]}_{j=i}\] _(for higher genus spectral curves the logarithm has to be replaces by the appropriate Theta-function). Let further be \(\hat{O}\) a differential operator acting from the left_ \[\hat{O}(y_{i}):= \sum_{m}\bigg{(}-\frac{\partial}{\partial x_{i}}\bigg{)}^{m} \bigg{(}-\frac{dy_{i}}{dx_{i}}\bigg{)}[u_{i}^{m}]\frac{\exp\bigg{(}\hat{\Phi}_ {1}^{\vee}(y_{i};\hbar,u_{i})-x_{i}u_{i}\bigg{)}}{\hbar u_{i}}. 
\tag{2.13}\] _Then, the functional relation holds as a formal expansion in \(\hbar\)_ \[\boxed{W_{n}(x_{1}(z_{1}),...,x_{n}(z_{n}))=\sum_{\Gamma\in\mathcal{G}_{n}} \frac{1}{|\text{\rm Aut}(\Gamma)|}\prod_{i=1}^{n}\hat{O}(y_{i}(z_{i}))\prod_{I \in\mathcal{I}(\Gamma)}\hat{\Phi}_{n}^{\vee}(y_{I}(z_{I});\hbar,u_{I}).}\] Proof.: The functional relation stated in the theorem is slightly different from the one in [10, 1]. First of all, we have changed the role of \(x\) and \(y\) comparing to [10]. Next, as a formal expansion in \(\hbar\), the weight function \(\hat{\Phi}_{n}^{\vee}(y_{I};\hbar,u_{I})\) can be written as \[\hat{\Phi}_{n}^{\vee}(y_{I};\hbar,u_{I})= \underset{(\varepsilon_{1},\ldots,\varepsilon_{in})\in\{1,-1\}^{ n}}{\sum}(-1)^{\#(\varepsilon_{i}=-1)}\Phi_{n}^{\vee}\bigg{(}y_{i_{1}}+ \varepsilon_{i_{1}}\frac{\hbar u_{i_{1}}}{2},...,y_{i_{n}}+\varepsilon_{i_{n}} \frac{\hbar u_{i_{n}}}{2}\bigg{)}\] \[= \bigg{(}\prod_{i\in I}\hbar u_{i}S(\hbar u_{i}\partial_{x_{i}}) \bigg{)}\big{(}W_{n}(x_{I})\bigg{)},\] where \(S(u)=\frac{e^{u/2}-e^{-u/2}}{u}\), see [10] for more details. Expanding the lhs and the rhs in \(\hbar\) gives for each coefficient the relation stated in [10, 1]. Now, we still want to understand more properties of the functional relation of Theorem 2.3. **Example 2.4**.: _Consider Example 2.1 with \(x,y\) interchanged, i.e. \(y\) is unramified. All \(W_{g,n}^{\vee}=0\) for \(2g+n-2>0\). Therefore, all \(\hat{\Phi}_{n}^{\vee}(y_{I}(z_{I});\hbar,u_{I})=0\) for \(n>2\). Let \(\mathcal{G}_{n}^{2}\subset\mathcal{G}_{n}\) be the set of graphs defined in Definition 2.2 with just 2-valent \(\bullet\)-vertices, then_ \[W_{n}(x_{1}(z_{1}),...,x_{n}(z_{n}))=\sum_{\Gamma\in\mathcal{G}_{n}^{2}}\frac{ 1}{|\text{\rm Aut}(\Gamma)|}\prod_{i=1}^{n}\hat{O}(y_{i}(z_{i}))\prod_{I\in \mathcal{I}(\Gamma)}\hat{\Phi}_{2}^{\vee}(y_{I}(z_{I});\hbar,u_{I}).\] _Note that all diagonal \(\hat{\Phi}_{2}^{\vee}(y_{i}(z_{i}),y_{i}(z_{i});\hbar,u_{i},u_{i})\) as defined in (2.12) can be included in the exponential of the Operator \(\hat{O}\) defined in (2.13). Also the other \(\hat{\Phi}_{2}^{\vee}(y_{i}(z_{i}),y_{j}(z_{j});\hbar,u_{i},u_{j})\) with \(i\neq j\) can be collected as an exponential such that multiple \(\bullet\)-vertices connecting the same \(\bigcirc\)-vertices are generated through expansion of this exponential (this was already discussed in [1, SS7]). The symmetry factor becomes redundant. Thus, we have the alternative form_ \[W_{n}(x_{1}(z_{1}),...,x_{n}(z_{n}))=\prod_{i=1}^{n}\hat{O}^{2}(y_{i}(z_{i})) \sum_{\Gamma\in\tilde{\mathcal{G}}_{n}^{2}}\prod_{I\in\mathcal{I}(\Gamma)} \bigg{(}e^{\hat{\Phi}_{2}^{\vee}(y_{I}(z_{I});\hbar,u_{I})}-1\bigg{)},\] _where \(\tilde{\mathcal{G}}_{n}^{2}\subset\mathcal{G}_{n}^{2}\subset\mathcal{G}_{n}\) is the set of graphs defined in Definition 2.2 with 2-valent \(\bullet\)-vertices just connecting two different \(\bigcirc\)-vertices and at most one \(\bullet\)-vertex connects the same \(\bigcirc\)-vertices. Equivalently, \(\tilde{\mathcal{G}}_{n}^{2}\) is the set of connected labeled graphs with \(n\) vertices (A001187). 
The modified operator is_ \[\hat{O}^{2}(y_{i}):=\sum_{m}\bigg{(}-\frac{\partial}{\partial x_{i}}\bigg{)}^{ m}\bigg{(}-\frac{dy_{i}}{dx_{i}}\bigg{)}[u_{i}^{m}]\frac{\exp\bigg{(}\hat{ \Phi}_{1}^{\vee}(y_{i};\hbar,u_{i})-x_{i}u_{i}+\frac{1}{2}\hat{\Phi}_{2}^{\vee }(y_{i},y_{i};\hbar,u_{i},u_{i})\bigg{)}}{\hbar u_{i}}.\] _The symmetry factor \(\frac{1}{2}\) inside the exponential comes from the automorphism \(\operatorname{Aut}(\Gamma)\) swapping the two edge of a single \(\bullet\)-vertex connected to the same \(\bigcirc\)-vertex. The permutation of \(k\)\(\bullet\)-vertices connected to the same \(\bigcirc\)-vertex is \(k!\) and also an automorphism in \(\operatorname{Aut}(\Gamma)\), which is collected in the expansion of the exponentials._ The example shows that terms coming from the graph expansion can be nicely collected in an exponential. This is not a surprise since the original derivation came indeed from _not-necessarily-connected correlators_[1]. Rephrasing these computational steps backwards, a even more compact formula can be provided for the not-necessarily-connected correlators \(\overset{\circ}{W}_{n}\) defined by \[\overset{\circ}{W}_{n}(x_{I}):=\sum_{\lambda\vdash I}\prod_{i=1}^{l(\lambda) }W_{|\lambda_{i}|}(x_{\lambda_{i}}), \tag{2.14}\] where \(\lambda\vdash I\) is a set partition of \(I=\{1,...,n\}\), i.e. \(\lambda=\{\lambda_{1},...,\lambda_{l(\lambda)}\}\) of length \(l(\lambda)\) and blocks \(\lambda_{i}\subset I\). Note that \(\overset{\circ}{W}_{n}(x_{I})\) has for all \(n>1\) and at each order in \(\hbar\) in general poles on the diagonal on the variables \(z_{i},z_{j}\). **Corollary 2.5**.: _The not-necessarily-connected correlators \(\overset{\circ}{W}_{n}\) satisfy the functional relation as formal expansion in \(\hbar\)_ \[\overset{\circ}{W}_{n}(x_{I}(z_{I}))=\sum_{m_{1},...,m_{n}}\prod_{i=1}^{n} \bigg{(}-\frac{\partial}{\partial x_{i}(z_{i})}\bigg{)}^{m_{i}}\bigg{(}-\frac {dy_{i}(z_{i})}{dx_{i}(z_{i})}\bigg{)}[u_{i}^{m_{i}}]\frac{1}{\hbar u_{i}} \tag{2.15}\] \[\times\exp\bigg{(}\sum_{k\geq 1}\frac{1}{k!}\sum_{i_{1},...,i_{k}=1 }^{n}\hat{\Phi}_{k}^{\vee}(y_{i_{1}}(z_{i_{1}}),...,y_{i_{k}}(z_{i_{k}});\hbar,u_ {i_{1}},...,u_{i_{k}})\] \[\qquad\qquad-\sum_{i=1}^{n}u_{i}x_{i}(z_{i})\bigg{)}.\] Proof.: Since \(\overset{\circ}{W}_{n}(x_{I})\) includes all not-necessarily-connected correlators, it is a rather classical result that the exponential generates these from the connected correlators (very similar to the discussion in Example 2.4). The automorphisms are split in two groups. The first is generating the symmetry factors \(\frac{1}{k_{i}!}\) permuting a \(\bullet\)-vertex connected with \(k_{i}\) edges to the \(i\)-th \(\bigcirc\)-vertex. The second permutes the same \(\bullet\)-vertices with the same number of edge connecting to the some \(\bigcirc\)-vertices, which is collected in the expansion of the exponential. Since all \(\hat{\Phi}_{k}^{\vee}\) are symmetric, we can reorder the summation in the exponential (via multinomial theorem) as \[\sum_{\begin{subarray}{c}k_{1},...,k_{n}\geq 0\\ k_{1}+...+k_{n}=k>0\end{subarray}}\frac{\hat{\Phi}_{k}^{\vee}(\overbrace{y_{1 }(z_{1}),...,y_{1}(z_{1})}^{k_{1}},...,\overbrace{y_{n}(z_{n}),...,y_{n}(z_{n })}^{k_{n}};\hbar,u_{1},...)}{k_{1}!...k_{n}!}\] \[= \sum_{k\geq 1}\frac{1}{k!}\sum_{i_{1},...,i_{k}=1}^{n}\hat{ \Phi}_{k}^{\vee}(y_{i_{1}}(z_{i_{1}}),...,y_{i_{k}}(z_{i_{k}});\hbar,u_{i_{1}},...,u_{i_{k}}),\] and get the claimed result. 
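For orientation, the graph sets \(\tilde{\mathcal{G}}_{n}^{2}\) of Example 2.4 are counted by A001187, the numbers of connected labelled graphs on \(n\) vertices, and that count is itself governed by an exponential (inclusion–exclusion) relation of exactly the kind used in the proof above. A small sketch (not from the paper) computing the first few values via the standard recursion:

```python
from math import comb

def connected_labelled_graphs(n_max):
    # c(n) = 2^C(n,2) - sum_{k=1}^{n-1} C(n-1, k-1) * c(k) * 2^C(n-k, 2):
    # subtract the graphs whose component containing vertex 1 has only k < n vertices
    c = {1: 1}
    for n in range(2, n_max + 1):
        c[n] = 2**comb(n, 2) - sum(comb(n - 1, k - 1) * c[k] * 2**comb(n - k, 2)
                                   for k in range(1, n))
    return c

print(connected_labelled_graphs(5))   # {1: 1, 2: 1, 3: 4, 4: 38, 5: 728}, i.e. A001187
```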
We observe that the not-necessarily-connected correlator \(\overset{\circ}{W}_{n}\), which are generated by the spectral curve \((x,y)\), are related to the connected correlators \(W_{g,n}^{\vee}\) generated by the spectral curve \((y,x)\) through a differential operator acting on an exponential. However, the form of the differential operator taking as \(m\)-th derivative of the \(m\)-th order in the \(u\) expansion can have a more transparent explanation. Furthermore, we will also give insight on the factor \(\bigg{(}-\frac{dy_{i}(z_{i})}{dx_{i}(z_{i})}\bigg{)}\) through some formal observations. ### Laplace transform of the \(x-y\) symplectic transformation Let us look at the _Laplace transform_ of (2.15). The integration path \(\gamma\) on the Riemann surface depends on \(x(z),y(z)\). We will not be precise about this path but want rather look at the following formal manipulations: **Proposition 2.6**.: _Assume paths \(\gamma_{i}\ni z_{i}\) exist such that \(\overset{\circ}{W}_{n}(x_{1}(z_{1}),...,x_{n}(z_{n}))\) is analytic on \(\gamma_{i}\) and the integrand vanishes fast enough at its boundary values. Assume further that the Laplace transform of \(\overset{\circ}{W}_{n}(x_{I})\) along the paths \(\gamma_{i}\) converges _for each coefficient in \(\hbar\), then the Laplace transform as a formal expansion in \(\hbar\) reads_ \[\int_{\gamma_{1}}dx_{1}(z_{1})e^{-\mu_{1}x_{1}(z_{1})}...\int_{ \gamma_{n}}dx_{n}(z_{n})e^{-\mu_{n}x_{n}(z_{n})}\overset{\circ}{W}_{n}(x_{1}(z_ {1}),...,x_{n}(z_{n}))\\ = \int_{\gamma_{1}}\frac{dy_{1}(z_{1})}{\hbar\mu_{1}}...\int_{ \gamma_{n}}\frac{dy_{n}(z_{n})}{\hbar\mu_{n}}\exp\bigg{(}\sum_{k\geq 1}\frac{1}{ k!}\sum_{i_{1},...,i_{k}=1}^{n}\hat{\Phi}_{k}^{\vee}(y_{i_{1}}(z_{i_{1}}),...,y_{ i_{k}}(z_{i_{k}});\hbar,-\mu_{i_{1}},...,-\mu_{i_{k}})\bigg{)}.\] Proof.: Take Corollary 2.5 and multiply it with \(\prod_{i=1}^{n}e^{-\mu_{i}x_{i}(z_{i})}dx_{i}(z_{i})\) and integrate over \(\gamma_{i}\) in each variable \(z_{i}\). Due to the assumptions, we can integrate by parts in each variable \(x_{i}(z_{i})\) exactly \(m_{i}\) times such that all boundary terms vanish. This yields \[\int_{\gamma_{1}}dx_{1}(z_{1})e^{-\mu_{1}x_{1}(z_{1})}...\int_{ \gamma_{n}}dx_{n}(z_{n})e^{-\mu_{n}x_{n}(z_{n})}\overset{\circ}{W}_{n}(x_{1}( z_{1}),...,x_{n}(z_{n}))\\ = \int_{\gamma_{1}}dx_{1}(z_{1})e^{-\mu_{1}x_{1}(z_{1})}...\int_{ \gamma_{n}}dx_{n}(z_{n})e^{-\mu_{n}x_{n}(z_{n})}\sum_{m_{1},...,m_{n}\geq 0} \prod_{i=1}^{n}\bigg{(}-\mu_{i}\bigg{)}^{m_{i}}\bigg{(}-\frac{dy_{i}(z_{i})}{ dx_{i}(z_{i})}\bigg{)}[u_{i}^{m_{i}}]\\ \times\frac{1}{\hbar u_{i}}\exp\bigg{(}\sum_{k\geq 1}\frac{1}{k!} \sum_{i_{1},...,i_{k}=1}^{n}\hat{\Phi}_{k}^{\vee}(y_{i_{1}}(z_{i_{1}}),...,y_ {i_{k}}(z_{i_{k}});\hbar,u_{i_{1}},...,u_{i_{k}})-\sum_{i=1}^{n}u_{i}x_{i}(z_{ i})\bigg{)}.\] As a formal expression in \(u_{i}\), we can just substitute all \(u_{i}\) with \(-\mu_{i}\) and ignore the \(m_{i}\) summation. The factors \(e^{-\mu_{1}x_{1}(z_{1})}\) are cancelled by \(e^{-\sum_{i=1}^{n}u_{i}x_{i}(z_{i})}\) for \(u_{i}=-\mu_{i}\). Changing the integration variable \(dx_{i}(z_{i})\to dy_{i}(z_{i})\) finishes the proof. The proposition gives the most compact and transparent relation between the correlators \(W_{g,n}\) and \(W_{g,n}^{\vee}\) generated via TR with spectral curve \((x,y)\) and \((y,x)\), respectively. We see explicitly that taking the \(m_{i}\)-th derivative of a formal expansion of the \(m_{i}\)-th coefficient is a very natural operation for it Laplace transform. 
The appearance of the factor \(-\frac{dy_{i}(z_{i})}{dx_{i}(z_{i})}\) yields the final change of integration variables. However, the Laplace transform depends on the different paths \(\gamma_{i}\), and interchanging them \(\gamma_{i}\rightarrow\gamma_{\sigma(i)}\) under some permutation gives first of all a different Laplace transform, since \(\overset{\circ}{W}_{n}\) has poles on the diagonal and residues at the diagonal will not necessarily vanish. **Example 2.7**.: _The leading order in \(\hbar\) of Proposition 2.6 is \(\hbar^{-n}\). The lhs of the equation expands into \(\overset{\circ}{W}_{n}(x_{1}(z_{1}),...,x_{n}(z_{n}))=\prod_{i=1}^{n}\frac{y_{ i}(z_{i})}{\hbar}+\mathcal{O}(\hbar^{-n+1})\), and for the rhs the argument of the exponential expands at leading order to_ \[\sum_{i=1}^{n}\hat{\Phi}_{1}^{\vee}(y_{i}(z_{i});\hbar;-\mu_{i}) +\mathcal{O}(\hbar)\] \[= \frac{1}{\hbar}\sum_{i=1}^{n}\bigg{(}\Phi_{1}^{\vee}\bigg{(}y_{i }(z_{i})-\frac{\hbar\mu_{i}}{2}\bigg{)}-\Phi_{1}^{\vee}\bigg{(}y_{i}(z_{i})+ \frac{\hbar\mu_{i}}{2}\bigg{)}\bigg{)}+\mathcal{O}(\hbar)\] \[= -\sum_{i=1}^{n}\mu_{i}x_{i}(z_{i})+\mathcal{O}(\hbar).\] _Putting everything together, the leading order reads_ \[\int_{\gamma_{1}}dx_{1}(z_{1})e^{-\mu_{1}x_{1}(z_{1})}y_{1}(z_{1})...\int_{\gamma_{n}}dx_{n}(z_{n})e^{-\mu_{n}x_{n}(z_{n})}y_{n}(z_{n})\] \[= \int_{\gamma_{1}}\frac{dy_{1}(z_{1})}{\mu_{1}}e^{-\mu_{1}x_{1}(z_ {1})}...\int_{\gamma_{n}}\frac{dy_{n}(z_{n})}{\mu_{n}}e^{-\mu_{n}x_{n}(z_{n})}\] _which is correct under the assumptions and integration by parts._ **Example 2.8**.: _Consider the same situation as in Example 2.1 and 2.4, this is \(y\) is unramified thus all \(W_{g,n}^{\vee}=0\) for \(2g+n-2>0\). As before, this implies that all \(\hat{\Phi}_{k}^{\vee}=0\) for \(k>2\). We conclude for this case from Proposition 2.6_ \[\int_{\gamma_{1}}dx_{1}(z_{1})e^{-\mu_{1}x_{1}(z_{1})}...\int_{ \gamma_{n}}dx_{n}(z_{n})e^{-\mu_{n}x_{n}(z_{n})}\hat{W}_{n}(x_{1}(z_{1}),..., x_{n}(z_{n}))\] \[= \int_{\gamma_{1}}\frac{dy_{1}(z_{1})}{\hbar\mu_{1}}...\int_{ \gamma_{n}}\frac{dy_{n}(z_{n})}{\hbar\mu_{n}}\exp\bigg{(}\sum_{i=1}^{n}\hat{ \Phi}_{1}^{\vee}(y_{i}(z_{i});\hbar;-\mu_{i})+\frac{1}{2}\sum_{i,j=1}^{n}\hat{ \Phi}_{2}^{\vee}(y_{i}(z_{i}),y_{j}(z_{j});\hbar;-\mu_{i},-\mu_{j})\bigg{)}.\] _Note that on the diagonal \(\hat{\Phi}_{2}^{\vee}(y_{i}(z_{i}),y_{i}(z_{i});\hbar;\mu_{i},\mu_{i})\), we have to take the normalised primitives (2.12)._ The derivation above is also valid for connected correlators \(W_{n}\) with exactly the same steps. We just state the result: **Corollary 2.9**.: _Assume a paths \(\gamma_{i}\) exist such that \(W_{n}(x_{1}(z_{1}),...,x_{n}(z_{n}))\) is analytic on \(\gamma_{i}\) and the integrand vanishes fast enough at its boundary values. Assume the Laplace transform of \(W_{n}(x_{I})\) along the path \(\gamma\) converges for each coefficient in \(\hbar\), then the Laplace transform as a formal expansion in \(\hbar\) reads_ \[\int_{\gamma_{1}}dx_{1}(z_{1})e^{-\mu_{1}x_{1}(z_{1})}...\int_{ \gamma_{n}}dx_{n}(z_{n})e^{-\mu_{n}x_{n}(z_{n})}W_{n}(x_{1}(z_{1}),...,x_{n}(z _{n}))\] \[= \int_{\gamma_{1}}\frac{dy_{1}(z_{1})}{\hbar\mu_{1}}...\int_{ \gamma_{n}}\frac{dy_{n}(z_{n})}{\hbar\mu_{n}}\exp\bigg{(}\sum_{i=1}^{n}\hat{ \Phi}_{1}^{\vee}(y_{i}(z_{i});\hbar,-\mu_{i})\bigg{)}\sum_{\Gamma\in\mathcal{G }_{n}}\frac{\prod_{I\in\mathcal{I}(\Gamma)}\hat{\Phi}_{|I|}^{\vee}(y_{I}(z_{I });\hbar,-\mu_{I})}{|\text{Aut}(\Gamma)|}.\] **Example 2.10**.: _Continue the series of Example 2.1, 2.4 and 2.8, where \(y\) is unramified. 
The Laplace transform of the connected correlator reads_ \[\int_{\gamma_{1}}dx_{1}(z_{1})e^{-\mu_{1}x_{1}(z_{1})}...\int_{ \gamma_{n}}dx_{n}(z_{n})e^{-\mu_{n}x_{n}(z_{n})}W_{n}(x_{1}(z_{1}),...,x_{n}(z _{n}))\] \[= \int_{\gamma_{1}}\frac{dy_{1}(z_{1})}{\hbar\mu_{1}}...\int_{ \gamma_{n}}\frac{dy_{n}(z_{n})}{\hbar\mu_{n}}\exp\bigg{(}\sum_{i=1}^{n}\hat{ \Phi}_{1}^{\vee}(y_{i}(z_{i});\hbar,-\mu_{i})+\frac{1}{2}\hat{\Phi}_{2}^{\vee }(y_{i}(z_{i}),y_{i}(z_{i});\hbar,-\mu_{i},-\mu_{i})\bigg{)}\] \[\times\sum_{\Gamma\in\tilde{\mathcal{G}}_{n}^{2}}\prod_{I\in\mathcal{I} \Gamma(\Gamma)}\bigg{(}e^{\tilde{\Phi}_{2}^{\vee}(y_{I}(z_{I});h,-\mu_{I})}-1 \bigg{)},\] _where \(\tilde{\mathcal{G}}_{n}^{2}\subset\mathcal{G}_{n}\) is the set of graphs with 2-valent \(\bullet\)-vertices connecting two different \(\bigcirc\)-vertices at most with one \(\bullet\)-vertex which is nothing than the set of connected graphs with \(n\) labelled vertices (A001187)._ **Remark 2.11**.: _The definition of the Laplace transform in several variables indicates a possible dependence on the different paths \(\gamma_{i}\). Permuting or geometrically moving two paths \(\gamma_{i},\gamma_{j}\) along each other would pick a non-trivial residue at the diagonal \(z_{i}=z_{j}\). However, it turns out that after computing all path integrals these residues cancel out. This is related to the fact that the lhs of the functional relation of Theorem 2.3 has no pole at the diagonal, even though the rhs is indicating it._ ### Logarithmic \(x,y\) In the previous subsection, logarithmic behaviour of \(x\) or \(y\) was excluded. However, the TR itself can also be applied to logarithmic \(x\) and \(y\). Actually, a lot of very important examples related for instance to Gromov-Witten theory have logarithms. To verify that the functional relation does not hold in general for logarithmic \(x,y\), we take the example of the so-called _Lambert curve_[1, 1]. This curve encodes Hurwitz numbers, which will be explained in more details later. The curve is defined via \[x(z)=-z+\log(z),\qquad y(z)=z. \tag{2.16}\] The spectral curve is of genus zero such that the bilinear differential is \(\omega_{0,2}(z_{1},z_{2})=\frac{dz_{1}\,dz_{2}}{(z_{1}-z_{2})^{2}}\). Starting with these two functions, the formula of TR (2.1) generates all \(W_{g,n}\) and also \(W_{g,n}^{\vee}\). One might check if the functional relation holds for some examples. Since \(y\) is unramified, this curve reflects the examples of the previous subsection, i.e. \(W_{g,n}^{\vee}=0\) for all \(-\chi<2g+n-2\). A short computation of (2.1) gives for instance \[W_{1,1}(x(z))=\frac{1}{x^{\prime}(z)}\operatorname*{Res}_{q\to 1}K_{i}(z,q) \omega_{0,2}(q,\sigma_{i}(q))=\frac{z^{2}(z-4)}{24(z-1)^{5}}.\] On the other hand, the \((g,n)=(1,1)\)-example of the functional relation is \[W_{1,1}(x(z))= -\frac{dy(z)}{dx(z)}W_{1,1}^{\vee}(y(z))+\frac{1}{2}\frac{d}{dx( z)}\bigg{(}\frac{dy(z)}{dx(z)}\hat{W}_{2,0}^{\vee}(y(z),y(z))\bigg{)}-\frac{1}{24} \frac{d^{3}}{dx(z)^{3}}\bigg{(}\frac{1}{\frac{dy(z)}{dx(z)}}\bigg{)}\] \[= -\frac{1}{24}\frac{d^{3}}{dx(z)^{3}}\bigg{(}\frac{1}{\frac{dy(z)} {dx(z)}}\bigg{)}\] \[= \frac{-6z^{2}+4z-1}{24z(z-1)^{5}}\] where \(W^{\vee}_{1,1}(y(z))=0\) and \(\hat{W}^{\vee}_{2,0}(y_{1},y_{2})=W^{\vee}_{2,0}(y_{1},y_{2})-\frac{1}{(y_{1}-y_{ 2})^{2}}=0\) vanish such that the last term contributes only. This is a clear discrepancy to the direct computation. 
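The mismatch above is quick to confirm symbolically. The following sketch (not part of the paper) evaluates the only surviving term \(-\frac{1}{24}\frac{d^{3}}{dx(z)^{3}}\big{(}\frac{dx}{dy}\big{)}\) of the \((g,n)=(1,1)\) relation for the curve (2.16) and compares it with the correlator obtained directly from TR:

```python
import sympy as sp

z = sp.symbols('z')
x, y = -z + sp.log(z), z                       # the Lambert curve (2.16)

ddx = lambda f: sp.diff(f, z) / sp.diff(x, z)  # d/dx = (1/x'(z)) d/dz

# last term of the (g,n)=(1,1) functional relation
naive = -sp.Rational(1, 24) * ddx(ddx(ddx(sp.diff(x, z) / sp.diff(y, z))))
print(sp.simplify(naive))                      # equals (-6 z^2 + 4 z - 1)/(24 z (z-1)^5), as above

W11_TR = z**2*(z - 4) / (24*(z - 1)**5)        # direct TR result quoted above
print(sp.simplify(naive - W11_TR) == 0)        # False: the two rational functions differ
```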
However, if we look at a symplectic transformation of this curve by transforming \[y\to\tilde{y}=y+x, \tag{2.17}\] the validity of the functional relation can be rescued. This means we start with the curve \[x(z)=-z+\log(z),\qquad\tilde{y}(z)=\log(z). \tag{2.18}\] Let us denote the corresponding correlators with \(\tilde{W}_{g,n}\), which are equal to \(W_{g,n}\) of the curve (2.16) due to symplectic transformation. Thus, the formula of TR yields obviously again \[\tilde{W}_{1,1}(x(z))=\frac{z^{2}(z-4)}{24(z-1)^{5}}\] due to the invariance of the kernel under this transformation. After \(x-y\) symplectic transformation, i.e. looking at the correlators \(\tilde{W}^{\vee}_{g,n}\), we see that \(\tilde{W}^{\vee}_{g,n}=0\) for all \(-\chi<2g+n-2\) since \(y\) is unramified again. However, the regularised correlators \(\hat{\tilde{W}}^{\vee}_{0,2}\) on the diagonal does not vanish. It is \[\lim_{z^{\prime}\to z}\frac{1}{\tilde{y}^{\prime}(z)\tilde{y}^{ \prime}(z^{\prime})(z-z^{\prime})^{2}}-\frac{1}{(\tilde{y}(z)-\tilde{y}(z^{ \prime}))^{2}}=-\frac{1}{12}.\] Inserting everything into the functional relation for \((g,n)=(1,1)\), we find \[\tilde{W}_{1,1}(x(z))= -\frac{d\tilde{y}(z)}{dx(z)}\tilde{W}^{\vee}_{1,1}(y(z))+\frac{1 }{2}\frac{d}{dx(z)}\bigg{(}\frac{d\tilde{y}(z)}{dx(z)}\hat{\tilde{W}}^{\vee}_ {2,0}(\tilde{y}(z),\tilde{y}(z))\bigg{)}-\frac{1}{24}\frac{d^{3}}{dx(z)^{3}} \bigg{(}\frac{1}{\frac{d\tilde{y}(z)}{dx(z)}}\bigg{)}\] \[= -\frac{1}{24}\frac{d}{dx(z)}\bigg{(}\frac{d\tilde{y}(z)}{dx(z)} \bigg{)}-\frac{1}{24}\frac{d^{3}}{dx(z)^{3}}\bigg{(}\frac{1}{\frac{d\tilde{y }(z)}{dx(z)}}\bigg{)}\] \[= \frac{z^{2}(z-4)}{24(z-1)^{5}}\] which coincides with the direct computation from TR. This example reveals that considering logarithmic \(x,y\) the \(x-y\) symplectic transformation does not hold in general. But on the other hand, taking the correct symplectic transformation (2.17) before, it actually can hold. One may ask if the Laplace transform of Proposition 2.6 and Corollary (2.9) is still valid if the functional relation is satisfied including logarithms for \(x,y\) as it is for the curve (2.18). **Remark 2.12**.: _For logarithmic behaviour of \(x,y\), the contour \(\gamma\) may cross the branch cut of the logarithm. The observation of Example 2.7 gives us an alternative way making sense of the Laplace transform even for \((g,n)=(0,1)\) including logarithms for \(x,y\). Therefore, we define_ \[\int_{\gamma}dx(z)e^{-\mu x(z)}y(z):=\int_{\gamma}\frac{dy(z)}{\mu}e^{-\mu x(z)},\] _if \(y\) has a logarithm and \(\gamma\) crosses the branch cut. This avoids to split the integration contour at the cut and include boundary terms to regularise the lhs, which is well-defined and could be done in principle._ ## 3. Application to Intersection Numbers on \(\overline{\mathcal{M}}_{g,n}\) This section recalls some examples about the connection between TR and intersection theory on \(\overline{\mathcal{M}}_{g,n}\) and applies the derived formulas. We refer to [1, 2] for much more information. Let \(\overline{\mathcal{M}}_{g,n}\) be the compactified moduli space of complex curves of genus \(g\) with \(n\) labelled points. It is compactified in the sense Deligne and Mumford [10]. \(\overline{\mathcal{M}}_{g,n}\) is a complex orbifold of dimension \(d_{g,n}=3g-3+n\). Its points \((C,p_{1},...,p_{n})\in\overline{\mathcal{M}}_{g,n}\) are isomorphism classes of a complex curve \(C\) with \(n\) labelled point denoted by \(p_{1},...,p_{n}\). 
Let \(L_{i}\) be the line bundle over \(\overline{\mathcal{M}}_{g,n}\) whose fiber is the contangent space \(TC^{\vee}(p_{i})\) of \(C\) at \(p_{i}\). The first Chern class of this line bundle is called the \(\psi\)-class \[\psi_{i}=c_{1}(L_{i}).\] The \(\psi\)-classes are the easiest examples for tautological classes in \(\overline{\mathcal{M}}_{g,n}\). One can build _intersection numbers_ by wedging several \(\psi\)-classes \[\langle\psi_{1}^{d_{1}}...\psi_{n}^{d_{n}}\rangle_{g,n}=\int_{\overline{ \mathcal{M}}_{g,n}}\psi_{1}^{d_{1}}...\psi_{n}^{d_{n}} \tag{3.1}\] with \(d_{g,n}=\sum_{i=1}^{n}d_{i}\), otherwise it is defined to vanish. It was conjectured by Witten [14] and shortly later proved by Kontsevich [15] that the generating function (3.1) satisfies the KdV equations, see for instance [16, 1] for more details. Kontsevich proved that a hermitian matrix model with external field generates the intersection numbers (3.1) if one identify the parameters of their generating function with certain coefficients of the matrix model. After developing TR, this connection constructed by Kontsevich between matrix models and intersection theory gave birth to the connection between TR and intersection theory in general. The associated spectral curve \(\psi\)-class intersection numbers is known under the name of Airy curve or Witten-Kontsevich curve, defined by \((\mathbb{P}^{1},,x=\frac{z^{2}}{2},y=z,\frac{dz_{1}\,dz_{2}}{(z_{1}-z_{2})^{2}})\)[1]. We will also discuss more general intersection numbers built from \(\psi\)- and Hodge-classes. Let \(\pi:\overline{\mathcal{M}}_{g,n+1}\rightarrow\overline{\mathcal{M}}_{g,n}\) be the forgetful morphism, forgetting the last labelled point, and \(\omega_{\pi}\) the relative dualising sheaf. Then, the Hodge-class is defined by \[\Lambda(\alpha)=1+\sum_{k=1}^{g}(-1)^{k}\alpha^{-k}c_{k}(\mathbb{E}), \tag{3.2}\] where \(c_{k}\) is the \(k\)-th Chern class and \(\mathbb{E}\) the Hodge bundle given by pushforward of the relative dualising sheaf, i.e. \(\mathbb{E}=\pi_{*}(\omega_{\pi})\). Intersection numbers of a mixture of \(\psi\)- and Hodge-classes can be considered and made considerable interest in the past. Especially in the context of Hurwitz numbers, a relation between intersection numbers of \(\psi\)- and Hodge-classes and the counting problem of ramified coverings over \(\mathbb{P}^{1}\) with a certain ramification profil at infinity. This relation is the celebrated ELSV formula [1]. More precisely, let \(h_{g;k\mu_{1},...,\mu_{n}}\) be the number of the equivalence classes of topologically nonequivalent ramified coverings \(f:C\to\mathbb{P}^{1}\), where \(C\) is compact, connected complex curve of genus \(g\) and \(f\) has ramification profil \((\mu_{1},....,\mu_{n})\) over infinity and simple ramification else. The ELSV formula relates the Hurwitz number \(h_{g;\mu_{1},...,\mu_{n}}\) to the following linear Hodge integral \[h_{g;\mu_{1},...,\mu_{n}}=\frac{(2g+\mu+n-2)!}{|\mathrm{Aut}(\mu_{1},...,\mu_ {n})|}\prod_{i=1}^{n}\frac{\mu_{i}^{\mu_{i}}}{\mu_{i}!}\int_{\overline{ \mathcal{M}}_{g,n}}\frac{\Lambda(1)}{\prod_{i=1}^{n}(1-\mu_{i}\psi_{i})}, \tag{3.3}\] where \(|\mathrm{Aut}(\mu_{1},...,\mu_{n})|\) is the number of permutations permuting equal \(\mu_{i}\)'s and \(\mu=\mu_{1}+...+\mu_{n}\). Bouchard and Marino conjectured [1] that these linear Hodge integrals appearing in the ELSV formula (and therefore also Hurwitz numbers) can actually be computed by TR with the so-called _Lambert curve_. 
This conjecture was proved in [1], where the spectral curve was given by \((\mathbb{P}^{1}\setminus\mathbb{R}_{-},x=-z+\log(z),y=\log(z),\frac{dz_{1}\, dz_{2}}{(z_{1}-z_{2})^{2}})\). Note that we have shifted \(y\) in the spectral curve due to the observation in Sec. 2.3, which does not change the result of [1]. Applying the Laplace transform of Corollary 2.9 to the Airy and/or Lambert curve gives easy formulas to compute \(\psi\)- and Hodge-class intersection numbers. ### Airy curve For the Airy curve, the explicit relation between the correlators \(\omega_{g,n}\) and the intersection numbers is given by: **Theorem 3.1** ([1]).: _Let the Airy spectral curve be \((\mathbb{P}^{1},x(z)=z^{2},y(z)=z,\frac{dz_{1}\,dz_{2}}{(z_{1}-z_{2})^{2}})\), then the correlator \(\omega_{g,n}\) computed by TR generates the \(\psi\)-class integral_ \[\omega_{g,n}(z_{1},...,z_{n})=\sum_{k_{1}+...+k_{n}=d_{g,n}}\langle\psi_{1}^{ k_{1}}...\psi_{n}^{k_{n}}\rangle_{g,n}\prod_{i=1}^{n}\frac{(2k_{i}+1)!!}{z_{i}^{2 k_{i}+2}}dz_{i}. \tag{3.4}\] The first step is to extract intersection numbers from \(\omega_{g,n}\) through a Laplace transform: **Lemma 3.2**.: _Let \(\mu_{i}\in\mathbb{N}\) and \(2g+n-2>0\). Then, for \(\omega_{g,n}\) generated by the Airy curve of Theorem 3.1, we have_ \[\frac{1}{\sqrt{2\pi}^{n}}\int_{i\varepsilon-\infty}^{i\varepsilon+\infty}e^{- \mu_{1}x_{1}(z_{1})...-\mu_{n}x_{n}(z_{n})}\omega_{g,n}(z_{1},...,z_{n})= \bigg{\langle}\prod_{i=1}^{n}\frac{\sqrt{\mu_{i}}}{1-\mu_{i}\psi_{i}}\bigg{\rangle} _{g,n}. \tag{3.5}\] _For \(2g+n-2\leq 0\), we get \(\bigg{\langle}\frac{1}{(1-\mu\psi)}\bigg{\rangle}_{0,1}=\frac{1}{\mu^{2}}\) and \(\bigg{\langle}\frac{1}{(1-\mu_{1}\psi_{1})(1-\mu_{2}\psi_{2})}\bigg{\rangle} _{0,2}=\frac{1}{\mu_{1}+\mu_{2}}\)._ Proof.: For \(2g+n-2>0\) and each \(\mu_{i}\), we apply the integral to the result of Theorem 3.1. The integral is computed by a version of the well-known Gaussian integral \[\frac{1}{\sqrt{2\pi}}\int_{i\varepsilon-\infty}^{i\varepsilon+\infty}dz\frac{ e^{-\mu z^{2}/2}}{z^{2k+2}}=\frac{\mu^{k+1/2}}{(2k+1)!!}. \tag{3.6}\] The sum over \(k_{1}+...+k_{n}=d_{g,n}\) can be extended to a sum over all \(k_{i}\geq 0\), since all the intersection numbers are defined to vanish unless the condition \(k_{1}+...+k_{n}=d_{g,n}\) is satisfied. For each \(k_{i}\) summation, we have a geometric series \(\sum_{k_{i}=0}^{\infty}\mu_{i}^{k_{i}}\psi_{i}^{k_{i}}=\frac{1}{1-\mu_{i}\psi_{i}}\). For \(2g+n-2\leq 0\), the computation is also straightforward. The Bergman kernel is first expanded in a geometric series \(\frac{1}{(z_{1}-z_{2})^{2}}=\frac{1}{z_{1}^{2}}\sum_{n}n(\frac{z_{2}}{z_{1}}) ^{n-1}\), then the Gaussian integral is applied together with \(\frac{1}{(2k-1)!!(-2k-1)!!}=(-1)^{k}\), and the result is resummed as a geometric series. Now, we can apply the formulas developed in Sec. 2.2 to the Airy curve. First of all note that the Airy curve \(x(z)=z^{2},y(z)=z\) is not ramified through \(y\). We can easily stick to the examples discussed in Sec.
2.2 and just have to compute the following primitives: \[\Phi_{1}^{\vee}(y(z)) =\frac{1}{\hbar}\int x(z)dy(z)=\frac{z^{3}}{6\hbar} \tag{3.7}\] \[\hat{\Phi}_{1}^{\vee}(y(z);\hbar,u) =\Phi_{1}^{\vee}\bigg{(}y(z)+\frac{\hbar u}{2}\bigg{)}-\Phi_{1}^{ \vee}\bigg{(}y(z)-\frac{\hbar u}{2}\bigg{)}=u\frac{z^{2}}{2}+\frac{\hbar^{2}u^ {3}}{24}\] (3.8) \[\Phi_{2}^{\vee}(y_{1}(z_{1}),y_{2}(z_{2})) =\log(z_{1}-z_{2})\] (3.9) \[e^{\hat{\Phi}_{2}^{\vee}(y_{1}(z_{1}),y_{2}(z_{2});\hbar,u_{1}, u_{2})}-1 =\frac{\hbar^{2}u_{1}u_{2}}{(z_{1}-z_{2})^{2}-\frac{\hbar^{2}}{4} (u_{1}+u_{2})^{2}}. \tag{3.10}\] Taking Example 2.10 and Lemma 3.2 into account, we conclude: **Corollary 3.3**.: _The \(\psi\)-class intersection numbers can be computed as a formal expansion in \(\hbar\) via_ \[\bigg{\langle}\prod_{i=1}^{n}\frac{\sqrt{\mu_{i}}}{1-\mu_{i}\psi_{i}}\bigg{\rangle} _{g,n}\] \[= [\hbar^{2g+n-2}]\prod_{i=1}^{n}\frac{e^{-\frac{\hbar^{2}\mu_{i}^{3}}{24} }}{\sqrt{2\pi}}\int_{i\varepsilon-\infty}^{i\varepsilon+\infty}dz_{i}\frac{e^{- \mu_{i}\frac{z_{i}^{2}}{2}}}{\hbar\mu_{i}}\sum_{\Gamma\in\tilde{\mathcal{G}}_{n }^{2}}\prod_{\begin{subarray}{c}I\in\mathcal{I}(\Gamma)\\ \{i,j\}=I\end{subarray}}\frac{\hbar^{2}\mu_{i}\mu_{j}}{(z_{i}-z_{j})^{2}-\frac {\hbar^{2}}{4}(\mu_{i}+\mu_{j})^{2}},\] _where \(\tilde{\mathcal{G}}_{n}^{2}\subset\mathcal{G}_{n}\) is the subset of graphs with 2-valent \(\bullet\)-vertices connecting two different \(\bigcirc\)-vertices at most with one \(\bullet\)-vertex which is nothing than the set of connected graphs with \(n\) labelled vertices (A001187)._ For not-necessarily-connected correlators, we can conclude from Example 2.8 and Lemma 3.2: **Corollary 3.4**.: _The sum over all partitions of \(\psi\)-class intersection numbers satisfies as a formal expansion in \(\hbar\)_ \[\sum_{\lambda\vdash I}\prod_{j=1}^{l(\lambda)}\sum_{g_{j}=0}\hbar ^{2g_{j}+|\lambda_{j}|-2}\bigg{\langle}\prod_{i=1}^{|\lambda_{j}|}\frac{\sqrt{ \mu_{\lambda_{j}^{i}}}}{(1-\mu_{\lambda_{j}^{i}}\psi_{\lambda_{j}^{i}})} \bigg{\rangle}_{g_{j},|\lambda_{j}|}\] \[= \prod_{i=1}^{n}\frac{e^{-\frac{\hbar^{2}\mu_{i}^{3}}{24}}}{\sqrt {2\pi}}\int_{i\varepsilon-\infty}^{i\varepsilon+\infty}dz_{i}\frac{e^{-\mu_{i }\frac{z_{i}^{2}}{2}}}{\hbar\mu_{i}}\prod_{i<j}\frac{\hbar^{2}\mu_{i}\mu_{j}}{ (z_{i}-z_{j})^{2}-\frac{\hbar^{2}}{4}(\mu_{i}+\mu_{j})^{2}},\] _where \(\lambda\vdash I\) is a set partition of \(I\) with \(l(\lambda)\) blocks \(\lambda_{j}\subset I\), i.e. \(\lambda=(\lambda_{1},...,\lambda_{l(\lambda)})\). Each block is written as \(\lambda_{j}=(\lambda_{j}^{1},...,\lambda_{j}^{|\lambda_{j}|})\) of cardinality \(|\lambda_{j}|\) and elements \(\lambda_{j}^{i}\in I\)._ The result of Corollary 3.3 and 3.4 are not claimed to be new formulas, because these are equivalent to the one given in [ABDB\({}^{+}\)22, SS7] after applying the Laplace transform as explained in Sec. 2.2. So, performing the Gaussian integrals, one usually manipulate the integrand through derivatives to get formulas like (3.6), we just arrive at the formula directly induced by Theorem 3.1 together with Theorem 2.3 as stated in [ABDB\({}^{+}\)22, SS7]. However, we are still getting a new perspective on the computation of \(\psi\)-class intersection numbers in terms of Gaussian integrals. We want to emphasise also that the order of integration in Corollary 3.3 and 3.4 does not matter. This is related to the fact that the correlators \(\omega_{g,n}\) have just poles at the ramification points and not at the diagonal for \(2g+n-2>0\). 
The deep algebraic structure which reveals this property is not yet understood, but it is present in the background of all these formulas. ### Lambert curve In this subsection, we apply the formulas of Sec. 2.2 to the Lambert curve. Note that these formulas are just proved for meromorphic \(x,y\). The discussion of Sec. 2.3 has however shown that for the Lambert curve a specific choice of \(y\) can actually work. Through this subsection, we have assigned an asterisk to those corollaries which assume that for the Lambert curve of the form of (2.18) satisfies the \(x-y\) symplectic transformation. A lot of checks with computer algebra have confirmed this assumption. The precise relation between the correlators \(\omega_{g,n}\) and the linear Hodge integrals (or Hurwitz numbers) is given by: **Theorem 3.5** ([1]).: _Let the Lambert spectral curve be \((\mathbb{P}^{1}\setminus\mathbb{R}_{-},x(z)=-z+\log(z),y(z)=\log(z),\frac{dz_{1}\, dz_{2}}{(z_{1}-z_{2})^{2}})\), then the correlator \(\omega_{g,n}\) computed by TR generates the linear Hodge integrals_ \[\omega_{g,n}(z_{1},...,z_{n})=\sum_{k_{1},...,k_{n}\geq 0}\prod_{i=1}^{n}\frac{k_ {i}^{k_{i}+1}}{k_{i}!}\bigg{\langle}\frac{\Lambda(1)}{\prod_{i=1}^{n}(1-k_{i} \psi_{i})}\bigg{\rangle}_{g,n}e^{k_{i}x_{i}(z_{i})}dx_{i}(z_{i}) \tag{3.11}\] Computing the Laplace transform of (3.11), where \(\gamma\) is a contour encircling the origin, will separate the summands in (3.11): **Lemma 3.6**.: _Let \(\mu_{i}\in\mathbb{N}\) and \(2g+n-2>0\), we have for \(\omega_{g,n}\) generated by the Lambert curve of Theorem 3.5_ \[\operatorname{Res}_{z_{i}=0}e^{-\mu_{1}x_{1}(z_{1})...-\mu_{n}x_{n}(z_{n})} \omega_{g,n}(z_{1},...,z_{n})=\prod_{i=1}^{n}\frac{\mu_{i}^{\mu_{i}+1}}{\mu_{ i}!}\bigg{\langle}\frac{\Lambda(1)}{\prod_{i=1}^{n}(1-\mu_{i}\psi_{i})} \bigg{\rangle}_{g,n}. \tag{3.12}\] _For \(2g+n-2\leq 0\), we take that the lhs of (3.12) to be defined by this equation with \(\bigg{\langle}\frac{\Lambda(1)}{(1-\mu\psi)}\bigg{\rangle}_{0,1}=\frac{1}{\mu ^{2}}\) and \(\bigg{\langle}\frac{\Lambda(1)}{(1-\mu_{1}\psi_{1})(1-\mu_{2}\psi_{2})}\bigg{ \rangle}_{0,2}=\frac{1}{\mu_{1}+\mu_{2}}\)._ Proof.: For \(2g+n-2>0\) and each \(\mu_{i}\), we have the same computation \[\operatorname{Res}_{z=0}dx(z)e^{(k-\mu)x(z)}=\operatorname{Res}_{z=0}\frac{(1 -z)dz}{z}\frac{e^{(\mu-k)z}}{z^{\mu-k}}=\delta_{\mu,k}.\] For \(2g+n-2<0\), the residue at \(0\) is not well-defined. Thus, we define the Laplace transform to match the classical known result. The case \((g,n)=(0,1)\) is related to Remark 2.12. Now, we can apply the formulas developed in Sec. 2.2 to the Lambert curve. First of all note that the Lambert curve parametrised by \(x(z)=-z+\log(z),y(z)=\log(z)\), i.e. \[x=-e^{y}+y, \tag{3.13}\] is not ramified through \(y\). We can easily stick to the examples discussed in Sec. 
2.2 and just have to compute the following primitives: \[\Phi_{1}^{\vee}(y(z)) =\frac{1}{\hbar}\int x(z)dy(z)=\frac{-z+\frac{\log(z)^{2}}{2}}{ \hbar}=\frac{-e^{y(z)}+\frac{y(z)^{2}}{2}}{\hbar} \tag{3.14}\] \[\hat{\Phi}_{1}^{\vee}(y(z);\hbar,u) =\Phi_{1}^{\vee}\bigg{(}y(z)+\frac{\hbar u}{2}\bigg{)}-\Phi_{1}^ {\vee}\bigg{(}y(z)-\frac{\hbar u}{2}\bigg{)}=\] (3.15) \[=-zuS(\hbar u)+\log(z)u\] \[\Phi_{2}^{\vee}(y_{1}(z_{1}),y_{2}(z_{2})) =\log(z_{1}-z_{2}) \tag{3.16}\] \[\hat{\Phi}_{2}^{\vee}(y_{1}(z_{1}),y_{2}(z_{2});\hbar,u_{1},u_{2})=\log\left( \frac{(z_{1}e^{hu_{1}/2}-z_{2}e^{hu_{2}/2})(z_{1}e^{-hu_{1}/2}-z_{2}e^{-hu_{2}/2}) }{(z_{1}e^{-hu_{1}/2}-z_{2}e^{hu_{2}/2})(z_{1}e^{hu_{1}/2}-z_{2}e^{-hu_{2}/2})}\right) \tag{3.17}\] \[\hat{\Phi}_{2}^{\vee}(y(z),y(z);\hbar,u,u)=-2\log(S(u\hbar)), \tag{3.18}\] where \(S(u)=\frac{e^{u/2}-e^{-u/2}}{u}\) and \(\Phi_{2}^{\vee}\) needed to be regularised due to (2.12) on the diagonal. **Remark 3.7**.: _Note that we have chosen \(y(z)=\log(z)\) instead of \(y(z)=z\) as proposed in [1], because Sec. 2.3 has shown that for \(y(z)=z\) the functional relation does not hold. Actually, transforming \(y(z)=z\mapsto y(z)+x(z)=\log(z)\), is the first symplectic transformation of a spectral curve mentioned in the beginning of Sec. 2.2. Thus, both curves have the same \(\omega_{g,n}\)'s, except for \(\omega_{0,1}\), and therefore generates the linear Hodge integrals as stated in Theorem 3.5. The important factor \(S(u\hbar)\) appearing for Hurwitz numbers and linear Hodge integrals (see for instance [10]) is directly produced in (3.15) and (3.18) by our chosen spectral curve (3.13) and the formula of \(x-y\) symplectic transformation._ Taking Example 2.10 and Lemma 3.6 into account, we conclude: **Corollary 3.8**.: _The linear Hodge integrals are computed as a formal expansion in \(\hbar\) via_ \[\prod_{i=1}^{n}\frac{k_{i}^{k_{i}+1}}{k_{i}!}\bigg{\langle}\frac{ \Lambda(1)}{\prod_{i=1}^{n}(1-k_{i}\psi_{i})}\bigg{\rangle}_{g,n}\] \[= \operatorname{Res}_{z_{i}=0}[\hbar^{2g+n-2}]\prod_{i=1}^{n}\frac{ dz_{i}e^{k_{i}z_{i}S(\hbar k_{i})}}{z_{i}^{1+k_{i}}\hbar k_{i}S(\hbar k_{i})}\] \[\times\sum_{\Gamma\in\tilde{\mathcal{G}}_{n}^{2}}\prod_{ \begin{subarray}{c}I\in\mathcal{I}(\Gamma)\\ \{i,j\}=I\end{subarray}}\bigg{(}\frac{(z_{i}e^{\hbar k_{i}/2}-z_{j}e^{\hbar k_{ j}/2})(z_{i}e^{-\hbar k_{i}/2}-z_{j}e^{-\hbar k_{j}/2})}{(z_{i}e^{-\hbar k_{i}/2}-z_{j}e^{ \hbar k_{j}/2})(z_{i}e^{\hbar k_{i}/2}-z_{j}e^{-\hbar k_{j}/2})}-1\bigg{)},\] _where \(\tilde{\mathcal{G}}_{n}^{2}\subset\mathcal{G}_{n}\) is the subset of graphs with 2-valent \(\bullet\)-vertices connecting two different \(\bigcirc\)-vertices at most with one \(\bullet\)-vertex which is nothing than the set of connected graphs with \(n\) labelled vertices (A001187)._ Explicit computations of Corollary 3.8 with computer algebra have matched all the Hurwitz numbers listed in [1, Tab. 2]. 
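The computation of \(\hat{\Phi}_{1}^{\vee}\) in (3.15) is easy to confirm with computer algebra. The following short script is our own addition (not part of the original text) and only checks symbolically that the \(S(\hbar u)\) factor appears as claimed:

```python
# Symbolic check of (3.15): with x = -e^y + y, Phi_1(y) = (1/hbar) * int x dy
# = (-e^y + y^2/2)/hbar, the difference Phi_1(y + hbar*u/2) - Phi_1(y - hbar*u/2)
# should equal -z*u*S(hbar*u) + u*log(z), where z = e^y and S(v) = (e^{v/2}-e^{-v/2})/v.
import sympy as sp

y, hbar, u = sp.symbols('y hbar u', positive=True)
z = sp.exp(y)

Phi1 = (-sp.exp(y) + y**2 / 2) / hbar
Phi1_hat = Phi1.subs(y, y + hbar * u / 2) - Phi1.subs(y, y - hbar * u / 2)

S = (sp.exp(hbar * u / 2) - sp.exp(-hbar * u / 2)) / (hbar * u)
claimed = -z * u * S + sp.log(z) * u

print(sp.simplify(sp.expand(Phi1_hat - claimed)))   # expected output: 0
```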
For not-necessarily-connected correlators, we can conclude from Example 2.8 and Lemma 3.6: **Corollary 3.9**.: _The sum over all partitions of linear Hodge integrals satisfies as a formal expansion in \(\hbar\)_ \[\prod_{i=1}^{n}\frac{k_{i}^{k_{i}+1}}{k_{i}!}\sum_{\lambda\vdash I}\prod_{j=1} ^{l(\lambda)}\sum_{g_{j}=0}\hbar^{2g_{j}+|\lambda_{j}|-2}\bigg{\langle}\frac{ \Lambda(1)}{\prod_{i=1}^{|\lambda_{j}|}(1-k_{\lambda_{j}^{i}}\psi_{\lambda_{j} ^{i}})}\bigg{\rangle}_{g_{j},|\lambda_{j}|}\] \[= \prod_{i=1}^{n}\operatorname{Res}_{z_{i}=0}\frac{dz_{i}e^{k_{i}z_{i} S(\hbar k_{i})}}{z_{i}^{1+k_{i}}\hbar k_{i}S(\hbar k_{i})}\prod_{i<j}\frac{(z_{i}e^{ \hbar k_{i}/2}-z_{j}e^{\hbar k_{j}/2})(z_{i}e^{-\hbar k_{i}/2}-z_{j}e^{-\hbar k_{ j}/2})}{(z_{i}e^{-\hbar k_{i}/2}-z_{j}e^{\hbar k_{j}/2})(z_{i}e^{\hbar k_{i}/2}-z_{j}e^{- \hbar k_{j}/2})},\] _where \(\lambda\vdash I\) is a set partition of \(I\) with \(l(\lambda)\) blocks \(\lambda_{j}\subset I\), i.e. \(\lambda=(\lambda_{1},...,\lambda_{l(\lambda)})\). Each block is written as \(\lambda_{j}=(\lambda_{j}^{1},...,\lambda_{j}^{|\lambda_{j}|})\) of cardinality \(|\lambda_{j}|\) and elements \(\lambda_{j}^{i}\in I\)._
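Finally, the \((g,n)=(1,1)\) evaluation of the functional relation for the curve (2.18) carried out in Sec. 2.3 can also be verified symbolically. The sketch below is our addition (not part of the original text); it only re-derives the rational function \(\tilde{W}_{1,1}\):

```python
# Check (ours) of the (1,1) computation in Sec. 2.3: with x(z) = -z + log(z) and
# y~(z) = log(z), the combination
#   -1/24 * d/dx (dy~/dx)  -  1/24 * d^3/dx^3 (1/(dy~/dx))
# should reproduce W~_{1,1}(x(z)) = z^2 (z - 4) / (24 (z - 1)^5).
import sympy as sp

z = sp.symbols('z', positive=True)
x = -z + sp.log(z)
ty = sp.log(z)
dxdz = sp.diff(x, z)

def d_dx(expr):
    # derivative with respect to x(z) via the chain rule: d/dx = (dz/dx) d/dz
    return sp.diff(expr, z) / dxdz

dty_dx = d_dx(ty)                                   # equals 1/(1 - z)
lhs = -sp.Rational(1, 24) * d_dx(dty_dx) \
      - sp.Rational(1, 24) * d_dx(d_dx(d_dx(1 / dty_dx)))
rhs = z**2 * (z - 4) / (24 * (z - 1)**5)
print(sp.simplify(lhs - rhs))                       # expected output: 0
```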
2310.15917
Density Functional Theory Study of Light Metal (Li/Na/Ca) Functionalized Borophosphene for Reversible Hydrogen Storage
Borophosphene is investigated for hydrogen storage by density functional theory calculations through Li, Na and Ca decoration. Decoration enhances the binding energy from -0.047 eV/H2 to -0.20 -- -0.42 eV/H2. PDOS and Bader charge analysis elucidate the role of adatom decoration in charge transfer and better binding. Up to 10, 12 and 20 H2 molecules can be adsorbed over a single Li, Na and Ca adatom, respectively, in a supercell of 32 atoms. Desorption temperature is calculated from the binding energies. A complete discharge of the stored molecules from decorated borophosphene can be realized in temperature range of 125 to 531 K. Further, decoration at multiple sites of the substrate is performed to evaluate the theoretical gravimetric density. With Li, Na, and Ca overloading, gravimetric densities of 6.22%, 5.34%, and 6.08% are obtained. NEB results show that inter-site energy barriers of the adatoms are larger than their thermal energy by an order.
Sandip Haldar
2023-10-24T15:19:30Z
http://arxiv.org/abs/2310.15917v1
Density Functional Theory Study of Light Metal (Li/Na/Ca) Functionalized Borophosphene for Reversible Hydrogen Storage ###### Abstract Borophosphene is investigated for hydrogen storage by density functional theory calculations through Li, Na and Ca decoration. Decoration enhances the binding energy from -0.047 eV/H\({}_{2}\) to between -0.20 and -0.42 eV/H\({}_{2}\). PDOS and Bader charge analysis elucidate the role of adatom decoration in charge transfer and better binding. Up to 10, 12 and 20 H\({}_{2}\) molecules can be adsorbed over a single Li, Na and Ca adatom, respectively, in a supercell of 32 atoms. The desorption temperature is calculated from the binding energies. A complete discharge of the stored molecules from decorated borophosphene can be realized in the temperature range of \(125-531\) K. Further, decoration at multiple sites of the substrate is performed to evaluate the theoretical gravimetric density. With Li, Na, and Ca overloading, gravimetric densities of 6.22%, 5.34%, and 6.08% are obtained. NEB results show that inter-site energy barriers of the adatoms are larger than their thermal energy by an order of magnitude. keywords: Borophosphene, Hydrogen storage, Light metal decoration, Diffusion barrier, Hydrogen desorption, Density functional theory + Footnote †: journal: Journal of Chemical Physics ## 1 Introduction Hydrogen energy is one of the most available clean energy solutions due to its pollution-free nature and high energy density per unit weight, and it brings the potential to alleviate the carbon footprint of fossil fuels [1]. Conventional approaches, e.g. pressurized tanks and liquid hydrogen fuel, come with safety concerns and higher cost, along with inadequate energy density [2; 3]. Molecular hydrogen storage over nanomaterials is one of the sought-after solutions for hydrogen energy. The US Department of Energy (DOE) target is set at 5.5 - 9.5 % gravimetric density by 2025, with a binding energy between physisorption and chemisorption [4; 5; 6]. Nazir et al. [7] reviewed the challenges and state of the art in \(H_{2}\) storage and the outlook toward \(H_{2}\)-based green energy. 2D materials have attracted wide attention in design and synthesis from several novel elements since the arrival of graphene. Due to their high specific surface area, they are considered beneficial for several surface-dominated applications, including hydrogen storage. A wide variety of monoelemental 2D materials have been thoroughly scrutinized for their application in molecular hydrogen storage, for example, carbon allotropes [8; 9; 10; 11; 3; 12], allotropes of phosphorus [13; 14; 15; 16; 17; 18], allotropes of boron [19; 20; 21; 22; 23], silicene and germanene [2; 24; 25] etc. Apart from monoelemental 2D materials, different dielemental 2D materials have been considered for hydrogen storage, for example, Boron Nitride [26; 27; 28], Boron sulfide [29], Zinc oxide [30], magnesium hydride [31], Beryllium polynitrides [32], Boron/Carbon nitride [5; 33] etc. Generally, pristine materials exhibit poor interaction with the \(H_{2}\) molecules, resulting in weak binding energy that is unsuitable for reversible hydrogen storage [33; 4]. For example, pristine graphene, phosphorene, and borophene show binding energies of 0.04 - 0.10 eV/\(H_{2}\) in storing molecular \(H_{2}\) due to weak interaction [12; 15; 34; 35; 21]. One of the popularly adopted strategies to enhance the interaction, and thereby the hydrogen storage performance, has been adatom decoration, defect engineering, or both.
For decoration (also referred to as surface functionalization), alkali metals or transition metals are widely chosen to enhance the interaction through the contribution of the adatoms [20; 34; 36; 37; 38; 39]. Decorating metals with low electronegativities become strongly polarized after being adsorbed over the substrate and, as a result, attract \(H_{2}\) molecules [3]. Functionalization of the 2D materials (e.g. decoration) enhances the substrate-\(H_{2}\) interaction by charge transfer and has been adopted as a promising approach for improving the binding with the \(H_{2}\) molecules. This, as a result, improves the storage capacity or gravimetric density of \(H_{2}\) storage onto the 2D substrates. By decoration, the binding energy of \(H_{2}\) molecules is improved multi-fold, which enhances the gravimetric density. The alkali metals offer the binding energy for physisorption through charge polarization and minimize cluster formation. On the other hand, transition metals offer stronger binding energy via Kubas-type interaction, where the metal \(d\) and H\({}_{2}\) \(s\) orbitals participate in hybridization; however, they tend to cluster [40; 29; 41]. Further, while transition metals can adsorb more \(H_{2}\) molecules, they compromise the gravimetric density due to their higher atomic mass. The promise and high performance of the 2D materials have led to further efforts in the design and search of novel materials, as well as computational screening of their performance while their fabrication is awaited. Since its prediction, borophosphene has recently garnered wide interest for different applications, such as energy applications. An anisotropic Dirac material with a graphene-like hexagonal structure consisting of phosphorus (P) and boron (B) atoms, referred to as borophosphene, was proposed by Zhang et al. [42] and its stability was established. With a structure in the Pmmm plane group and a B-P-P-B sequence, the unit cell has lattice constants of 3.22 Å in the zigzag direction and 5.57 Å in the armchair direction [42; 43]. Experimental realization is foreseen from the favorable 4.82 eV/atom cohesive energy and 12 meV/\(\AA^{2}\) exfoliation energy [42; 44]. Borophosphene has been evaluated for lithium- and non-lithium-ion batteries, as well as lithium-sulfur and sodium-sulfur batteries [44; 45; 46; 47; 48; 49]. This work reports the hydrogen storage performance of borophosphene as evaluated using Density Functional Theory (DFT). Light metal (Li, Na and Ca) decoration has been investigated to enhance the hydrogen storage performance of borophosphene by calculating the binding energies. The results have been compared with the monoelemental counterparts, i.e. borophene and phosphorene. Finally, the theoretical gravimetric density has been calculated from the results. Further, the competition between decoration and clustering of the adatoms over the borophosphene was checked from the diffusion energy barrier of the adatoms between neighboring sites. ## 2 Material and Computational detail The substrate is taken as a supercell of 32 atoms from 4 \(\times\) 2 unit cells and was subjected to energy minimization to determine the ground state structure. \(H_{2}\) was added to the substrate and the binding energy was obtained from the energy minimization of the system. For functionalization, Li, Na and Ca decorations over the relaxed pristine substrate were first stabilized, and then computations for hydrogen storage over the decorated substrate were performed. Quantum Espresso, available under GNU license, was used for the DFT calculations [50, 51].
PAW pseudopotentials were used to model the elements along with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [52, 53]. The pseudopotentials treat B: \(2s^{2}2p^{1}\), P: \(3s^{2}3p^{3}\), Li: \(1s^{2}2s^{1}\), Na: \(2s^{2}2p^{6}3s^{1}\), Ca: \(3s^{2}3p^{6}4s^{2}\), H: \(1s^{1}\) as valence electrons. The van der Waals forces were corrected through the DFT-D2 framework [54]. The wave functions are truncated at a cut-off of 60 Ry, and 480 Ry is used for the charge density cut-off. A threshold of \(10^{-5}\) Ry is used for the convergence of the total energy of the system in the SCF calculations. Brillouin zone integration was performed using a Monkhorst-Pack grid with 9 \(\times\) 7 \(\times\) 1 k-points [45, 55]. A degauss value of 0.02 in Methfessel-Paxton smearing was used in the simulations [56]. In all cases, at least 15 Å of vacuum space was added above the substrate to eliminate interlayer long range interactions that may arise from the periodic image. Hydrogen storage on borophosphene is evaluated by calculating the binding energy given by \[E_{b}^{H_{2}}=\left(E_{BP+nH_{2}}-E_{BP}-E_{nH_{2}}\right)/n, \tag{1}\] \[E_{b}^{H_{2}}=\left(E_{BP+LM+nH_{2}}-E_{BP+LM}-E_{nH_{2}}\right)/n, \tag{2}\] for pristine and decorated borophosphene, respectively. The total energy \(E_{*}\) of the \(*\) material system is obtained from the DFT results, and \(n\) is the number of \(H_{2}\) molecules in the system. In order for borophosphene to adsorb the \(H_{2}\) molecules, this energy has to be negative. A higher magnitude of \(E_{b}\) implies that the substrate binds more strongly. To examine the interaction among the elements, the projected density of states (PDOS) is presented, and Bader charge analysis is used for quantitative analysis. Visualization figures are prepared using the XCrySDen package [57]. ## 3 Results and discussions ### Borophosphene For completeness and validation, pristine borophosphene is first studied to compare with results reported in the literature. The optimized borophosphene is shown in Figure 1(a) along with the band structure in Figure 1(b). The conventional unit cell with four atoms is also marked in the figure. Two relevant locations, which will be referred to later, are denoted by \(H_{B}\) and \(H_{P}\), located in the ring of four B atoms and the ring of four P atoms, respectively. The 32-atom substrate of 4 \(\times\) 2 cells was adopted from previous reports in the literature [44; 45]. The lattice parameters from the relaxed structure were measured as \(a=3.22\) Å and \(b=5.56\) Å. The lengths of the B-B, B-P and P-P bonds are stable at 1.66 Å, 1.84 Å, and 2.10 Å, respectively. These parameters are very close to those reported in the literature [42]. With a Dirac cone between the \(\Gamma\) and X points, the band structure is also similar to those reported earlier [42]. ### \(H_{2}\) storage in borophosphene Borophosphene and the \(H_{2}\) molecule together were stabilized starting from a number of unique locations of \(H_{2}\), as indicated in Figure 2(a). Binding energies (Eq. 1) for those initial locations were obtained between -0.029 and -0.047 eV/\(H_{2}\). The best binding configuration (\(E_{b}=-0.047\) eV) is shown in Figure 2(b), resulting in an \(H-H\) bond of 0.751 Å, with the molecule positioned at a height of 2.92 Å from the substrate. Borophosphene exhibits weak binding with \(H_{2}\) in comparison with graphene (\(E_{b}=-0.10\) eV) [12; 35], but closer to borophene (\(E_{b}=-45\) meV) [21] and phosphorene (\(E_{b}=-70\) meV) [34].
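For orientation, Eqs. (1)-(2) can be turned into a one-line helper. The snippet below is only an illustration added here (it is not part of the original work, and the numerical values are placeholders rather than data from this paper):

```python
def binding_energy_per_h2(e_total, e_substrate, e_n_h2, n):
    """Binding energy per H2 molecule following Eqs. (1)-(2):
    E_b = (E[substrate + n H2] - E[substrate] - E[n H2]) / n.
    For the decorated case, pass the relaxed decorated substrate as e_substrate."""
    return (e_total - e_substrate - e_n_h2) / n

# Placeholder total energies in eV (illustrative only, not values from this work):
e_bp_plus_h2 = -1231.75   # borophosphene + 1 H2, relaxed
e_bp = -1200.00           # pristine borophosphene supercell
e_h2 = -31.70             # isolated H2 molecule
print(binding_energy_per_h2(e_bp_plus_h2, e_bp, 1 * e_h2, n=1))  # ~ -0.05 eV/H2
```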
Binding energies of \(H_{2}\) molecule over different 2D materials are compared in Table 1 from literature data. Figure 1: Borophosphene: (a) relaxed monolayer of 4\(\times\)2 unit cells, and (b) band structure. To elucidate the interaction between borophosphene and \(H_{2}\) molecule, the projected density of state (PDOS) is depicted in Figure 3. Any significant interaction noticed in coherence with the low binding energy the \(H_{2}\) molecule over borophosphene. Bader charge analysis showed that a charge of only o.0092 e is transferred from the substrate to the \(H_{2}\) molecule. The borophosphene substrate stability is not affected due to \(H_{2}\) adsorption. ### \(H_{2}\) storage over decorated borophosphene #### 3.3.1 Decoration over borophosphene The pristine borophosphene was first decorated by the chosen light metal adatoms (Li, Na, Ca). Motivated by the literature, adatoms were placed at two locations (\(H\)s and \(H\)s ) around 3 A above the substrate as shown in Figure 4(a-c) [44; 45] and followed by the energy minimization. The binding energy of adatoms is determined by, \[E_{b}^{M}\ =\ (E_{BP\ +JM}\ -E_{BP}\ -jE_{M})/\lambda. \tag{3}\] In the above relation, \(E_{BP\ +JM}\) is energy of M-decorated borophosphene with \(j\) adatoms, \(E_{BP}\) is energy of borophosphene and \(E_{M}\) is energy of the single adatom. The equation will result in a negative binding energy in favorable decorations. For all the adatoms, \(H\)s was found to be more stable location inferred \begin{table} \begin{tabular}{l c c c} \hline System & \(E_{b}^{H_{2}}\) (eV/\(H_{2}\)) & \(Z_{H\ -sub}\) ( A) & \(R_{H\ -H}\) ( A) \\ \hline Pristine BP & -0.047 & 2.92 & 0.751 \\ \hline Borophene [21] & -0.045 & 2.95 & - \\ \hline Phosphorene [34] & -0.07 & 2.98 & 0.750 \\ \hline \end{tabular} \end{table} Table 1: The binding energy (\(E_{b}^{H_{2}}\)), height (\(Z_{H_{2}-sub}\)), and bond length (\(R_{H\ -H}\)) of \(H_{2}\) molecules for pristine 2D materials. Figure 2: (a) Different initial positions of \(H_{2}\) molecules over borophosphene, and (b) the best binding (\(E_{b}=-0.047\ eV\)) configuration. from the stronger binding. The Li binding energies over \(H_{B}\) and \(H_{P}\) were computed as 1.06 eV and 0.797 eV, little higher than reported values by Du et al. [44]. For Na, those energies were 2.25 eV and 2.11 eV, and for Ca, the energies were computed as 2.77 eV, 2.51 eV, respectively. The stable configurations of decorated substrates are shown in Figure 4(a-c). The Li, Na, and Ca adatoms are located at approximately 1.64 A, 2.07 A, and 1.95 A above the borophosphate substrate, respectively. The reported values in literature are summarized in Table 2. Figures 4 (d-f) also show the projected density of states (PDOS) of valence electrons before and after decoration. The PDOS indicate adatom to substrate charge transfer and interaction with the substrate. When the adatoms are placed, the peaks in the conduction band are shifted toward the valence band. As observed in Figure 4 (d), the BP sheet obtained the Li(s) electron and the shift of the peak occurred closer to the Fermi energy toward the valence band due to the charge transfer. Hybridization of Li with the BP sheet is noticed at around 2.5 eV above the Fermi level. Similar affects are observed as a result of Na and Ca decoration as shown in Figures 4 (e-f) indicating ionic bonds between the substrate and the adatoms. The bader charge analysis was utilized to quantitatively determine the charge transfer. 
The charge transfer was calculated to be 0.88 e from Li to BP and 0.88 e from Na to BP, and 1.39 e from Ca to BP substrate. The charge transfer was reported to be 0.3 e (Hirshfeld analysis) [44] in Li decoration, 0.851 e in Na decoration [45], and 1.40 e from Ca to BP [47]. Figure 3: PDOS of B(p), P(p), and H(s) in borophosphate after hydrogen adsorption. The Fermi energy is set to zero. #### 3.3.2 Hydrogen storage over decorated borophosphate To study the hydrogen storage over the decorated borophosphate substrate, \(H_{2}\) molecules were augmented over the adatom to obtain binding energy. From Eq. 2, binding energy of the first \(H_{2}\) on the decorated borophosphate was obtained as \(-\)0.27 eV/\(H_{2}\), \(-\)0.20 eV/\(H_{2}\), and \(-\)0.42 eV/\(H_{2}\), respectively, for Li, Na, and Ca functionalization. In the relaxed system, the \(H_{2}\) molecule was located 1.94 A above Li with the H\(-\)H bond being 0.756 A long. For Na and Ca decorated borophosphate, the \(H_{2}\) was stable at a height of 2.32 A and 2.45 A above the adatoms with a bond length of 0.753 A and 0.755 A, respectively (Table 3). The stable structure of \(H_{2}\) adsorption over decorated substrate are shown in Figure 5(a-c). The binding energies of \(H_{2}\) over decorated borophosphate substrates are \begin{table} \begin{tabular}{l c c c c c} \hline \hline Adatom & site & \(E_{b}^{\prime\prime}\) (eV/M) & \(Z\) (Å) & \(h\) (Å) & \(\Delta\)q (e) \\ \hline Li & \(H_{B}\) & 1.06 & 1.64 & 0.21 & 0.88 \\ Literature [44] & \(H_{B}\) & 0.97 & - & - & 0.30 \\ \hline Na & \(H_{B}\) & 2.25 & 1.16 & 0.22 & 0.88 \\ Literature [45; 47] & \(H_{B}\) & 0.68-0.838 & 2.04-2.19 & 0.19 & 0.851-0.88 \\ \hline Ca & \(H_{B}\) & 2.77 & 1.95 & 0.80 & 1-39 \\ Literature [47] & \(H_{B}\) & 0.81 & 1.58 & 0.61 & 1.40 \\ \hline \hline \end{tabular} \end{table} Table 2: The adatom binding energy (\(E_{b}^{\prime\prime}\)), height from substrate (Z), corrugation height (\(h\)), and charge transfer (\(\Delta\)q) for different adatoms and comparison with literature. Figure 4: Stable configuration of (a) Li, (b) Na and (c) Ca decorated borophosphate and (d-f) PDOS. The Fermi energy of the respective systems are set to zero. compared with borophene and phosphorene with similar decorating elements in Table 3. It can be noticed that the binding energy of \(H_{2}\) molecule is better in borophosphene than the later ones. Figure 5: Stable configuration of \(H_{2}\) adsorption over (a) Li decorated, (b) Na decorated, and (c) Ca decorated borophosphate, (d-f) relevant PDOS of the systems, (g-i) PDOS of adatoms in different systems. The Fermi energy of the respective systems are set to zero. The PDOS of the elements after \(H_{2}\) adsorption are presented in Figure 5 (d-f) with the Fermi energy being set to zero. The PDOS shows indicates hybridization of \(H_{2}\) with the adatoms above Fermi energy. Due to the charge transfer from adatoms to borophosphene, the cationic adatoms (Li\({}^{+}\), Na\({}^{+}\), Ca\({}^{+}\)) are binding sites for the \(H_{2}\) molecules. As a results, the \(H_{2}\) molecules are polarized and bound by cationic adatoms through electrostatic and van der Walls interactions [29]. The PDOS of the adatoms in different systems are compared with a single isolated adatom in Figure 5 (g-i). The comparison reflects Li \(\rightarrow\) substrate charge transfer during the decoration resulting in lower density of state. 
Figure 6: Hydrogenation with different numbers of molecules and dehydrogenation process of (a) Li decorated, (b) Na decorated, and (c) Ca decorated borophosphate layer. Sequentially multiple \(H_{2}\) molecules were added to the system to establish the highest number of \(H_{2}\) molecules adsorbed around each adatom over the decorated borophosphate substrate. It was observed that a maximum of 10, 12, and 20 \(H_{2}\) molecules could be adsorbed over Li, Na, and Ca decorated substrate, respectively. The hydrogenation of the decorated substrate is shown in Figure 6 for different numbers of \(H_{2}\) molecules. The distribution of positions of \(H_{2}\) molecules measured from adatoms is shown in Figure 7(a) along with the H-H bond lengths in Figure 7(b). The H-H bond lengths are calculated to be in the range of \(0.750-0.765\) A. The evolutions of average per molecule binding energy during hydrogenation is shown in Figure 8(a). The bond lengths of the \(H_{2}\) molecules and the binding energy indicate that there is no conventional Kubas type interaction [3]. In addition, the desorption temperature (\(T_{D}\)) of the \(H_{2}\) molecules is calculated from the von't Hoff equation given by [29, 60] \[T_{D}=\frac{E_{b}}{k_{b}}\frac{\Delta S}{R}-ln\,p^{\text{l}_{-1}}\quad, \tag{4}\] where \(E_{b}\) is the calculated adsorption energy (J/\(H_{2}\)). The symbols in the equation represent Boltzmann constant (\(k_{b}\)), entropy change of \(H_{2}\) from gas to liquid (\(\Delta S=75\)-\(44\)J/mol-K), and the universal gas constant (\(R=8.314\)J/mol-K). \(p\) is the equilibrium pressure taken as is 1 atm. The desorption temperature associated with the sequential adsorption is shown in Figure 8(b). From the above calculation, the temperature for onset of desorption (\(T_{D}\)) can be determined from binding energy of the last \(H_{2}\) molecules, and by using the binding energy of the first \(H_{2}\), the highest temperature (\(T_{D^{\prime}}\) ) for complete desorption can be obtained [60]. The temperatures for onset and full discharge was calculated as \(127-151\) K for Li decorated, \(125-254\) K for Na decorated, and \(140-531\) K for Ca decorated borophosphate, respectively (Figure 8(b)). Figure 7: (a) Distribution of \(H_{2}\) positions with respect to the adatom, and (b) H-H bond length of adsorbed \(H_{2}\) molecules in different decorated borophosphate. The dashed line corresponds to \(0.75\) Å. #### 3.3.3 Adatom overloading in borophosphene Overloading of the adatoms was studied through decoration at different locations with eight adatoms in the substrate. Li, Na, and Ca adatoms were adsorbed to borophosphene by an energy of 0.87 eV/Li, 1.82 eV/Na, and 2.43 eV/Ca, respectively, with the binding energies being little less than those of single adatom. Following the decoration, multiple \(H_{2}\) molecules were than adsorbed over the decorated Figure 8: (a) Adsorption energy (\(E_{b}\)) and (b) Desorption temperature (\(T_{D}\)) during sequential hydrogen adsorption over functionalized borophosphene surface. Figure 9: \(H_{2}\) adsorption on decorated borophosphene through (a) Li overloading, (b) Na overloading, and (c) Ca overloading. substrate. In the Li decorated substrate (BP+8Li), up to 48 \(H_{2}\) molecules could be adsorbed at an average of 6\(H_{2}\) per Li with -0.10 eV/\(H_{2}\) as average binding energy. Within same binding energy, up to 48, and 56 \(H_{2}\) molecules could be adsorbed over Na and Ca decorated substrates (BP+8Na, BP+8Ca), respectively (Figure 9). 
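As an illustration of Eq. (4), read as \(T_{D}=\frac{E_{b}}{k_{B}}\left(\frac{\Delta S}{R}-\ln p\right)^{-1}\), the desorption temperature at \(p=1\) atm can be estimated directly from a binding energy in eV. This back-of-the-envelope script is our own addition, not from the paper:

```python
import math

K_B = 8.617333e-5      # Boltzmann constant, eV/K
R = 8.314              # universal gas constant, J/(mol K)
DELTA_S = 75.44        # entropy change of H2 from gas to liquid, J/(mol K)

def desorption_temperature(e_b_ev, p_atm=1.0):
    """van't Hoff estimate, Eq. (4): T_D = (E_b / k_B) * (Delta_S / R - ln p)^(-1)."""
    return (e_b_ev / K_B) / (DELTA_S / R - math.log(p_atm))

for e_b in (0.20, 0.42):                    # |binding energies| in eV/H2
    print(e_b, round(desorption_temperature(e_b)))
# ~256 K and ~537 K, of the same order as the full-discharge temperatures
# quoted above for the Na and Ca decorated substrates (254 K and 531 K).
```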
The gravimetric density of \(H_{2}\) storage was computed by \[\rho=\frac{n\,m_{H}}{16\,\,ms\,+\,16\,\,m_{P}\,+\,8\,\,m_{H}\,+\,n\,\,m_{H}}, \tag{5}\] where \(m_{*}\) represent atomic mass of the \(*\) element and \(n\) being the number of \(H\) atoms in the system. The gravimetric density corresponding to this adsorption is calculated to be 6.22 % for Li decorated system. However, due to larger mass of Na and Ca, resulting gravimetric density is 5-34 % and 6.08 % in Na and Ca decorated borophosphate. The gravimetric densities can be further increased through decoration on both sides of the 2D borophosphene [44; 47]. #### 3.3.4 Diffusion barrier of adatoms to neighbor sites In adatom decoration, clustering of the adatoms with neighboring favorable is an issue. To evaluate the possibility of multiple adatoms against decoration, diffusion energy barrier from one site to the next neighbor site was calculated. Climbing image Nudged Elastic Band (CI-NEB) calculation was carried out in order to determine the energy barriers between the nearest neighbor stable locations (inset in Figure 10). The energy barriers for diffusion of Li, Na, and Ca atoms from one site to the other are depicted in Figure 10 along with the associated diffusion paths. A diffusion barrier of 0.57 eV is calculated that prohibits the Li atom to cluster from \(H_{B}\) site to the neighboring \(H_{B}\) site, the barrier from \(H_{B}\) site to \(H_{P}\) site is 0.44 eV, and that from \(H_{P}\) site to \(H_{P}\) site is 0.55 eV. The energy barrier values are reported in the range of 0.19 - 0.59 eV [44]). For the Na adatom, the energy barriers are obtained as 0.26 eV, 0.24 eV, and 0.36 eV for \(H_{B}\)-\(H_{B}\), \(H_{B}\)-\(H_{P}\), and \(H_{P}\)-\(H_{P}\) diffusion paths (0.22 eV, 0.14 eV, and 0.31 eV reported in [45]). For Ca, the energy barriers are obtained as 0.95 eV for B to B, 0.21 eV for B to P, and Figure 10: Diffusion barrier of adatoms across different sites along with the paths shown in inset figures. 0.26 eV for those paths. The thermal energy of the atom must be less than the energy barrier across a reaction path. The thermal energy can be computed from \(E=\frac{3}{2}KT\) using the Boltzmann constant \(K\) and temperature \(T\). This relation results in a thermal energy of 0.071 eV at 550 K. The diffusion energy barrier of the atoms along any path is well higher than this thermal energy. It is noteworthy that the diffusion energy barriers of all the adatoms are well above the thermal energy inferring a stable decoration during \(H_{2}\) desorption, including at the temperature associated with the full discharge (\(T_{DH}=531\,K\) for Ca). ## 4 Conclusions Using first principles based density functional theory, hydrogen storage performance of borophosphate was investigated. Pristine borophosphate offered very weak binding (0.047 eV) that was not acceptable for efficient hydrogen storage. To enhance the storage capacity Li, Na and Ca decoration was adopted. The adatoms were with the borophosphate strongly with binding energies in the range of \(1.06-2.77\) eV. Bader charge analysis revealed a charge transfer of 0.88\(e-1.39e\) from the adatom to the borophosphate sheet that contributed to the substrate-adatom interaction resulting in cationic state of the adatoms. Binding energy of \(H_{2}\) molecules over the decorated borophosphate was then calculated from energy minimization. 
The results showed that the adatom decoration significantly enhanced the \(H_{2}\) storage capacity in comparison with pristine borophosphene. The binding energies of the \(H_{2}\) molecule over the Li, Na and Ca decorated borophosphene were calculated in the range of -0.20 to -0.42 eV/\(H_{2}\). \(H_{2}\) adsorption within this range of binding energy is considered suitable for reversible storage. The PDOSs and Bader charge analysis were presented to unravel the role of adatoms in charge transfer resulting in improved interaction. The binding energies of the first \(H_{2}\) molecules over borophosphene substrates decorated with different elements were compared with reported results for borophene and phosphorene. The comparison showed that borophosphene adsorbed the \(H_{2}\) molecules more strongly than the other two with the same decoration. Further, the complete hydrogenation process was calculated through sequential addition of \(H_{2}\) molecules. The results yielded adsorption of 10, 12, and 20 \(H_{2}\) molecules at a single Li, Na, and Ca adatom over the 4 \(\times\) 2 borophosphene substrates, respectively, within an average binding energy of -0.10 eV/\(H_{2}\). H-H bond lengths were found to be within 0.750\(-\)0.765 Å. The dehydrogenation temperature for complete release of all the \(H_{2}\) molecules at atmospheric pressure was calculated using the van't Hoff equation. The dehydrogenation temperature was obtained in the range of \(125-530\) K for different adatom decorated substrates. To determine the maximum capacity of hydrogen storage, adatom overloading, i.e. decoration by several adatoms on the substrate, was also performed. This was pursued with 8 adatoms at the favorable sites over the supercell. The average binding energy of the adatoms reduced slightly compared to that of a single adatom. The results of \(H_{2}\) adsorption indicated that up to 48 \(H_{2}\) molecules were stored by Li and Na decoration, and 64 \(H_{2}\) molecules by Ca decoration, respectively, within the average binding energy of 0.1 eV/\(H_{2}\). Thus, decoration on a single side of the borophosphene substrate resulted in 6.22 %, 5.34 %, and 6.08 % gravimetric density with Li, Na and Ca decoration. The gravimetric density can be further increased by decoration on both sides of the substrate. Inter-site diffusion barriers were found to be larger by an order of magnitude than the thermal energy at desorption temperatures, indicating that clustering is unfavorable. The results may motivate further strategies to be developed for improved hydrogen storage performance in borophosphene. ## Acknowledgement Computational resources provided by Dr Harpreet Singh (SMS) are highly appreciated.
2305.00612
Asymptotics of harmonic functions in the absence of monotonicity formulas
In this article, we study the asymptotics of harmonic functions. A typical method is by proving monotonicity formulas of a version of rescaled Dirichlet energy, and use it to study the renormalized solution -- the Almgren's blowup. However, such monotonicity formulas require strong smoothness assumptions on domains and operators. We are interested in the cases when monotonicity formulas are not available, including variable coefficient equations with unbounded lower order terms, Dirichlet problems on rough (non-$C^1$) domains, and Robin problems with rough Robin potentials.
Zongyuan Li
2023-05-01T00:33:17Z
http://arxiv.org/abs/2305.00612v1
# Asymptotics of harmonic functions in the absence of monotonicity formulas ###### Abstract. In this article, we study the asymptotics of harmonic functions. A typical method is by proving monotonicity formulas of a version of rescaled Dirichlet energy, and use it to study the renormalized solution -- the Almgren's blowup. However, such monotonicity formulas require strong smoothness assumptions on domains and operators. We are interested in the cases when monotonicity formulas are not available, including variable coefficient equations with unbounded lower order terms, Dirichlet problems on rough (non-\(C^{1}\)) domains, and Robin problems with rough Robin potentials. Key words and phrases:Unique continuation, asymptotic expansion, doubling index, Almgren's monotonicity formula 2010 Mathematics Subject Classification: 35J15, 35J25, 35B40 Z. Li was partially supported by an AMS-Simons travel grant. ## 1. Introduction We discuss asymptotics of solutions to elliptic equations near both interior and boundary points. Let us start from a simple case. Consider a harmonic function \(u\) in a bounded domain \(\Omega\subset\mathbb{R}^{d}\). Near an interior point \(x_{0}\in\Omega\), we know that \(u\) is analytic: \[u =\sum_{\alpha}\frac{D^{\alpha}u(x_{0})}{\alpha!}(x-x_{0})^{\alpha} =\sum_{k}P_{k}(x-x_{0})\] \[=P_{N}(x-x_{0})+O(|x-x_{0}|^{N+1}). \tag{1.1}\] Here \(P_{k}\) is a homogeneous harmonic polynomial of degree \(k\) and \(P_{N}\) represents the leading term. As is commonly known, expansion formulas like (1.1) can be useful, which are, however, not always available in the presence of variable coefficient operators or rough domains. For instance, under polar coordinates \((r,\theta)\) of \(\mathbb{R}^{2}\), consider \[u=\mathrm{Re}\frac{re^{i\theta}}{\log(re^{i\theta})},\quad r>0,\theta\neq\pi. \tag{1.2}\] See Figure 1. One can see that \(u\) is harmonic in the enclosed region in Figure 1, and equals to zero on the boundary given by \(r=e^{\theta\tan\theta}\), except at \((r,\theta)=(1,0)\) where \(u\) has a pole. Clearly it is impossible to write down an expansion like (1.1), due to the log drift. To capture the asymptotics of functions like (1.2), one typically uses the "Almgren's blowup" -- the rescaled limit as \(\lambda\to 0\) of \[u_{\lambda}(\cdot)=\frac{u(\lambda\cdot)}{(\oint_{\beta_{\lambda}}|u|^{2})^{1/ 2}}. \tag{1.3}\] For \(u\) in (1.2), one can simply see that \((\fint_{\partial B_{\lambda}}|u|^{2})^{1/2}\approx\lambda\log(\lambda)\) and \(u_{\lambda}\to Cr\cos(\theta)\) as \(\lambda\to 0\), where \(C\) is a normalizing factor. Actually such convergence is guaranteed by a more general theorem -- the Almgren's monotonicity formula on convex domains. Let us describe the motivation and method. In general, one hopes to prove that the family \(\{u_{\lambda}(\cdot)\}_{\lambda\in(0,1)}\) has one or more limits. For this, we bound a rescaled Dirichlet energy like \[F(r)=\frac{rD(r)}{H(r)}=\frac{r\int_{B_{r}}|\nabla u|^{2}}{\int_{\partial B_{r }}|u|^{2}}. \tag{1.4}\] In [4], Almgren observed that if \(\Delta u=0\) in \(B_{1}\), \(F(r)\) is monotonically increasing for \(r\in(0,1)\). From this, \(\{u_{\lambda}\}_{\lambda\in(0,1)}\) is uniformly bounded in \(H^{1}\), and hence is compact in \(L_{2}\). In literature, a quantity like (1.4) is usually called a (generalized) Almgren's frequencie. Its monotonicity property play an important role in blowup analysis. In this work, we are interested in three more general problems. **Variable coefficient equations, interior**. 
\[D_{i}(a_{ij}D_{j}u)+W_{i}D_{i}u+Vu=0\quad\text{in }B_{1}, \tag{1.5}\] where \(a_{ij}\) are symmetric, bounded, and uniformly elliptic. In [7], Garofalo-Lin proved that if \(a_{ij}\in C^{0,1},W_{i},V\in L_{\infty}\), a modified version of \(F\) in (1.4) is almost monotone. The condition \(a_{ij}\in C^{0,1}\) cannot be improved, due to the classical counterexample in unique continuation. Later, we will discuss the cases with unbounded \(W_{i},V\). **Dirichlet problem, boundary**. Suppose \(\Omega\subset\mathbb{R}^{d}\) and \(0\in\partial\Omega\). \[\begin{cases}\Delta u=0\quad\text{in }\Omega\cap B_{1},\\ u=0\quad\text{on }\partial\Omega\cap B_{1}.\end{cases} \tag{1.6}\] When \(\Omega\) is half space or a cone, the monotonicity formula holds. For curved domains, in [13, 2, 1], certain variations of \(F\) in (1.4) was proved to be almost monotone on \(C^{1,1}\), convex, and \(C^{1,Dini}\) domains, respectively. Some discussions on \(C^{1}\) domains were also made in [15]. It is worth mentioning that, the continuity of the normal direction \(\boldsymbol{n}|_{\partial\Omega}\) is essential in deriving the monotonicity formula, which is not available for rough domains, for instance general Lipschitz domains. **Neumann and Robin problem, boundary**. Suppose \(\Omega\subset\mathbb{R}^{d}\) and \(0\in\partial\Omega\). \[\begin{cases}\Delta u=0\quad\text{in }\Omega\cap B_{1},\\ \frac{\partial u}{\partial\boldsymbol{n}}=\eta u\quad\text{on }\partial \Omega\cap B_{1}.\end{cases} \tag{1.7}\] Figure 1. Nodal set of \(\operatorname{Re}(z/\log z)\) Again, when \(\Omega\) is half space or a cone and when \(\eta=0\) (Neumann), the monotonicity formula holds. In [1, 6], this was further generalized to the case when \(\partial\Omega\in C^{1,1}\) and \(\eta\in C^{0,1}\) (or \(\eta\in W^{1,1}\) with some pointwise control on \(\nabla\eta\)). See also a sharp quantitative version in [12]. In all these works, the differentiability of \(\eta\) cannot be dropped, which leaves the asymptotic analysis of (1.7) with rough \(\eta\) widely open, even in the case when \(\eta\) is non-negative and bounded. For instance, see the open question in [5]. ## 2. Alternative for monotonicity formula: convergence of doubling index ### Robin problems and variable coefficient equations In a recent work, we prove the following singular set estimate. **Theorem 2.1** ([11], Theorem 1.1 (b)).: _Let \(\Omega(\subset\mathbb{R}^{d})\in C^{1,1}\), \(d\geq 2\), and \(\eta\in L_{p}(\partial\Omega)\) for some \(p>2(d-1)\). Then for any nontrivial solution \(u\in H^{1}\) to (1.7), we have_ \[\dim(\{u=|\nabla u|=0\}\cap\Omega\cap B_{1/2})\leq d-2.\] Such estimate relies on blowup analysis near both interior and boundary points. As mentioned before, monotonicity formulas are only proved when \(\eta\) is differentiable. In [11], we first construct an auxiliary function and reduce the problem to blowup analysis for (1.5) with \(a_{ij}\in C^{0,1}\) and \(W_{i},V\in L_{q}\) with \(q>d\). However, there is still no monotonicity formula available -- recall that the work of Garofalo-Lin [7] requires \(W_{i},V\in L_{\infty}\). This requires us to design more robust methods for blowup analysis. It turns out the Federer's dimension reduction argument, which we used to prove Theorem 2.1, only needs the following: 1. a uniform \(C^{1}\) estimate for the "rescaled" boundary value problems; 2. compactness of the blowup sequence (1.3), as \(\lambda\to 0\); 3. the homogeneity of the blowup limit of (1.3), along subsequences. 
In [11], (a) was achieved with the aid of the aforementioned auxiliary function and a standard \(W_{p}^{2}\) regularity theory. For (b) and (c), which are typically proved via monotonicity formula, we prove the following alternative. **Lemma 2.2** ([11] Lemma 4.2).: _Let \(u\in H^{1}\) be a weak solution to (1.5) with_ \[\fint_{B_{r}}\left(|a_{ij}-a_{ij}(0)|^{2}+r^{2}|W|^{2}+r^{4}|V|^{2}\right) \to 0,\quad\text{as }r\to 0. \tag{2.1}\] _Then,_ \[\log_{2}\left(\frac{\fint_{B_{2r}}|u|^{2}}{\fint_{B_{r}}|u|^{2}}\right)^{1/2} \to\mathbb{N}\cup\{+\infty\},\quad\text{as }r\to 0.\] _Remark 2.3_.: 1. The condition (2.1) appears naturally after scaling: if \(u\) solves (1.5), then \(u_{\lambda}\) solves \[D_{i}(a_{ij}(0)D_{j}u_{\lambda})=D_{i}((a_{ij}(0)-a_{ij}(\lambda\cdot))D_{j}u_ {\lambda})-\lambda W_{i}(\lambda\cdot)D_{i}u_{\lambda}-\lambda^{2}V(\lambda \cdot)u_{\lambda}.\] 2. The condition (2.1) is guaranteed by \(a_{ij}\in C^{0}\), \(W_{i}\in L_{d}\), and \(V\in L_{d/2}\). 3. We will use Lemma 2.2 together with (SUCP) in [9, 3] -- if \(a_{ij}\in C^{0,1},W_{i}\in L_{d+\varepsilon},V\in L_{d/2}\) (or \(V\in L_{1+\varepsilon}\) when \(d=2\)), the limit in Lemma 2.2 has to be finite. Here in Lemma 2.2, we study the doubling index \[N(r):=\log_{2}\frac{(\oint_{B_{2r}}|u|^{2})^{1/2}}{(\oint_{B_{r}}|u|^{2})^{1/2}}\] instead of the frequency \(F(r)\). Note that when \(u\) is exactly a homogeneous polynomial of degree \(k\), \(N(r)\equiv k\). Hence, Lemma 2.2 can be interpreted as "the existence of the limiting homogeneity". Simple computation shows for harmonic functions, near an interior point \[N(r)=\int_{r}^{2r}\frac{F(s)}{s}\,ds.\] Hence, the monotonicity of \(F\) implies the convergence of \(N\), as \(r\to 0\). However, the condition in Lemma 2.2 is much weaker than that of a monotonicity formula -- recall in [7], it was required \(W_{i},V\in L_{\infty}\). Hence, we expect the conclusion of Lemma 2.2 can serve as a more robust tool in blowup analysis. The proof of Lemma 2.2 borrows ideas of Lin-Shen [14] when studying homogenization. Essentially, it is relies on fact that the monotonicity formula of harmonic functions has a rigidity property. **Lemma 2.4**.: _Suppose \(u\) is a harmonic function in \(B_{1}\). Then its Almgren's monotonicity function \(F\) ((1.4)) is either strictly increasing for \(r\in(0,1)\), or for some \(k\in\mathbb{N}\), \(F\equiv k/\log 2\) and \(u\) is a homogeneous harmonic polynomial of degree \(k\)._ From Lemma 2.4, it can be shown that for any non-integer real number \(\mu\), as \(r\) decreases, after certain small scale, the doubling index \(N(r)\) of \(u\) can no longer jump from below \(\mu\) to above. Hence, \(N(r)\) is trapped near an integer, from which Lemma 2.2 follows. **Dirichlet problem near a conical point.** In a recent joint work with Dennis Kriventsov, we also study the boundary asymptotics of harmonic functions. A long-standing conjecture in boundary unique continuation asks: _Conjecture 2.5_.: Suppose \(u\in H^{1}\) is a weak solution to (1.6) on a Lipschitz domain \(\Omega\). Then, if \(\{\partial u/\partial\boldsymbol{n}=0\}\cap\partial\Omega\) has a postive surface measure, we must have \(u\equiv 0\). The conjectured was proved in the case when \(\Omega\in C^{1,1},C^{1,Dini}\), and \(C^{1}\) in [13, 2], and [1], via several versions of Almgren's monotonicity formulas. For such formulas, the continuity of \(\boldsymbol{n}|_{\partial\Omega}\) seems inevitable, which is typically not true on general Lipschitz domains. 
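To make the statement \(N(r)\equiv k\) for homogeneous harmonic polynomials, and the rigidity behind Lemma 2.4, concrete, one can verify both quantities symbolically for an explicit example. The sketch below is ours (not from the paper) and works in two dimensions with \(u=\operatorname{Re}(x+iy)^{3}\):

```python
# Sketch (ours): for the degree-3 homogeneous harmonic polynomial u = Re((x+iy)^3),
# i.e. u = r^3 cos(3*theta) in polar coordinates, the Almgren frequency (1.4) is
# constant in r, and the doubling index N(r) defined above equals the degree 3.
import sympy as sp

r, s, t, R = sp.symbols('r s t R', positive=True)
k = 3
u = s**k * sp.cos(k * t)                           # u in polar coordinates (s, t)
grad_sq = sp.diff(u, s)**2 + (sp.diff(u, t) / s)**2

D = sp.integrate(sp.integrate(grad_sq * s, (s, 0, R)), (t, 0, 2 * sp.pi))
H = sp.integrate((u.subs(s, R))**2 * R, (t, 0, 2 * sp.pi))
print(sp.simplify(R * D / H))                      # frequency F(R) -> 3, independent of R

ball_avg = lambda rho: sp.integrate(sp.integrate(u**2 * s, (s, 0, rho)),
                                    (t, 0, 2 * sp.pi)) / (sp.pi * rho**2)
N = sp.log(sp.sqrt(ball_avg(2 * r) / ball_avg(r)), 2)
print(sp.simplify(N))                              # doubling index N(r) -> 3
```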
We aim to discover the case when \(\boldsymbol{n}\) is not continuous. A point \(x_{0}\in\partial\Omega\) is called conical, if \[\frac{(\Omega-x_{0})\cap B_{r}}{r}\to\Gamma_{x_{0}}\ =\text{cone}.\] Clearly, all differentiable \(C^{1}\) points are conical with \(\Gamma\) being the tangent plane. Moreover, any boundary point of a convex domain is conical, due to the monotonicity. In [10], we prove the following. **Theorem 2.6** ([10]).: _Suppose \(0\in\partial\Omega\) is a conical point and \(u\in H^{1}\) is a nontrivial solution to (1.6). Then the limiting homogeneity of \(u\) exists. That is,_ \[\log_{2}\frac{(\oint_{B_{2r}\cap\Omega}|u|^{2})^{1/2}}{(\oint_{B_{r}\cap\Omega }|u|^{2})^{1/2}}\to\{\mu_{j}\}_{j}\cup\{+\infty\}\quad\text{as $r\to 0$},\] _where \(\mu_{j}\) are numbers determined by the spectrum of \(\Delta\) on the limit cone \(\Gamma\)._ It is worth mentioning that, our theorem only assumes an one-point condition at \(0\in\partial\Omega\) -- no smoothness of \(\Omega\) is needed. ## 3. Uniqueness of blowup and expansion formula _Problem 3.1_.: When is the subsequence limit in (1.3) unique? One the one hand, naturally one may further ask. _Problem 3.2_.: Does a monotonicity formula, which guarantees the existence of blowup limits, also guarantees the uniqueness of such limit? The answer is yes when the dimension is two. This is simply due to the fact that in 2D, all eigenspaces of the Laplace operator is one-dimensional. In higher dimension, the answer is no in general. In [10], we constructed a convex domain \(\Omega\) and a harmonic function \(u\) satisfying the Dirichlet problem (1.6) on it. From [2], the Almgren's monotonicity formula holds due to the convexity of the domain. However, along different subsequences, the blowup limits can be different. Actually, our \(\{u_{\lambda}\}_{\lambda\in(0,1)}\) rotates within a two-dimensional eigenspace. On the other hand, clearly an expansion formula like (1.1) leads to the uniqueness of blowup limit. One can simply see that the limit has to be exactly the leading term \(P_{N}\) upto a normalization. For Dirichlet problems, in [10] we prove that a slightly stronger condition than "conical" -- Holder conical will lead to such an expansion formula. A point \(x_{0}\in\partial\Omega\) is called Holder conical, if \[\frac{\operatorname{dist}((\Omega-x_{0})\cap P_{r},\Gamma_{x_{0}})}{r}\leq Cr ^{\alpha},\quad\text{for some $\alpha>0$.}\] **Theorem 3.3** ([10]).: _Suppose \(0\in\partial\Omega\) is Holder conical. Then for any non-trivial solution \(u\) to (1.6), either \(u=O(|x|^{N})\) for all \(N>0\), or there exists a \(\mu_{j}\)-homogeneous harmonic polynomial \(P_{\mu_{j}}\) on the cone \(\Gamma\), such that_ \[u(x)=P_{\mu_{j}}(x)+v(x),\quad\text{and $(\fint_{B_{j}\cap\Omega}|v|^{2})^{1/2} \leq Cr^{1+\varepsilon\alpha}$.}\] For Robin problem (1.7) and interior problem with variable coefficients (1.5), similar property holds -- certain scaling subcritical assumptions lead to uniqueness. We expect the following: _In Lemma 2.2, if we replace (2.1) with \(a_{ij}\in C^{\alpha}\), \(W_{i}\in L_{d+\varepsilon}\), and \(V\in L_{d+\varepsilon}\), then either \(u=O(|x|^{N})\) for all \(N>0\), or for some \(k\)-homogeneous harmonic polynomial \(P_{k}\), we can expand \(u(x)=P_{k}(x)+O(|x|^{k+\varepsilon^{\prime}})\)._ The proof is expected to be similar to the arguments in [8]. Actually, a gradient estimate is also expected for higher order terms. 
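For intuition about the admissible homogeneities \(\mu_{j}\) in Theorem 2.6 and Theorem 3.3 (this example is ours, not from the paper): when \(\Gamma\) is a planar sector of opening angle \(\omega\), the harmonic functions on \(\Gamma\) vanishing on its two boundary rays are spanned by \(r^{j\pi/\omega}\sin(j\pi\theta/\omega)\), so \(\mu_{j}=j\pi/\omega\). A short symbolic check of harmonicity:

```python
# Check (ours) that u = r^a * sin(a*theta) is harmonic for any exponent a > 0,
# e.g. a = pi/omega for a planar sector of opening omega; u vanishes at theta = 0
# and theta = omega = pi/a, i.e. on the two boundary rays of the sector.
import sympy as sp

r, theta, a = sp.symbols('r theta a', positive=True)
u = r**a * sp.sin(a * theta)
laplacian = sp.diff(u, r, 2) + sp.diff(u, r) / r + sp.diff(u, theta, 2) / r**2
print(sp.simplify(laplacian))        # expected output: 0
```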
From these expansion results, combining the arguments in [8] and [11], one can further prove the stratification of singular sets, which is stronger than our Hausdorff dimension estimate in Theorem 2.1.
2302.05915
Will Admins Cope? Decentralized Moderation in the Fediverse
As an alternative to Twitter and other centralized social networks, the Fediverse is growing in popularity. The recent, and polemical, takeover of Twitter by Elon Musk has exacerbated this trend. The Fediverse includes a growing number of decentralized social networks, such as Pleroma or Mastodon, that share the same subscription protocol (ActivityPub). Each of these decentralized social networks is composed of independent instances that are run by different administrators. Users, however, can interact with other users across the Fediverse regardless of the instance they are signed up to. The growing user base of the Fediverse creates key challenges for the administrators, who may experience a growing burden. In this paper, we explore how large that overhead is, and whether there are solutions to alleviate the burden. We study the overhead of moderation on the administrators. We observe a diversity of administrator strategies, with evidence that administrators on larger instances struggle to find sufficient resources. We then propose a tool, WatchGen, to semi-automate the process.
Ishaku Hassan Anaobi, Aravindh Raman, Ignacio Castro, Haris Bin Zia, Dami Ibosiola, Gareth Tyson
2023-02-12T13:46:34Z
http://arxiv.org/abs/2302.05915v1
# Will Admins Cope? Decentralized Moderation in the Fediverse ###### Abstract As an alternative to Twitter and other centralized social networks, the Fediverse is growing in popularity. The recent, and polemical, takeover of Twitter by Elon Musk has exacerbated this trend. The Fediverse includes a growing number of decentralized social networks, such as Pleroma or Mastodon, that share the same subscription protocol (ActivityPub). Each of these decentralized social networks is composed of independent instances that are run by different administrators. Users, however, can interact with other users across the Fediverse regardless of the instance they are signed up to. The growing user base of the Fediverse creates key challenges for the administrators, who may experience a growing burden. In this paper, we explore how large that overhead is, and whether there are solutions to alleviate the burden. We study the overhead of moderation on the administrators. We observe a diversity of administrator strategies, with evidence that administrators on larger instances struggle to find sufficient resources. We then propose a tool, WatchGen, to semi-automate the process. ## 1 Introduction The Fediverse encompasses a group of increasingly popular platforms and technologies that seek to provide greater transparency and openness on the web [18, 30, 34, 13]. Well-known Fediverse platforms include microblogging services (_e.g._ Pleroma [38], Mastodon [33]) and video sharing platforms (_e.g._ PeerTube [37]). The acquisition of Twitter by Elon Musk [11] has exacerbated this popularity with a large migration of Twitter users to the Fediverse [8]. In Fediverse social networks, individuals or organisations can install, own, and manage their own independent servers, also known as **instances** [15, 54]. For these instances to interact, they rely on **federation** [41], whereby instances interconnect in a peer-to-peer fashion to exchange posts. Note that this allows users to exchange content across platforms. This results in a physically decentralized model that is logically interconnected, where users can interact globally. Unfortunately, this creates challenges for instance **administrators**, as activities on one instance impact others via federation. For example, recent work has shown that hateful material generated on one instance can rapidly spread to others [53]. To overcome this, most Fediverse social network implementations have in-built **federation policies**. These policies enable administrators to create rules that ban or modify content from instances matching certain criteria, _e.g._ banning content from a particular instance or associating it with warning tags. Although a powerful tool, this imposes an additional overhead on administrators [26, 14, 6]. Thus, we argue it is vital to better understand this process, and propose ways to improve it. This paper examines administrator activities in the Fediverse. We focus on Pleroma, a federated microblogging platform with similar functionality to Twitter. We collect a large-scale dataset covering 10 months: this includes 1,740 instances, 133.8k users, 29.9m posts, associated metadata, and importantly, the policies set up by the administrators. We find that instances are often "understaffed", with the majority of instances only having a single administrator, and recruiting no other moderators to assist, despite many having over 100K posts. This leads us to conjecture that some administrators may be overwhelmed.
Indeed, we find that instance administrators often take many months before applying policies against other instances, even in cases where they exhibit clearly controversial traits (_e.g._ posting a large number of hate words). We therefore turn our attention to the policy configurations employed. We observe a growing number of instances enacting a wide range of policy types. Common are'maintenance' policies, such as those which automatically delete older posts (ObjectAgePolicy), as well as those aimed at preventing the spread of certain content (_e.g._ HashtagPolicy, which flags up posts with certain hashtags). We further observe a range of bespoke policies created by administrators, via the SimplePolicy, which can be configured to trigger a range of actions based on certain rules (_e.g._ blocking all connections from certain instances). The laborious nature of this moderation work leads us to explore automated techniques to assist administrators. We build a set of models to predict administrator actions. We embed them in WatchGen, a tool that can propose a set of instances for administrators to focus their moderation efforts on. To the best of our knowledge, this is the first study of Fediverse administrators. We make the following observations: 1. We find a diverse range of 49 policies used by administrators, capable of performing various management and moderation tasks. Despite this, we see that 66.9% of instances are still running, exclusively, on the default policies alone (Section 4). 2. The number of administrators does not grow proportionately with the number of posts (Section 5). This seems to impact moderation. For example, it takes an average of 82.3 days for an administrator to impose a policy against an instance after it first encounters it, even for well-known and highly controversial ones (_e.g._ gab.com[5]). 3. Intuitive features, such as the number of mentions and frequent use of hate words, are good indicators that an instance will later have a policy applied against it (Section 6). This suggests that there are key traits that garner more attention by moderators. 4. We show that it is possible to predict (F1=0.77) which instances will have policies applied against them (Section 6) and design _WatchGen_, a tool that flags particular instances for administrators to pay special attention to. ## 2 Pleroma: Overview Pleroma is a lightweight decentralized microblogging server implementation with user-facing functionality similar to that of Twitter. In contrast to a centralized social network, Pleroma is a federation of multiple independently operated servers (aka instances). Users can register accounts on these instances and share posts with other users on the same instance, or on different instances. Through these instances, users are able to register accounts and share posts (called statuses) to other users on the same instance, other Pleroma instances, or instances from other Fediverse platforms, most notably Mastodon. _Federation._ We refer to users registered on the same instance as **local**, and users on different instances as **remote**. A user on one instance can follow another user on a separate instance. Note that a user registered on their local instance does not need to register with the remote instance to follow the remote user. When the user wants to follow a user on a remote instance, the local instance subscribes to the remote user on behalf of the local user using an underlying subscription protocol (ActivityPub[2]). 
This process of peering between instances in the Fediverse is referred to as **federation**. The federated network includes instances from Pleroma and other platforms (_e.g._ Mastodon) that support the same subscription protocol (ActivityPub). Accordingly, Pleroma instances can federate and target their policies at non-Pleroma instances. The resulting network of federated instances is referred to as the _Fediverse_ (with over 23k servers [16]). _Policies._ Policies affect how instances federate with each other through different rule-action pairs. These allow certain actions to be executed when a post, user, or instance matches pre-specified criteria. For example, the SimplePolicy can perform a range of actions (such as rejecting connections) when a remote instance matches certain criteria. Note, there are numerous in-built policies, but tech-savvy administrators can also write their own bespoke policies. _Administrators._ Instances are hosted and managed by specialized users called administrators. By default, the creator of an instance will take on the role of the administrator; however, it is also possible to delegate such responsibilities to multiple others. Instance administrators are responsible for carrying out the day-to-day administrative tasks on the instances. These include managing the front-end, users, uploads, database, and emoji packs, and carrying out administrative email tasks. The instance administrator is also responsible for accepting new user registrations and removing users where necessary. The administrator updates and backs up the instance, sets the terms of service, and retains the ability to shut down the instance. One essential responsibility of the instance administrator is the moderation of content (although they can also assign the role to other users called _moderators_). This can make instance administration a cumbersome task, and administrators a very important part of the Fediverse. ## 3 Data Collection _Instance & Administrator Dataset._ Our measurement campaign covers 16th Dec 2020 - 19th Oct 2021. We first compile a list of Pleroma instances by crawling the directory of instances from distsn.org and the-federation.info. We then capture the list of instances that each Pleroma instance has ever federated with using each instance's Peers API.1 Note, this includes both Pleroma and non-Pleroma instances. In total, we identify 9,981 instances, out of which 2,407 are Pleroma and the remainder are non-Pleroma (_e.g._ Mastodon). Footnote 1: \(\langle\)instance.uri\(\rangle\)/api/v1/instance/peers We then collect metadata for each Pleroma instance every 4 hours via their public API.2 We record the list of administrators and any delegated moderators. We also obtain the number of users on the instance, the number of posts, the enabled policies, the applied policies as well as the instances targeted by these policies, and other meta information. Footnote 2: \(\langle\)instance.uri\(\rangle\)/api/v1/instance/ From the 2,407 Pleroma instances, we are able to gather data from a total of 1,740 instances (72.28%). For the remaining 667 instances: 65.1% have non-existent domains, 17.9% are not found (404 status code), 6.4% have private timelines (403), 4.5% result in Bad Gateway (502), 1.3% in Service Unavailable (503), and under 1% return Gone (410).
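To make the collection pipeline concrete, the following is a minimal sketch of how such 4-hourly snapshots could be taken against the public endpoints listed in footnotes 1-2. This is an illustrative reconstruction, not the authors' crawler; the instance list and scheduling are assumed.

```python
import requests

def snapshot(instance_uri):
    """Fetch instance metadata and the peer list from the public, Mastodon-compatible API."""
    base = f"https://{instance_uri}"
    meta = requests.get(f"{base}/api/v1/instance", timeout=10).json()        # instance metadata (user/post counts, etc.)
    peers = requests.get(f"{base}/api/v1/instance/peers", timeout=10).json() # list of federated domains
    return meta, peers

# Placeholder instance list; in practice this comes from distsn.org / the-federation.info.
for uri in ["example-instance.social"]:
    try:
        meta, peers = snapshot(uri)
        print(uri, meta.get("stats", {}), len(peers))
    except (requests.RequestException, ValueError) as err:
        print(uri, "unreachable or malformed:", err)
```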
_User Timelines._ Users in Pleroma have three timelines: (_i_) a _home_ timeline, with posts published by the accounts that the user follows (local and remote); (_ii_) a _public_ timeline, with all the posts generated within the local instance; and (_iii_) the _whole known network_, with _all_ posts that have been retrieved from remote instances that the local users follow. Note, the _whole known network_ is not limited to remote posts that a particular user follows: it is the union of remote posts retrieved by all users on the instance. We use the public Timeline API3 to gather post data from 819 instances (the remaining 912 instances have either no posts or unreachable public timelines). Footnote 3: \(\langle\)instance.uri\(\rangle\)/api/v1/timelines/public?local=true _Ethics._ Our dataset covers Pleroma instances and their administrators. We exclusively focus on the policies that these administrators set, and do not investigate other aspects of administrator behavior (_e.g._ the posts they share). All data is available via public APIs. We emphasize that administrators, themselves, are the ones who control access to these APIs. Hence, the administrators covered in this paper consent for others to use this data. Further, the policies studied do not work on a per-user granularity and, thus, we cannot infer anything about individual users. All data is anonymized before usage, and it is stored within a secure silo. ## 4 Exploring Policy Configurations _Policy Footprint._ We first quantify the presence of policies across instances. In total, we observe 49 unique policy types. From our 1.74k Pleroma instances, we retrieve policy information from 93.2% of instances (the remainder do not expose their policies). These cover 94.2% of the total users and 94.5% of all posts. Figure 1 shows the distribution of the top 15 policy types enabled by the administrators across instances and the percentage of users signed up within those instances as well as the posts on the instances. We see a wide range of policies with diverse functionalities and varying coverage based on which metric is considered. For instance, whereas the ObjectAgePolicy (which performs an action on a post once it reaches a certain age) is installed on 74.8% of instances, this only covers 52.4% of the users. In contrast, the KeyWordPolicy (which performs an action on any posts containing a given keyword) covers 18.8% of users, but just 3% of instances. Critically, there is a highly uneven distribution of policies, with the top-5 covering 92.3% of all instances, 73.6% of users and 88.8% of the posts. _Default Policies._ Default policies come auto-enabled with new installations. Prior to version 2.3.0 in March 2021, only the ObjectAgePolicy and NoOpPolicy are enabled by default. Since version 2.3.0, the TagPolicy and HashtagPolicy are also enabled with a new installation (or upgrade). 66.9% of instances only have these default policies running. Relying solely on default policies may indicate several things. For example, administrators may be unaware of management and moderation functionalities, unable to use them, or simply lack sufficient time. Alternatively, they may actively choose not to use them. Note, while the TagPolicy allows tagging user posts as sensitive (default: nsfw), the HashtagPolicy allows the tagging of hashtags (_e.g._ nsfw sensitive). We find 54.6% and 34.3% of instances enabling these policies respectively. The other Pleroma default policy is the NoOpPolicy. This allows any content to be imported.
This describes the default state of a new instance. Interestingly, we see administrators paying more attention to this policy: 89.7% of the instances have actively disabled it.4 This suggests that administrators are aware and concerned about importing undesirable content. Footnote 4: Note, this is overridden if a user enables any other policy. _Non-Default Policies._ Non-default policies are those that instance administrators have to actively enable. Instances with these policies may indicate a more proactive administrator. We find 45 non-default policies during our data collection period. The most powerful policy available is the SimplePolicy, enabled on 28.8% of instances. This policy allows administrators to apply a wide range of actions against specific instances (_e.g._ gab.com). The most impactful and common is the reject action.5 56.9% of instances that enable the SimplePolicy employ the _reject_ action. Figure 1: The top 15 policies and percentage of instances that use each policy (sorted by the percentage of instances). Interestingly, although we see only 28.8% of instances with the SimplePolicy enabled, its application affects 85.4% of users and 90.3% of the posts on the Pleroma platform. We see noteworthy instances being amongst the top targets of this policy (_e.g._ kiwifarms.cc and anime.website), which are all commonly understood to share controversial material. Interestingly, only 18.5% of instances with the SimplePolicy applied against them are from the Pleroma platform (most are from Mastodon [39]). This means that 81.5% of the recipients are from federated instances outside of Pleroma. _Policy Growth._ We next look at how the use of policies has changed over time. We conjecture that the longer administrators run their instances, the more experienced they become. As such, we expect to see greater application of policies. Here we focus on the 5 most popular policies as they account for 92.3% of the instances, 73.6% of users and 88.8% of the posts. For completeness, we include the sum of the other less popular policies too. Figure 2 presents the percentage of instances that activate each policy over time. Across our measurement period, we observe a growth of 40% in the total number of policies used. This suggests that the use of policies is becoming more common. 28.5% of these policies are introduced by new instances coming online, with newly installed default policies, _e.g._ ObjectAgePolicy, TagPolicy and HashtagPolicy. The remainder are instantiated by pre-existing instance administrators that update their policies, suggesting a relatively active subset of administrators. We also inspect the growth on individual instances. Overall, 42% of instances add policies during our measurement period. Of these instances, 52.3% enable only one extra policy and we see only a small minority (1.9%) enabling in excess of 5 new policies (_e.g._ chaos.is enables 13 and poa.st 12). A closer look at these instances shows that they mostly add common policies. However, we also see a wide range of other less common policies (_e.g._ KeywordPolicy). In contrast, the use of the SimplePolicy, with the most flexible range of moderation actions, has remained relatively stable. Actions under the SimplePolicy have instance-wide effect and can effectively control instance federation. Overall, we only see 28.8% of instances enabling this policy, without much growth across the measurement period (as seen in Figure 2).
This could imply that administrators are unaware of this policy, do not have time to moderate their instances at this level, or may find this policy too blunt (not fine-grained enough). The latter could lead to other issues, which administrators seek to avoid (_e.g._ collateral damage [22]). It is also worth noting that the SimplePolicy is one of the most complex, and administrators potentially shy away from these more labour-intensive policies. We argue that the diversity of policies could potentially overwhelm (volunteer) instance administrators (see Section 5). This suggests that they require further support to automate this process (see Section 6). ## 5 Characterising Administrators ### Distribution of Administrators _Number of Administrators Per-Instance._ We observe a total of 2,111 unique administrators from 1,633 instances (93.8% of 1.74k).6 Figure 3 presents the distribution of the number of administrators per instance. Although a majority of instances (71.6%) are managed by a single administrator, we also see some instances with a larger number of administrators (_e.g._ rakket.app: 16 and poa.st: 13). Footnote 6: The remaining instances do not publish their administrator(s) information. _Administrator Workload._ We next test if the number of administrators increases proportionately to the number of posts. We treat this as a rudimentary proxy for how much moderation must take place on an instance. Figure 3: Instances (%) by number of administrators. Figure 2: Time series showing the percentage of instances (Y-axis) that use the 5 most popular Pleroma policies. We include the sum of all the remaining policies as “Others”. Figure 4 presents the distribution of the number of posts per instance _vs._ the number of administrators. Generally, we find that instances with more posts do have more administrators on average, _e.g._ instances with multiple administrators have more posts, with a ratio of 6:1. However, this is driven by a few instances (_e.g._ poa.st). Table 3 summarizes the top 10 instances that see the largest growth in administrators. Many of them are small instances with under 1000 users, and a proportionately small number of posts. This suggests that administrator growth does not necessarily occur on the instances that need it the most. To test if the number of administrators grows proportionately to the number of posts, Figure 5 plots the _growth_ of administrators _vs._ the growth of posts on each individual instance during our data collection period. We see that a growth in posts on a given instance does not necessarily correspond to the recruitment of new administrators. In fact, only 6.9% of instances record a growth in administrators during this period. Overall, there is a weak correlation (Spearman coefficient of 0.19 for the number of posts _vs._ number of administrators). In total, we see a 60.3% increase in the number of posts, but just a 35.6% growth in administrators. Unsurprisingly, instances that grow their administrator pool _do_ become more active. On average, instances with a growing number of administrators have 1.5x more policies than other instances. Specifically, looking at the policy with the most impact (reject), these instances apply it 1.8x more than others. Interestingly, instances with an increasing number of administrators also have 4x more policies applied against them.
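A minimal sketch of the growth-correlation check reported above (illustrative only; the per-instance growth numbers below are made-up placeholders, not our measurements):

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-instance growth over the measurement window.
growth = pd.DataFrame({
    "instance": ["a.example", "b.example", "c.example", "d.example"],
    "posts_growth": [120, 35, 90000, 4100],
    "admins_growth": [0, 0, 2, 1],
})
rho, p = spearmanr(growth["posts_growth"], growth["admins_growth"])
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```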
### Administrators' Response Lag The previous section has shown that administrators face a growing moderation workload. To study this workload, we now look at how long it takes administrators to apply policies against particular instances. We focus on the SimplePolicy as this is clearly geared towards moderation, has instance-wide targeting, and lists the target instances. For each SimplePolicy against a given instance, we compute the lag between the date of the implementation of the policy and the date when the targeted instance was first federated with. This is a rudimentary proxy for how long it took an administrator to identify the problem. We temper our analysis with the fact that there could be many reasons for this delay, which we have limited vantage on. _Policy Creation Delay._ Figure 6 presents the distribution of delays (as defined above). Note, we exclude the 55% of federations that occurred before the beginning of our data collection (as we cannot know their timestamp). We plot the delay distributions for applying policies against: (_i_) All instances; (_ii_) "Controversial" instances with the most policies applied against them (top 10); and (_iii_) "Benign" instances with the fewest policies against them (bottom 10). It takes administrators an average of 82.3 days to apply any form of policy against other instances. Although, on average, it takes more time for a policy to be applied on the "bottom 10" instances than the "top 10" instances (74.7 and 59.5 days respectively), we see that there is a noticeable lag (almost 3 months) between federation occurring and policies being imposed. Figure 4: Box plot of the number of posts per instance with different numbers of administrators. Figure 5: Per-instance growth in the number of administrators (Y2-axis) and posts (Y1-axis). Individual instances are on the X-axis, sorted by the number of posts. Figure 6: CDF showing the distribution of days from federation to moderation for all moderated instances. We also show results for the top 10 and bottom 10 instances, based on the number of policies applied against them. This may suggest that administrators find it difficult to keep up with the need to rapidly identify instances that justify policy imposition. _Delay for Controversial Instances._ We next extract the top 10 instances that receive the most policies targeted against them. For each one, Figure 7 plots the distribution of delays (_i.e._ how long it takes other instances to impose a policy against them). In-line with expectations, we see that administrators take less time to apply policies against instances like gab.com, known for its right-wing stance (average of 19 days). However, we see much longer delays for other controversial instances that are less well-known (_e.g._ neckbeard.xyz), averaging up to 98.4 days. These instances are quite active, with significant growth in posts during our measurement period (_e.g._ neckbeard.xyz: 789.4k and kiwifarms.cc: 469.2k). With other instances such as anime.website posting "lolicon" (suggestive art depicting prepubescent females), one would expect policies to be swift; however, we see a very wide breadth of delays. The diverse nature of these administrator reactions indicates that any future automated moderation tools should be specialized to the preferences of individual administrators.
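The federation-to-moderation lag described above can be computed from the snapshots roughly as follows (a sketch with hypothetical column names and dates, not the authors' exact code):

```python
import pandas as pd

# For each (source, target) pair: when the source first federated with the target,
# and when it first applied a SimplePolicy action against it.
pairs = pd.DataFrame({
    "source": ["a.example", "a.example", "c.example"],
    "target": ["gab.com", "b.example", "gab.com"],
    "first_federated": pd.to_datetime(["2021-01-03", "2021-02-10", "2021-01-20"]),
    "first_policy": pd.to_datetime(["2021-01-22", "2021-06-01", "2021-02-05"]),
})
pairs["lag_days"] = (pairs["first_policy"] - pairs["first_federated"]).dt.days
print(pairs["lag_days"].mean())                     # overall average delay
print(pairs.groupby("target")["lag_days"].mean())   # per-target average delay
```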
### Administrators & Moderators _Moderation Delegation._ As administrators are responsible for a wide range of activities, they can delegate the task of content moderation to select individuals. These accounts are referred to as **moderators**. Of our 1.74k instances, 47% (819) expose moderator information in our dataset. From these, only 12% (98) of instances have assigned the role of moderator to any other accounts. Of these, 73.5% (72) of the instances have the administrator also doubling as a moderator, while 29.6% (29) of the instances assign the entire moderator role to an account that is not the administrator. This implies that only 3.5% of instances have dedicated account(s) assigned the role of moderator. _Are moderators helpful?_ We conjecture that instances with dedicated moderators outside of their administrator team might be swifter in the application of policies. Figure 8 shows the percentage of instances that enable the 15 most popular policies (Figure 1). We present two bars for each policy: (_i_) Instances with additional moderators (who are not an administrator); and (_ii_) Instances without additional moderators. There is a broadly similar distribution across these two groups. However, we notice that instances without additional moderators have approximately 3x more of the NoOpPolicy configured. Recall, this is the default state of an instance and allows any content to be imported. This begins to suggest that instances with additional moderators do pay greater attention to policies. We expand this analysis in Figure 9, where we show the number of SimplePolicy actions and the delay to apply a policy after federation (in days) for instances in the two groups. We use the SimplePolicy for this analysis as it is the only moderation policy with instance-wide targeting and a list of targeted instance domains. The plot shows that instances with moderators take less time (average 103 days) to impose a SimplePolicy after federation, compared to instances without dedicated moderators (average 111 days). The figure also shows a marked difference in the number of instances that apply the SimplePolicy. Only 38% of the instances with dedicated moderators apply no SimplePolicy actions, compared to 70% for those without. This confirms that instances with additional moderators are more proactive. Figure 8: The percentage of instances that enable the top 15 most popular policies. We separate instances into two groups: (_i_) Instances without additional moderators; and (_ii_) Instances with additional moderators outside of the administrator set. Figure 7: Box plot showing the distribution of the number of days from federation to the imposition of policies for the top 10 instances with the most policies applied against them. ## 6 WatchGen: Automating Moderation Our results indicate that moderation is labor-intensive. We now explore techniques to assist administrators. We propose _WatchGen_,7 a tool that recommends to administrators a "watchlist" of instances that may require federated moderation. This watchlist must be on a per-instance basis, as different administrators may have varying views on what is considered appropriate for the instance they manage. WatchGen helps administrators more proactively identify instances requiring attention with regard to content moderation. We build WatchGen by compiling a large feature set for each instance, and experimenting with a number of classification models to flag instances that are more likely to require attention. Footnote 7: [https://github.com/anaobi/WatchGen.git](https://github.com/anaobi/WatchGen.git) _Feature Selection._ We first extract features for each instance.
These features include information about user activity (_e.g._ number of users) and administrator activities with respect to moderation (_e.g._ number of rejected instances). We also extract features from post content (_e.g._ number of hate words in posts). We experiment with a total of 38 features (see Table 5). Through extensive manual experimentation, we distil this down to the 16 most determinant features (highlighted in Table 5). _Model Training._ Next, we train multiple machine learning models using the sklearn library, and GridSearchCV within 5-fold cross-validation to find the optimal hyper-parameter settings. We detail below the hyperparameters for each model. _Logistic Regression (LR)._ We only tune the C hyperparameter. This regularization parameter controls how closely the model fits to the training data. We test for the best value of "C" using the values {0.001, 0.01, 0.1, 1, 10, 100, 1000}. _Multilayer Perceptron (MLP)._ We tune three hyperparameters: (_i_) hidden layer-size: dictates the number of hidden layers and nodes in each layer. We use a single hidden layer with varying hidden layer-sizes {10, 50, 100}; (_ii_) activation function: determines the type of non-linearity introduced into the model. We employ 3 activation functions {relu, tanh, logistic}; and (_iii_) learning rate: we tune how the initial learning rate parameter changes in finding the optimal model using {constant, invscaling, adaptive}. _Random Forest (RF)._ We tune 2 hyperparameters. (_i_) n_estimators: the number of independent trees (estimators). We test using 3 values {5, 50, 250}; and (_ii_) max_depth: the depth of the trees. We test for the best result using 6 different depths {2, 4, 8, 16, 32, None}. _Gradient Boosted Trees (GB)._ We tune three hyperparameters. (_i_) n_estimators: The number of independent trees (estimators). We test with 4 values {5, 50, 250, 500}; (_ii_) max_depth: The depth of the trees. We test with 5 values {1, 3, 5, 7, 9}. (_iii_) Learning rate: This impacts the speed and granularity of the model training. We test 5 values {0.01, 0.1, 1, 10, 100}. ### Generating a Global Watchlist _Task._ We first assume a WatchGen central broker that compiles a global pool of training data, collected from all instances through their public APIs (similar to our approach in Section 3). We use this global pool of training data, with an 80:20 split, to predict if a given instance will be subject to _any_ policy (by any other instance). We then produce a 'watchlist' of instances that may be worthy of attention. To investigate how long it would take to garner sufficient data to train WatchGen, we also train several models on datasets covering increasing time windows. We first train on one month of data and increase the training dataset by one month at a time (up to 9 months). For our test dataset, we use the data remaining after the training snapshot. _Results._ Table 1 summarizes the results with the global pool of training data (80:20 split), with Random Forest being the best performing model (f1=0.77). Recall that we also run experiments with a training set based on varying time windows. Figure 10 presents the f1 scores based on the size (duration) of the training set. We observe that it takes at least 5 months for a model to achieve its best score (_e.g._ Gradient Boosted Trees in month 5 and Random Forest in month 7). Note that the training sets are different from Table 1 and hence the scores differ.
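As an illustration of this training setup, here is a minimal sketch for the best-performing model (Random Forest) using the hyperparameter grid given above; the feature matrix and labels are synthetic stand-ins, since the real per-instance features are those described in Table 5:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.random((500, 16))            # stand-in for the 16 selected per-instance features
y = rng.integers(0, 2, size=500)     # stand-in label: will any policy be applied against the instance?

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
grid = {"n_estimators": [5, 50, 250], "max_depth": [2, 4, 8, 16, 32, None]}
search = GridSearchCV(RandomForestClassifier(random_state=42), grid, cv=5, scoring="f1")
search.fit(X_train, y_train)
print(search.best_params_)
print(classification_report(y_test, search.best_estimator_.predict(X_test)))
```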
_Feature Importance._ We next inspect which features are most important. This sheds insight into which characteristics are most related to triggering policies. Figure 9: CDF of the number of SimplePolicy actions per instance (X1-axis) and the lag (in days) for instances to impose a policy after federation (X2-axis). We separate instances into (_i_) those with dedicated moderators; and (_ii_) those without dedicated moderators. We use the in-built functions for feature importance. Figure 11 presents the feature importance for the explainable models. We see that the top 3 features (transformed posts, average number of mentions in a post, and number of posts on an instance) are all related to the number of posts on an instance. This suggests that the likelihood of an instance having a policy applied against it is closely related to the amount of content its users post. In other words, the more users and posts on an instance, the higher the probability of having a policy applied against it. This is expected as such instances are likely to attract more attention. Features such as the number of mentions and hate words in the posts also play an important role. This is in-line with prior work that observed how mentions and quote retweets result in more attention [17]. To better understand the importance of these secondary metrics, we retrain the model without the two top features (number of posts and transformed posts). We show the results in Table 4. Confirming our prior assertion, we retain relatively good performance. For Random Forest, we attain an f1 of 0.62 (_vs._ 0.77 with the full feature set in Table 5). This confirms that these other factors play an important role in determining if an instance has a policy applied against it. In other words, in addition to the size of an instance, other features are required to obtain a fairly good prediction of instances being subject to any policy. ### Generating a Local Watchlist _Task._ Our prior WatchGen models assume a central pool of training data, aggregated from all instances. This may be infeasible in practice due to the decentralized nature of the Fediverse. Hence, we next investigate how well our best model (Random Forest) performs when decentralizing the training process. For each instance, we extract its federated peers and exclusively build a local training set from their data (using the features highlighted in Table 5). For each pair of instances, we tag whether or not a directed policy is imposed, _i.e._ each instance only considers the policies it locally sees. Finally, each instance trains its own local model using the first 8 months of data (and tests on the last 2). This creates one independent model per instance. Based on this, WatchGen predicts whether a policy will be applied against the instance. _Results._ Figure 12 presents the distribution of performance metrics per-instance. As expected, we observe an overall performance drop compared to the prior task based on a global model. Instances attain an average f1 score of 0.55. This is largely due to the significant reduction in per-instance training data. That said, we observe a wide array of performances across the instances: 42.6% of instances achieve above 0.6 f1, with a tail of 8.3% attaining below 0.4 f1. We find that performance is impacted by the training set size. Instances that perform relatively well (\(>\)=0.6 f1) tend to be larger (_i.e._ more posts and users). For example, 65.4% of the best performing instances (\(>\)=0.6 f1) have a local post count of over 50k (_e.g._ neckbeard.xyz and freespeechextremist.com).
In contrast, only 4.4% of instances that perform poorly (\(<\)0.6 f1) have over 50k posts (_e.g._ princess.cat and sleepy.cafe). This implies that as instances grow, their local performance will improve. The above experiments show that instances _can_ use these locally trained models to generate a personalized watchlist of instances they peer with. Thus, we argue that these automatically compiled lists can help administrators pay attention to these instances. ## 7 Related Work _Social Network Studies._ Extensive work has been carried out in the area of online social networks. However, most of these are on centralized social networks (_e.g._ Facebook and Twitter) [29, 31, 19, 28, 3, 36]. A number of these look at the anatomy of social graphs [23] and moderation challenges [20]. Others look into areas ranging from the evolution of user activities to demographics [32, 45]. In contrast to Pleroma, these social networking platforms tend to rely on central (commercial) administrators and moderators [48]. \begin{table} \begin{tabular}{l c c c c} \hline \hline Algorithm & Acc. & Prec. & Recall & f1 score \\ \hline Logistic Regression & 0.86 & 0.85 & 0.34 & 0.49 \\ Multi-Layer Perceptron & 0.57 & 0.34 & 0.42 & 0.53 \\ Random Forest & 0.92 & 0.88 & 0.68 & 0.77 \\ Gradient Boosted Trees & 0.89 & 0.71 & 0.71 & 0.71 \\ \hline \hline \end{tabular} \end{table} Table 1: WatchGen performance results when using the global training pool and the full feature set. Figure 10: Time series of f1-scores for the Logistic Regression, Multi-Layer Perceptron, Random Forest and Gradient Boosted Trees models. Note that we exempt month 10 as this leaves insufficient test data. _Fediverse and Decentralized Web._ Only a small set of studies have focused on the Fediverse or Decentralized Web applications. Raman _et al._ looked at the challenges in the Fediverse, with a particular focus on the infrastructure and resilience of Mastodon [39]. Trautwein _et al._ studied the InterPlanetary File System (IPFS), a decentralized storage solution [44]. Guidi _et al._ and Datta _et al._ studied the structure, data management, and privacy aspects of decentralized social networks [7, 1]. Recent works have examined the standardization of related protocols [27, 35]. Bielenberg _et al._ analyzed the growth, topology and server reliability of Diaspora (a decentralized social network) [4]. Similarly, Zignani _et al._ studied the evolution of the Mastodon social graph [55]. Our work differs in that we focus on exploring _administrator_ actions within the Fediverse. _Online Moderation._ Prior work has investigated the roles that volunteer moderators play in platforms like Twitch [51]. Text-based content classification and filtering have been extensively studied too. These include computational techniques to detect cyberbullying [12, 49, 10], anti-social posting [43, 25, 42, 52], and hate speech [9, 46, 40, 50, 21, 47, 24]. These models have proven effective in reducing the workload of human moderators. For example, Cheng _et al._ [25] use random forest and logistic regression classifiers to predict whether a user will be banned, reducing the manual load on moderators. Similarly, Zia _et al._ [53] look at detecting the spread of toxic posts specifically in Pleroma (although not administrator reactions). In our prior work, we also studied the use of federation policies [22]. Here, we build on this, with a focus on the actions undertaken by administrators. We further propose WatchGen to assist administrators.
To the best of our knowledge, this is the first large-scale study of administrator activities in the Fediverse. We hope that this can further contribute to the wider understanding of moderation in other platforms. ## 8 Conclusion and Discussion We have studied instance administrators in a popular Fediverse platform, Pleroma. Although 66.9% of instances are still running on default policies, we observe an uptake of more sophisticated management functions. We find evidence that some administrators may become overwhelmed with the growing number of posts and users they must manage. For instance, it takes an average of 82.3 days for administrators to apply any policy against a newly federated instance. Another sign of the overhead is that just 3.5% of instances share the load across multiple moderators. This lack of moderators may come with challenges: instances with fewer moderators tend to employ less sophisticated policy strategies (_e.g._ 70% of them apply no SimplePolicy actions). To alleviate this, we have proposed WatchGen, a tool that identifies instances in need of closer attention. We show that WatchGen can predict which instances will later have a policy imposed (f1 = 0.77). Our study opens up a number of lines of future work. First, we wish to expand our work to cover other Fediverse platforms, _e.g._ Mastodon or PeerTube. Second, we plan to experiment with alternate feature sets that can better identify instances that will later require policy attention. Through this we hope to improve WatchGen and pilot its deployment. Last, we want to perform a qualitative study to better understand the subjective opinions of administrators that underlie these trends. We conjecture that such qualitative insights might be invaluable for improving WatchGen. Figure 11: Feature importance for our explainable models. Figure 12: CDF of per-instance performance for Random Forest trained on data from local and federated instances. ## Acknowledgements This research was supported by EPSRC grants EP/S033564/1, EP/W032473/1, UKRI DSNmod (REPHRAIN EP/V011189/1), and EU Horizon Framework grant agreement 101093006 (TaRDIS).
2304.08167
Classification of news spreading barriers
News media is one of the most effective mechanisms for spreading information internationally, and many events from different areas are internationally relevant. However, news coverage for some news events is limited to a specific geographical region because of information spreading barriers, which can be political, geographical, economic, cultural, or linguistic. In this paper, we propose an approach to barrier classification where we infer the semantics of news articles through Wikipedia concepts. To that end, we collected news articles and annotated them for different kinds of barriers using the metadata of news publishers. Then, we utilize the Wikipedia concepts along with the body text of news articles as features to infer the news-spreading barriers. We compare our approach to the classical text classification methods, deep learning, and transformer-based methods. The results show that the proposed approach using Wikipedia concepts based semantic knowledge offers better performance than the usual for classifying the news-spreading barriers.
Abdul Sittar, Dunja Mladenic, Marko Grobelnik
2023-04-10T20:13:54Z
http://arxiv.org/abs/2304.08167v1
# Classification of news spreading barriers ###### Abstract News media is one of the most effective mechanisms for spreading information internationally, and many events from different areas are internationally relevant. However, news coverage for some news events is limited to a specific geographical region because of information spreading barriers, which can be political, geographical, economic, cultural, or linguistic. In this paper, we propose an approach to barrier classification where we infer the semantics of news articles through Wikipedia concepts. To that end, we collected news articles and annotated them for different kinds of barriers using the metadata of news publishers. Then, we utilize the Wikipedia concepts along with the body text of news articles as features to infer the news-spreading barriers. We compare our approach to the classical text classification methods, deep learning, and transformer-based methods. The results show that the proposed approach using Wikipedia concepts based semantic knowledge offers better performance than the usual for classifying the news-spreading barriers. News spreading barriers, News barrier classification, Text classification, Economic barrier, Political barrier, Cultural barrier, Linguistic barrier, Geographical barrier ## 1 Introduction Media coverage of local and global events defines and limits the discourse associated with different events. The priority is given to different contents based on cultural, political, social, linguistic, geographical, and economic biases [4; 45]. Similarly, the news relating to local events involves domestic factors, whereas the news about global events involves national and international factors that affect their news flow. These factors again include economic, political, cultural, linguistic, and geographical influences as [64] concluded that depending on the nature of an event, there are variations in information-spreading behavior across the different barriers including economic, cultural, geographical, political, and linguistic. Classification of these barriers can be helpful in the context of numerous real-world applications, such as event-centric news analysis, suspicious news detection, and content recommendations to readers and subscribers. Thus, it is highly important to classify the barriers to massive news spreading related to different events. It is important to understand the influence of the above-mentioned barriers to news spreading. Economic stability is one of the factors that influence media coverage [25]. Moreover, the influence of economic power varies across different events and issues (e.g. protests, online privacy, disasters) [55; 59]. The magnitude of economic interactivity between countries can also impact the news flow [73]. The national context in which the journalists work is frequently followed by news organizations. The SARS pandemic study, which discovered that cross-national contextual factors including political and economic situations affect news selection, is one of the related cases [14]. Political ideology is another factor that influences media coverage and news spreading. Also one of the factors involved in producing fake news or rumors is the political effect [15; 34]. [26] presented a model to capture the spreading process of rumors on social networks. 
A great amount of work regarding fake news dwells on different strategies and due to the engagement of journalists and political players, it has been convincingly demonstrated that controlling the news and making appropriate changes is a major method employed by news agencies [7; 44]. One of the determinants for influencing news spreading and coverage is the country's geographic and population size [23; 72]. According to certain theories, countries with close distances have some degree of cultural and linguistic affinities and because of that the flow of news spreading is much higher than in countries with long distances [20; 55; 56; 72; 73]. Generally, different types of semantic features have been used to perform news classification depending on the task [40; 50]. For instance, vectorized semantic and syntactical features for the spread of fake news over social, political, and economic context [37], and semantic features like sentiment, entities or facts for fake news classification [12]. Similarly, Stylistic and bag-of-word have been tested for the news classification at the publisher or regional level [65]. In this paper, we explore the classification of barriers to massive news spreading related to different events. We are interested in exploring the variations in news spreading across different topics and different barriers. We focus on five different types of barriers including cultural, political, linguistic, economic, and geographic. Since the considered barriers deal at the international level, we assume that the Wikipedia concepts of news articles including entities (locations, people, organizations) or non-entities (things such as personal computers, and toys) will help in the classification of barriers. ### Motivation The motivations behind our work are stemmed from the following facts: * The news agencies/news publishers always want to have more viewership of their content to earn more money. A news article has mainly consisted of two things. Selection of words/terms to report about any event and selection of events to be reported in a news article. Then the result is subsequent news reporting on the same event by other publishers. During this news reporting, many barriers may stop it from spreading further. These barriers could be of these: political, geographical, economic, cultural, and linguistic. In this context, the barrier classification in news spreading is getting attention as an important research problem. * The barrier classification intends to assist newspapers in general, but can also be useful for the public. Researchers who want to know the reasons for cultural differences in different communities may learn by comparing the written news articles. Thus, developing an efficient and automatic barrier classification system for newspapers comes out as an essential task. To the best of our knowledge, there is a lack of studies that address this challenging task. * By modeling the barriers (cultural, political, economic, geographic, and linguistic), news publishers can develop a better strategy to select an event and report about it, make models that take the news articles as input, and as a consequence control or modify reporting content, and in general, train systems to be better at detecting above mentioned barriers. ### Contributions The original scientific contributions of this paper are: * A novel approach to barrier classification based on news meta-data. * An annotation process, and class definitions. 
* A novel approach to inferring the news spreading barriers using Wikipedia concept-based semantic knowledge. ### Hypothesis and research questions Barrier classification faces the challenge of efficiently analyzing huge amounts of news text. Our research hypothesis states that Wikipedia concept-based semantic annotation of news articles will help in classifying the news-spreading barriers. We explore ten different types of news in this context including home, health, business, sports, recreation, shopping, computers, science, society, and games. In order to aid understanding of the influence of different barriers on different types of news, this article sets three research questions: **Q1:** Does information spreading in news vary across different topics and different barriers? **Q2:** What prominent relations appear between Wikipedia concepts and different barriers and categories? **Q3:** Which classification methods (classical or deep learning methods) yield the best performance on the barrier classification task? The remainder of the paper is structured as follows. Section 2 describes the related work, covering an overview of news spreading problems, the economic aspects of news spreading, and breaching the barriers to extend viewership. The approach used for barrier classification is explained in Section 3. The data collection and the annotation guidelines are presented in Section 4. We present the experimental results in Section 5. Section 6 concludes the paper and outlines the areas for future work. ## 2 Related Work In this literature review, we present different economic aspects connected with online news spreading, the cultural influence in news spreading, and the role of content and the framing of news events by the news media. **Economic aspects connected with the online news spreading** Effective dissemination is the key to bridging the gap in information spreading. For the scientists and the practitioners, it is necessary to participate in explicit, accurate, and unbiased dissemination of their respective areas of expertise to the public [31]. In the early stages of online experiments on news spreading, there was fear that online content may erode the print edition. Therefore, the idea of charging users a subscription fee for online news access came first, and the advertising model followed [16]. Figure 1: The circular bar charts show the statistics about the news articles that have the labels ”Information-crossing”, ”information-not-crossing”, and ”unsure” respectively (from left to right) for all the ten different categories. The circles show the count of the news articles, each bar represents a country, whereas the colors in each bar represent ten different categories (business, computers, games, health, etc.). The purpose of this figure is to show the variation in the number of news articles that are either crossing, unsure, or not crossing a barrier for different countries (see Section 4). Newspapers have always been very valuable advertising channels for promotional campaigns, e.g. couponing, retailer ads,
Although the number of online newspapers is increasing [66], whether this will become a financially successful business or not is still not clear [49]. Uncertainty exists over how online newspapers define important things: a market that spans the local and global levels, placement in the market, connection between online and print products, and establishment of key strategies. Because a market consists of both consumers and suppliers and because online practitioners are constantly experimenting with the new mediums, market research frequently focuses on user demographics. However, online publishers' perspectives are equally, if not more, important in understanding online newspaper economies [66]. Online newspapers have experimented with various revenue models such as subscriptions, advertising, payper-use, sponsships, web site development, serving as ISPs (Internet service providers), and e-commerce [6; 17; 22]. These models define the geographical market for their online products. These models ask the following questions from participants - Do they define themselves as local, metro, regional, national, or global publications? Their response indicated a geographic market definition [8]. Apart from the economic aspects of news spreading, media activities are a means to secure social, cultural, or political status [8]. **Cultural influence in the news spreading** The result of communication is not only situation-specific but also inherently culturally bound because it is entrenched in human acts with intentions, interests, and wants as well as larger institutional, social, and cultural systems [28]. A culture-specific ideology is defined as the values, beliefs, attitudes, or interests expressed in a source text that is associated with a particular culture or source and that may be viewed as undesirable or incompatible with the dominant values, beliefs, attitudes, or interests of another culture or subculture. It defines the strategies adopted by text producers in bridging the divides in global news transmission. According to MCNelly's theory, the more distance an intermediary communicator has to travel before learning about a news occurrence, the less personally invested he is in it and the more he considers its "marketability" to editors or readers [68]. It has been said that countries with close distances share culture and the news reporting on the same events will not differ due to ideology, culture, and geopolitics [42; 55]. Countries that share a common culture are expected to have heavier news flow between them when reporting on similar events [73]. There are many quantitative studies that found demographic, psychological, socio-cultural, source, system, and content-related aspects [2]. **Framing of news events by news media and role of content** The role of content is an essential research topic in news spreading. Media economics scholars especially showed their interest in a variety of content forms since content analysis plays a vital role in individual consumer decisions and political and economic interactions [21]. In content, a frame is a means to highlight certain elements of a seen reality in a communication text so as to support a specific problem definition, causal interpretation, moral assessment, and/or therapy proposal for the thing being described. There are four places where frames can be found during communication: the text, the recipient, the communicator, and the culture [52]. 
The inverted pyramid reporting method, where the most significant facts are presented in order of importance, is a key component of news framing. Bias in the news can manifest in a variety of ways; these include "source bias", "unbalanced presentation of contested themes", and "frequent usage of packaged formula" [69]. Scheufele identifies five factors that influence how journalists frame news. These include societal expectations and ideals, organizational demands and restrictions, pressure from interest groups, journalistic practices, and journalists' ideological or political leanings [47]. A vast body of literature exists on how the news media frame the news events and consequently influence public perception of those events [38]. Existing literature posits that framing is often used intentionally for the purpose of changing the perception of content, and different computational methods have been applied to capture this [33; 61]. **News classification methods** Different text classification methods have been used to classify news articles for different tasks [13; 19; 54]. [36] presents a hybrid architecture connecting BERT with RNN and uses it to create models for detecting fake news. A fake news detection model using n-gram analysis and classical machine learning techniques is proposed where SVM appears as the best classifier [1]. It makes a comparison between two different feature extraction techniques and six different classical machine learning techniques. PAN is a series of scientific events and shared tasks which include classification based on textual data collected from social media [3; 10; 51]. [53] proposed novel approaches based on machine learning and deep learning for fake news detection systems to address this phenomenon. It compares the performance of an optimized convolutional neural network model with RNN, LSTM, and six regular ML techniques: Decision Tree, Logistic Regression, K Nearest Neighbor, Random Forest, SVM, and Naive Bayes using four fake news benchmark datasets. [5] applied these methods and feature engineering techniques such as count vectorizer, TF-IDF, and word2vec. It shows that multinomial Naive Bayes with a count vectorizer performs best on Hindi news headlines related to different categories (entertainment, sports, tech, lifestyle). **Semantic knowledge for text classification** Semantic knowledge is used to improve the performance of text mining algorithms by adding more semantic text [11; 32; 70]. Different tasks utilize different types of semantic resources such as knowledge graphs, WordNet, the Open Directory Project, or Wikipedia [43; 60; 62]. Wikipedia has been used in many studies as an external knowledge resource [27; 46; 48]; we utilize Wikipedia concepts as a knowledge source for barrier classification. ## 3 Approach The presented research focuses on barrier classification in news articles. To this end, we propose a novel approach to barrier classification based on news meta-data, as shown in Figure 3. In the first step, we execute a query that extracts the news articles from the Event Registry belonging to different categories (business, computers, games, health, home, recreation, science, shopping, society, and sports) and published within a certain time span - in our case between 2016 and 2021 (see Section 4). Then we parse and save these news articles along with the source information such as the publishers' names and publishing dates.
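For illustration, a rough sketch of such a query using the eventregistry Python client referenced in footnote 1 of Section 4 (the exact parameters and returned fields may differ between client versions, and the API key is a placeholder):

```python
from eventregistry import EventRegistry, QueryArticlesIter

er = EventRegistry(apiKey="YOUR_API_KEY")  # placeholder key

# English-language articles in one DMOZ category over the studied time span.
query = QueryArticlesIter(
    categoryUri=er.getCategoryUri("business"),
    dateStart="2016-01-01",
    dateEnd="2021-12-31",
    lang="eng",
)
articles = []
for article in query.execQuery(er, maxItems=1000):
    # Keys shown are illustrative; concept annotations can be requested via the
    # client's ReturnInfo options.
    articles.append({"title": article.get("title"),
                     "date": article.get("date"),
                     "source": article.get("source")})
```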
In the second step, we extract the meta-data related to the news publishers via searching the news publishers' on Google and extracting their Wikipedia links. Using this link, we obtain the necessary information from Wikipedia-infobox (see Subsection 4.2). In the third step, we perform the annotation of news articles. To annotate the news articles, we set the annotation guidelines 4.2. For cultural and economic barriers, we assign the ternary labels to news articles whereas, for the linguistic, geographical, and political barriers, we assign the binary labels to the news articles. Table 1 presents the examples of annotation for all the barriers. Afterward, we conduct experiments comparing machine learning state-of-the-art classification methods, deep learning, and transformer-based methods (see Figure 10). The results are presented in Section 5.4, 5.5 showing the performance of different features and different methods. Figure 2: Three Wikipedia-infobox for the three different newspapers/magazines with their political alignment ## 4 Dataset description We collected the news articles reporting on different events published between 2016-2021 in the English language using Event Registry [39] APIs 1. The dataset consists of 35 million news articles that take storage up to 150 GB. Each news article belongs to a different category (see Figure 1). Each news article consists of a few attributes: title, body text, name of the news publisher, date and time of publishing, event-ID, DMOZ-categories, and Wikipedia concepts. Footnote 1: [https://github.com/EventRegistry/event-registry-python/blob/master/eventregistry/examples/QueryArticlesExamples](https://github.com/EventRegistry/event-registry-python/blob/master/eventregistry/examples/QueryArticlesExamples). PY A few attributes are self-explanatory such as title, body text, name of the news publisher, and date and time of publishing. An event-id represents a unique number that is associated with all the news articles that belong to a same event. The DMOZ-categories represent the topics of the content/news article. It is a project that has hierarchical collection of web page links organized by subject matters 2. Around 50,000 categories are used by the Event Registry (top 3 layers of the DMOz taxonomy) 3. The statistics of all the categories for all the five barriers are presented in the pie charts (see Figure 5). Wikipedia concepts are used as a semantic annotation for the news articles and can represent entities (locations, people, organizations) or non-entities (things such as personal computers, and toys). In Event Registry, Wikipedia's URLs are used as concept URIs. Footnote 2: [https://dmoz-odp.org/](https://dmoz-odp.org/) Footnote 3: [https://eventregistry.org/documentation?tab=terminology](https://eventregistry.org/documentation?tab=terminology) Figure 4: Metadata for the five barriers (cultural, economic, geographical, linguistic, and political) Figure 3: An approach to barrier classification based on news meta-data. Data extraction from the Event Registry is the first step. Meta-data extraction through Google and Wikipedia scrapping is the second step. The third step is to annotate the news articles after calculating the euclidean distances. ### Similarity between news articles Event Registry is a platform that collects multilingual similar news articles from tens of thousands of news sources and identifies events[39]. 
It collects data using the News Feed service [67] which collects news articles from around 75.000 news sources in various languages (English, German, Spanish, and Chinese). To construct an event, it groups similar news articles. It calculates many features, and cross-lingual similarity of articles is one of them. It does not use any machine translators, but rather tries to frame the problem of finding similarities among cross-lingual news articles such as that they could use well-established machine learning tools designed for mono-lingual text-mining tasks. It looks at the distribution of articles across languages where English was the largest language and use as one of the hub languages which not only has an order of magnitude with more articles than other languages, but also many comparable articles with most of the other languages. ### Metadata for each barrier To fetch the metadata for each barrier, the essential thing is the news publisher's headquarters name. For each news publisher we get this information from Wikipedia-infobox (see Figure 2). We used Bright Data service 4 to crawl and parse Wikipedia-Infobox for almost more than 10,000 news websites. We retrieved the country name of the news publisher's headquarters name. For the economical barrier, we fetched the economical profile for each country using "The Legatum Prosperity Index" 5 as done by [64]. It has twelve dimensions that represent different economical aspects (see Figure 4). For the cultural barrier, we calculated differences among different regions using six Hofstede's national culture dimensions (HNCD) (see Figure 4). For the economic and cultural barrier, we calculated the euclidean distance among all the countries (for the economic barrier using the economical profile, and for the cultural barrier using the HNCD). Two countries have Figure 5: The pie charts show the statistics about the news articles for the five news spreading barriers (from left to right: cultural, economic, political, linguistic, and geographic) that belong to ten different categories (business, computers, games, health, home, recreation, science, shopping, society, and sports). We can see that a more percentage of news articles belong to science, society, and business categories. been labelled as: "information-not-crossing" if the distance score was \(\leq\) 0.1, "unsure" if the distance score was \(>\) 0.1 and \(\leq\) 0.4, "information-crossing" if the distance score was \(>\) 0.4 (see examples in the Table 1). For the geographical barrier, we stored general latitude and longitude. For the political barrier, we utilize the political ideology/alignment of the newspaper/magazine that we determined based on Wikipedia-infobox at their Wikipedia page [63](see Figure 2). The statistics about the annotated dataset are presented in Figure 7, and 8. The data is proprietary to Event Registry 6. People can ask if they need that kind of data. Footnote 6: [https://eventregistry.org/](https://eventregistry.org/) **Annotation Questions**: Based on the definitions above, we set the following annotation questions in order to identify barriers to news spreading. 
* Q1: _Are all the news articles reporting on an event published from a particular/same geographical location?_
* Q2: _Are all the news articles reporting on an event published from locations having equal economic prosperity?_
* Q3: _Are all the news articles reporting on an event published from locations having equal cultures?_
* Q4: _Are all the news articles reporting on an event published from sources with a particular/similar political class?_
* Q5: _Are all the news articles reporting on an event published by newspapers where the publishing language is the same?_

Question 1 (Q1) intends to identify whether the news was published across different geographical places or not. The question is answered "Yes" for all the news articles reporting on an event if they are published from one country, otherwise "No". Question 2 (Q2) intends to identify whether the news was published across different economies or not. The economic similarity has been calculated using the euclidean distance. The question is answered with "information-not-crossing" for all the news articles reporting on an event if they are published from countries with similar economic situations. The question is answered with "unsure" for all the news articles reporting on an event if at least one of the news articles is published from a country pair that is labeled with "unsure" (see Subsection 4.2), otherwise "information-crossing". Question 3 (Q3) intends to identify whether the news was published across different cultures or not. The question is answered with "information-not-crossing" for all the news articles reporting on an event if they are published from countries with similar cultural values. The question is answered with "unsure" for all the news articles reporting on an event if at least one of the news articles is published from a country pair that is labeled with "unsure" (see Subsection 4.2), otherwise "information-crossing". Question 4 (Q4) intends to identify whether the news was published in newspapers with the same political alignment or not. The question is answered "Yes" for all the news articles reporting on an event if they are published by newspapers sharing a similar political alignment, otherwise "No". Question 5 (Q5) intends to identify whether the news was published in the same language or not. The question is answered "Yes" for all the news articles reporting on an event if they are published by different newspapers where the publishing language was the same, otherwise "No".

#### Barrier Categories

Labels for the five types of barrier annotations are derived:

* Economic barrier classes: _information-not-crossing_, _unsure_, and _information-crossing_.
* Cultural barrier classes: _information-not-crossing_, _unsure_, and _information-crossing_.
* Geographical barrier classes: _Not-crossed-GB_, and _Crossed-GB_.
* Political barrier classes: _Not-crossed-PB_, and _Crossed-PB_.
* Linguistic barrier classes: _Not-crossed-LB_, and _Crossed-LB_.
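The economic and cultural labels referenced in Q2 and Q3 follow the distance thresholds defined in Subsection 4.2. A minimal sketch of that labelling step is given below; the country profiles are made-up placeholders, only the 0.1 and 0.4 thresholds come from the text, and the profiles are assumed to be normalised to a comparable scale.

```
import numpy as np

def barrier_label(profile_a, profile_b):
    # euclidean distance between two country profiles (Hofstede dimensions for
    # the cultural barrier, Legatum prosperity dimensions for the economic one)
    distance = np.linalg.norm(np.asarray(profile_a) - np.asarray(profile_b))
    if distance <= 0.1:
        return "information-not-crossing"
    if distance <= 0.4:
        return "unsure"
    return "information-crossing"

# hypothetical normalised profiles for two countries
country_a = np.array([0.35, 0.67, 0.66, 0.65, 0.83, 0.40])
country_b = np.array([0.34, 0.68, 0.70, 0.58, 0.74, 0.66])
print(barrier_label(country_a, country_b))   # -> "unsure" for these made-up values
```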
#### Analysis of information spreading and Wikipedia concepts

**Q1: Does the information spreading in news vary across different topics and different barriers?**

Table 1: Examples of annotation for the five barriers (barrier type, publishing date and time, event, publisher country, and the assigned label).

The line graphs (see Figure 9) compare the number of publishers, the average number of articles per publisher, and the average number of events per publisher for all the ten categories and the five barriers. Overall, it can be seen that the average articles and events per publisher are far higher in the political barrier for all ten categories, whereas the number of articles is far higher in the geographical barrier for all ten categories. With regard to the number of publishers for all the barriers in all ten categories, there is a huge difference in the business category: the number of publishers is almost double for the linguistic barrier compared with the cultural, economic, and political barriers, and similarly the geographic barrier has roughly double the publishers of the linguistic barrier. Then there is fluctuation across the categories after a sharp decline at the games category. Overall, the noticeable fact from this diagram is that the linguistic barrier includes the highest number of news publishers. The other barriers have small variations across all the categories. The average number of news articles per publisher is almost equal for the economic and cultural barriers and for the geographic and linguistic barriers, whereas for the political barrier it is always high for all ten categories. We can see that the science category includes almost 280 news articles per publisher, whereas in the health, home, and recreation categories the count is almost 60 news articles per publisher, and in business, shopping, and sports the count is almost equal to 40 news articles per publisher. With regard to the number of events per publisher, the pattern is the same as for the average number of news articles per publisher for the political, linguistic, economic, and cultural barriers. However, for the geographic barrier, this count reduces to almost half for seven categories (business, computers, health, home, recreation, science, and society).

Figure 7: This bar chart shows the class distribution for the political, linguistic, and geographic barriers (from left to right). The bar with blue color shows the distribution for the class "Information-crossing" a barrier whereas the bar with red color shows the distribution for the class "Information-not-crossing" a barrier. Each of the three bar charts presents the class distribution for all the ten categories.

The popularity of events can be shown by the number of news articles published by different news publishers, and the scope of a category can be depicted with coverage [57]. We can see that the ten different categories have different scopes across different barriers.
However, we notice that the science and society categories have the highest number of news publishers and the highest average number of news articles and news events per publisher for all the barriers whereas the games category appears with a scarcity of popularity. **Q2: What prominent relations appear between meta-data such as political alignment, geographical place, economic conditions, cultural values, and publishing language?** Since the purpose of using semantic knowledge was to improve text classification, we analyzed the associated Wikipedia concepts to all the barriers. Also, we compared the occurrence of the list of Wikipedia concepts between the categories. We present an example to illustrate the comparison. To perform a comparison between all the barriers, we select the society category whereas to perform a comparison between the categories, we select the computers and society categories. The results of the intersection between the categories have been shown in Figure 6. ## 5 Experimental Results In this section, we present an analysis of information spreading and Wikipedia concepts, classification baselines, evaluation metric, and experimental results comparing simple (LR, SVM, DT, RF, kNN), deep learning (LSTM), and transformers (BERT) for the barrier classification task (see Figure 10). ### Evaluation Methodology We used Scikit-learn implementation of classical and deep learning models considering the following parameters, which are usually the default: hidden layers = 3, hidden units = 64, no. of epochs = 10, batch size = 64, and dropout = 0.001. For the training process of political, geographical, and linguistic barriers, we used Adam as the optimizer, categorical cross-entropy as the loss function, and sigmoid as the activation function. For economic and cultural barriers, we used Adam as the optimizer, binary cross-entropy as the loss function, and SoftMax as the activation function. ### Baselines For the comparison with the proposed Wikipedia concepts based semantic knowledge, we evaluated the barrier classification task using the body text of the news articles only. We adopted the term frequency (TF) and inverted document frequency (IDF) methods to represent the bag of words of each news article. For the barrier classification task, the experiments were conducted by utilizing three different types of machine learning algorithms: 1) traditional machine learning algorithms including Logistic Regression (LR), Naive Bayes (NB), Support Vector Classifier (SVC), k-nearest Neighbor (kNN), and Decision Tree (DT): The performance of LR for the text classification problems is same as of the SVM algorithm [58; 58]. SVMs use kernel functions to find separating hyper-planes in high-dimensional spaces [18]. SVM is difficult to interpret and there have to be many parameters that need to be set for performing the classification and one parameter that performs well in one task might perform poorly in other[58; 58]. Therefore many information retrieval systems use decision trees and naive bayes. However, these models lack accuracy [30; 35]. 2) LSTM (Long-Sort-term Memory): With the emergence of deep learning algorithms, the accuracy of text categorization has been greatly improved. Convolutional neural networks (CNN) and long short-term memory networks (LSTM) are widely used [30; 41; 41; 75; 41]. 
3) State-of-the-art pre-trained language model BERT (Bidirectional Encoder Representations from Transformers): It is trained as a large network on a large amount of unlabeled data and adopts a fine-tuning approach that requires almost no task-specific architecture for each end task; it has achieved great success in a number of NLP tasks, such as natural language inference and text classification [24; 29; 74].

### Evaluation metric

To evaluate the performance of the binary and multi-class barrier classification models, Accuracy and F1-score are used as evaluation measures.

* **F1-Score:** It combines the precision and recall of a classifier into a single metric by taking their harmonic mean. It is defined as: \[F_{1}=\frac{2(Precision*Recall)}{Precision+Recall}\]
* **Accuracy:** Accuracy is a metric used in classification problems and it tells the percentage of accurate predictions (TP and TN). We calculate it by dividing the number of correct predictions (TP and TN) by the total number of predictions (TP+FP+TN+FN). It is defined as: \[Accuracy=\frac{TP+TN}{TP+FP+TN+FN}\]

Figure 8: This bar chart shows the class distribution for the economic and cultural barriers (from left to right). The bar with red color shows the distribution for the class "Information-not-crossing" whereas the bar with green color shows the distribution for the class "Unsure". The bar with blue color shows the distribution for the class "Information-crossing". Each of the two bar charts presents the class distribution for all ten categories.

Figure 10: Overview of the task of barrier classification using the Wikipedia concepts

Figure 9: These line charts show the number of publishers, the average number of news articles per publisher, and the average number of events per publisher (from left to right). The lines with red, green, orange, blue, and gray colors represent the political, linguistic, geographic, economic, and cultural barriers respectively.

### Comparative analysis of the ten categories

We compare the results of all ten news categories based on the evaluation metrics, i.e., accuracy and F1-score. Both metrics are compared on bar charts in order to display a concise comparison. Since the results of LR among the five traditional machine learning algorithms (LR, SVC, NB, DT, and kNN) were the highest in all the categories, we exclude the others. The terms PM-LSTM (proposed model LSTM) and PM-BERT (proposed model BERT) in Figure 11 denote the usage of LSTM and BERT with our approach based on Wikipedia concepts based semantic knowledge.

#### F1-Score

The obtained bar chart is shown in Figure 11. It compares the results of LR, LSTM, and BERT with our proposed approach that is based on Wikipedia concepts based semantic knowledge. The F1 scores using BERT with the Wikipedia concepts based semantic knowledge are higher than LR, LSTM, and BERT for business, computers, games, shopping, and sports (with improvements of 0.03, 0.03, 0.03, 0.36, and 0.02 F1 score, respectively). In the case of recreation, science, and society, LSTM with our approach achieves a higher F1 score (with improvements of 0.03, 0.03, and 0.02 F1 score, respectively). In the case of the health and home categories, we did not see any improvement from our approach in the results.

#### Accuracy

The obtained bar chart is shown in Figure 12.
The accuracy using LSTM with Wikipedia concepts based semantic knowledge is higher than LR, LSTM, and BERT for games, home, recreation, science, and society (with the improvement of 0.07, 0.02, 0.01, 0.02, and 0.02 accuracy score respectively); In case of business, computers, shopping, and sports categories, BERT model with our approach achieves higher accuracy (with the improvement of 0.02, 0.02, 0.09, and 0.07 accuracy score respectively); By comparing and analyzing the results of different classification methods on ten different kinds of news categories, we can say that Wikipedia concepts based semantic knowledge helps in achieving a higher F1 score and accuracy. ### Comparative analysis of the three types of algorithms After discussing the results of all the ten news categories, we compare all the five different types of barriers based on improvements in classification results. Figure 13 presents the statistics about each barrier. **Q3:** Which classification methods (classical or deep learning methods) yield the best performance to barrier classification task? For the linguistic and geographic barrier, we see that our proposed methods (LSTM and BERT with semantic knowledge) outperform for six categories whereas for the five categories of political barrier, a slight improvement in classification results have been seen. It is also noticeable that the there are seven Figure 11: It presents the F1 score of the five different machine learning algorithms (LR, LSTM, BERT, PM-LSTM, and PM-BERT) for the ten different categories (business, computers, games, health, home, recreation, science, shopping, society, and sports). Figure 12: It presents the accuracy of five different machine learning algorithms (LR, LSTM, BERT, PM-LSTM, and PM-BERT) for the ten different categories (business, computers, games, health, home, recreation, science, shopping, society, and sports). categories in economic barrier where proposed methods yields the best score. However, there are slight improvement for cultural barrier. ### Analysis and discussion Experiments of the novel approach on the ten different kinds of news and for the five different barriers have brought some insights regarding information spreading. In order to support the hypothesis, we have set three research questions 1.3. To answer the first research question (Does the information spreading in news varies across different topics and different barriers?), we compare the number of news publishers, the average number of articles per publisher, and the average number of events per publisher for all the categories and barriers (see Figure 9). The comparative analysis indicates that the ten different categories have different scopes across the different barriers. However, the society and science categories appeared to have the highest number of news publishers, the highest average number of news articles, and the news events per publisher for all the barriers whereas the games category appeared to have a minimum number of articles and publishers. To answer the second research question (What prominent relations appear between Wikipedia concepts, and different barriers and categories?), we find the intersection between the Wikipedia concepts belonging to different barriers and categories (see Figure 6). The results suggest that although Wikipedia concepts are shared among the barriers, a category in each barrier has some unique Wikipedia concepts. Similarly, the same fact exists between the different categories. 
Therefore it might be possible that it will help in improving the classification results. The results of the annotation show that the data does not have higher imbalanced data for both binary and ternary class classification (see Figures 7, 8). Therefore we consider using it for classification without using any technique to make it balanced. To answer our third research question (Which classification methods (classical or deep learning methods) yield the best performance to barrier classification task?), We perform classification with traditional machine learning methods including Logistic Regression (LR), Naive Bayes (NB), Support Vector Classifier (SVC), k-nearest Neighbor (kNN), and Decision Tree (DT). Afterward, we perform classification with and without Wikipedia concepts using LSTM and BERT. We evaluate the models using accuracy and F1 score (see Subsection 5.3). We analyze the classification results by comparing the ten categories 5.4 and three types of classification methods 5.5. The results suggest that for the linguistic and geographic barrier, our proposed approach yields the best scores for the six categories, whereas for the political barrier, we see a slight improvement in the classification of the five categories. On the other hand, LSTM and BERT with Wikipedia concepts yield the best score for the seven categories of the economic barrier. Overall, we can say that Wikipedia concepts-based semantic knowledge help in achieving a higher F1 score and accuracy. ## 6 Conclusions In this paper, we focused on the classification of news-spreading barriers by utilizing semantic knowledge in form of Wikipedia concepts. We consider news related to ten different categories (business, computers, games, health, home, recreation, science, shopping, society, and sports). After completing the automatic annotation of news data for the five barriers including cultural, economic, political, linguistic, and geographical (binary class classification of the linguistic, political, and geographical barrier and ternary class classification of the cultural and political barrier), we perform classification with traditional machine learning methods (LR, NB, SVC, kNN, Figure 13: It presents two bars for each barrier. The green bar means the number of categories for whom the classification methods show improved F1 and accuracy scores using our proposed approach (using Wikipedia concepts based semantic knowledge). The gray bar means the number of categories for whom the classification methods do not improve the F1 and accuracy score. and DT), deep learning (LSTM) and transformer-based method (BERT). Our findings suggest that Wikipedia concepts-based semantic knowledge help in achieving a higher F1 score and accuracy. ## 7 Acknowledgments The research described in this paper was supported by the Slovenian research agency under the project J2-1736 Causalify and by the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 812997.
2307.11166
Exploring reinforcement learning techniques for discrete and continuous control tasks in the MuJoCo environment
We leverage the fast physics simulator, MuJoCo to run tasks in a continuous control environment and reveal details like the observation space, action space, rewards, etc. for each task. We benchmark value-based methods for continuous control by comparing Q-learning and SARSA through a discretization approach, and using them as baselines, progressively moving into one of the state-of-the-art deep policy gradient method DDPG. Over a large number of episodes, Qlearning outscored SARSA, but DDPG outperformed both in a small number of episodes. Lastly, we also fine-tuned the model hyper-parameters expecting to squeeze more performance but using lesser time and resources. We anticipated that the new design for DDPG would vastly improve performance, yet after only a few episodes, we were able to achieve decent average rewards. We expect to improve the performance provided adequate time and computational resources.
Vaddadi Sai Rahul, Debajyoti Chakraborty
2023-07-20T18:01:48Z
http://arxiv.org/abs/2307.11166v1
# Exploring reinforcement learning techniques for discrete and continuous control tasks in the MuJoCo environment

###### Abstract

We leverage the fast physics simulator MuJoCo to run tasks in a continuous control environment and reveal details like the observation space, action space, rewards, etc. for each task. We benchmark value-based methods for continuous control by comparing Q-learning and SARSA through a discretization approach and, using them as baselines, progressively move into one of the state-of-the-art deep policy gradient methods, DDPG. Over a large number of episodes, Q-learning outscored SARSA, but DDPG outperformed both in a small number of episodes. Lastly, we also fine-tuned the model hyper-parameters expecting to squeeze more performance while using less time and fewer resources. We anticipated that DDPG's new design would vastly improve performance, yet after only a few episodes, we were able to achieve decent average rewards. We expect to improve the performance provided adequate time and computational resources. Furthermore, without any modification, the methods were adapted to the other MuJoCo contexts.

1Northeastern University 360 Huntington Avenue Boston, Massachusetts 02115 [email protected], [email protected]

## Introduction

In this work, we look at several reinforcement learning strategies for solving problems in discrete and continuous observation spaces, as well as discrete and continuous action spaces. In a continuous environment, predicting behaviors over a continuous space has always been a challenging problem for an agent. Discretizing the observation and action spaces is an obvious approach to these problems. The loss of information that occurs when dividing a continuous region into 'K' buckets is a significant constraint. Increasing the number of buckets can help, but for continuous spaces, it becomes intractable as the action-value table grows exponentially large. We investigate the various environments in OpenAI Gym's MuJoCo suite in this paper. To cope with continuous spaces via bucketing, we use model-free temporal difference learning approaches - Q-learning and SARSA - as a baseline. To improve the findings, the Deep Deterministic Policy Gradient (DDPG) was used. We consider a typical reinforcement learning setup in which an agent interacts with its environment. The environment we have taken into consideration here is **MuJoCo** (which stands for **M**ulti-**J**oint dynamics with **C**ontact). It is a general-purpose physics engine designed to help with research and development in robotics, biomechanics, machine learning, and other fields that need quick and precise modeling of articulated structures interacting with their surroundings.

### Environment.

MuJoCo is a C/C++ library with a C API, which operates on low-level data structures that are pre-allocated by the built-in XML parser and compiler. Interactive visualization with a native GUI, produced in OpenGL, is included in the package. It provides continuous control tasks, running in a fast physics simulator, and a plethora of utility functions for computing physics-related quantities. MuJoCo offers a range of continuous control tasks.

### Model elements.

The elements of a MuJoCo model are as follows:

1. _Body_: Bodies are the components that make up kinematic trees, _i.e._, a tree of rigid bodies, such as the human body, having only a predefined mass and inertia. Bodies do not possess any geometric properties.
2. _Joint_: Joints are defined inside bodies.
Joints help to create motion between the particular body and its parent, otherwise they would be stiff and immovable. MuJoCo joints have four primitive types: slide, hinge, ball and free. 3. _DOF_: Degrees of freedom (DOFs) refers to the limits to which physical movement of the rigid bodies are possible. They are closely related to joints, however, different joints can have multiple DOFs. DOFs can have properties like damping, maximum velocity, armature, inertia, friction and other relevant data from coordinate systems. 4. _Geom_: Geoms are mass-less geometric objects primarily used in collision detection. MuJoCo supports geom types as plane, sphere, capsule, ellipsoid, box, cone, and mesh. 5. _Site_: Sites are locations of interest that are defined in the bodies' local frames and hence move with them. They are utilized in the engine to route tendons and apply various sorts of forces, but they may also be used by the application to encode sensor positions and other information. 6. _Constraint:_ Constraints are used to define a set of pre-formulated rules that specify how they will behave in the environment, like restraining ball or hinge joints. 7. _Tendon:_ A tendon can be used to impose constraints as "...the shortest path that passes through a sequence of specified sites or wraps around specified geoms." 8. _Actuator:_ Actuators receive control inputs from the environment that directly co-relate to the movement or kinematics of the model. They can transmit forces (_e.g_. torque), on any of joints, sites or tendons. ## 3 Related work Researchers have recently achieved substantial success by integrating deep learning capabilities for learning feature representations with reinforcement learning. Some instances include teaching agents to play video games using raw pixel data and teaching them sophisticated manipulation skills. Other instances include designing generalized agents that can "reinforce" itself into any task, given enough time and resources. Expected SARSA might be employed for TD-learning approaches. However, due to the high spatial complexity, Tabular representations proved inefficient. Another solution to our problem might be deep Q-learning. The features are calculated using a neural network in a deep form of approximate Q-learning. Despite the fact that it operates with continuous data, it models the probability distribution of discrete actions, necessitating the binning of the action space. For our scenario, Actor-Critic, a policy gradient approach, might have also been employed. It performs effectively in areas where continuous control is required. However, the intended Q-value and present Q-value are both created by the same network, which is a huge disadvantage.The calculated TD error becomes inconsistent as a result of inconsistent weight changes. In recent times, the state of the art in reinforcement learning in continuous control tasks are achieved in some or the other variation of deterministic policy gradient methods, _e.g_., Deep Deterministic Policy Gradient, Advantage Actor Critic (A2C), Asynchronous Advantage Actor Critic (A3C), Twin delayed deep deterministic policy gradient (TD3) etc.,. ## 4 Background The tasks in the environment can be primarily associated with Locomotion, although few overlap with basic and hierarchical task as well, _i.e_.,, Locomotion + Food collection. ### Ant. The task is to make a 3-dimensional four-legged robot walk. 
XML descriptionThe ant has a spherical torso, with each of its four legs constituting of three capsule mesh geoms, connected by two hinge joints. The four legs are connected to the "torso" by four free joints. * The **cfrc_ext* * are the external forces (force x,y,z and torque x,y,z) applied to each of the links at the center of mass. This is 14 * 6: the ground link, the torso link plus the 12 links for all legs (3 links for each leg). Action space.Has a shape of (8, ), translating directly as torque upon the 8 hinge joint actuators (2 for each leg). Rewards.The rewards are represented as: ``` 1ctrl_cost=self.control_cost(action) 2contact_cost=self.contact_cost 3 4forward_reward=x_velocity 5healthy_reward=self.healthy_reward 6 7 rewards=forward_reward+healthy_reward 8costs=ctrl_cost+contact_cost 9 10reward=rewards-costs ``` * Episodic reward is calculated by inflicting a cost on the total reward for the ant. * One of the cost is a control cost for taking actions in the environment. Another is directly proportional to how many contacts the ant makes with the ground. * This cost is deducted from the summed reward for moving forward and for being upright most of the time. HalfCheetah.The task is to make a 2-dimensional cheetah robot run. XML descriptionThe head and torso of the HalfCheetah are both capsule mesh geoms. Each thigh, shin and feet are capsules with a hinge joint connecting them together. This is obviously true for both the front and the back legs. ``` WorldBodyHeadTorsoBackthighBackshinBackfoot ``` State spaceHas a shape of (17, ), as position and velocity for the slider joints, and angle and angular velocities for the hinge joints (3 for each leg, 3 axes for body). ``` Name&Joint&Parameter ``` ``` rootx&slider&position(m)rootz&slider&position(m) ### Humanoid. The task is to make a 3-dimensional two-legged robot walk. XML descriptionThe head, torso and uwaist are sphere and two capsule geom meshes respectively. Also, conjoined are the left arm, right arm and the lower waist. Pelvis is a part of the lower waist which have the legs connected as a hinge joint. Both the 2 legs and 2 arms have 2 hinge joints each, responsible for moving both the lower arm and hand, and both the shin and foot respectively. XML descriptionThe head, torso and uwaist are sphere and two capsule geom meshes respectively. Also, conjoined are the left arm, right arm and the lower waist. Pelvis is a part of the lower waist which have the legs connected as a hinge joint. Both the 2 legs and 2 arms have 2 hinge joints each, responsible for moving both the lower arm and hand, and both the shin and foot respectively. * **self_sim.data.qpos** are the positions, with the first 7 element being the 3D position (x,y,z) and orientation (quaternion x,y,z,w) of the torso, and the remaining 8 positions being the joint angles. * The **[2:], operation** removes the first 2 elements from the position _i.e._, the X and Y position of the agent's torso. * **self_sim.data.qvel** are the velocities, with the first 6 elements being the 3D velocity (x,y,z) and 3D angular velocity (x,y,z) and the remaining 8 are the joint velocities. * The **cfrc_ext* * are the external forces (force x,y,z and torque x,y,z) applied to each of the links at the center of mass. This is 14 * 6: the ground link, the torso link plus the 12 links for all legs (3 links for each leg). * **qfrc_actuator** are likely the actuator forces. **cinert** seems the center of mass based inertia and **cvel** the center of mass based velocity. 
Figure 3: Two-legged humanoid learning how to walk

**Action space.** Has a shape of (17, ), translating directly as torque upon the 17 hinge joint actuators listed in the tree.

**Reward.** Represented the same as for the Ant agent.

### InvertedDoublePendulum.

The task is to balance a pole on a pole, on a cart.

**XML description.** The root of this model is a capsule geom on a rail of joint type slide. It is connected to a pole of geom type capsule which, in turn, is connected to another pole of geom type capsule via a hinge joint.

**State space.** Composed of:

```
self.sim.data.qpos.flat[2:]
self.sim.data.qvel.flat[:2],
self.get_body_com("fingertip") - self.get_body_com("target"),
```

```
Name    Joint   Parameter
joint0  hinge   angle (rad)
joint1  hinge   angle (rad)
joint0  hinge   angular velocity (rad/s)
joint1  hinge   angular velocity (rad/s)
target  slider  position (m)
```

**Action space.** Has a shape of (2, ), represented as torque on the two joints, resulting in the agent reaching the target.

```
Name    Actuator  Parameter
joint0  motor     torque (Nm)
joint1  motor     torque (Nm)
```

```
Name  Actuator  Parameter
rot2  motor     torque (Nm)
rot3  motor     torque (Nm)
```

**Reward.** The rewards are represented as:

```
def control_cost(self, action):
    control_cost = self._ctrl_cost_weight * np.sum(np.square(action))
    return control_cost

xy_position_before = self.sim.data.qpos[0:2].copy()
self.do_simulation(action, self.frame_skip)  # advance the simulation before reading the new position
xy_position_after = self.sim.data.qpos[0:2].copy()

xy_velocity = (xy_position_after - xy_position_before) / self.dt
x_velocity, y_velocity = xy_velocity

forward_reward = self._forward_reward_weight * x_velocity

ctrl_cost = self.control_cost(action)
reward = forward_reward - ctrl_cost
```

### Hopper.

The task is to make a 2-dimensional robot hop.

**XML description.** The torso of the agent is followed by a single thigh, a single leg and a single foot, all of them being mesh capsule geoms, and all connected via hinge joints.
```
WorldBody
  Torso
    Thigh
      Leg
        Foot
```

```
Name         Actuator  Parameter
thigh_joint  motor     torque (Nm)
leg_joint    motor     torque (Nm)
foot_joint   motor     torque (Nm)
```

Figure 7: One-legged robot learning to hop

## Project description

### Online Value-Based Methods

"Bootstrapping" in reinforcement learning means that the estimate of one state \(V_{\pi}(s)\) builds upon the estimate of successor states \(V_{\pi}(s^{\prime})\). Dynamic programming uses bootstrapping and is a model-based method. Other methods do not rely on bootstrapping and are known as model-free methods, like Monte-Carlo. Temporal difference learning combines Monte-Carlo (model-free) and dynamic programming (model-based). The temporal difference (TD) error is given by: \[\delta_{t}=R_{t+1}+\gamma*V(s_{t+1})-V(s_{t})\] \(\delta_{t}\): TD error, \(V(s_{t})\): value estimate of state '\(s_{t}\)', \(V(s_{t+1})\): value estimate of next state '\(s_{t+1}\)', \(R_{t+1}\): reward obtained on the transition from '\(s_{t}\)' to '\(s_{t+1}\)'.

#### Sarsa.

SARSA combines Generalized Policy Iteration with Temporal Difference learning to find improved policies. It uses the action-value (Q-value) form of TD. The name 'SARSA' stands for \(S_{t},A_{t},R_{t+1},S_{t+1},A_{t+1}\) → (state, action, reward, next state, next action). It is an on-policy TD control method. The update equation used by SARSA is: \[Q(s_{t},a_{t})\Leftarrow Q(s_{t},a_{t})+\alpha*[R_{t+1}+\gamma*Q(s_{t+1},a_{t+1})-Q(s_{t},a_{t})]\] \(Q(s_{t},a_{t})\): action-value estimate for state \(s_{t}\) and action \(a_{t}\), \(\alpha\): learning rate, \(\gamma\): discount factor, \(Q(s_{t+1},a_{t+1})\): action-value estimate for state \(s_{t+1}\) and action \(a_{t+1}\). In Generalized Policy Iteration with SARSA, we continually estimate \(Q_{\pi}\) for the behavior policy \(\pi\), and at the same time change \(\pi\) towards greediness with respect to \(Q_{\pi}\).

#### Q-Learning.

Q-learning is an off-policy temporal difference control algorithm. In Q-learning, the incremental update is given by \[Q(s_{t},a_{t})\Leftarrow Q(s_{t},a_{t})+\alpha*[R_{t+1}+\gamma*\max_{a}Q(s_{t+1},a)-Q(s_{t},a_{t})]\] \(\alpha\): learning rate, \(\gamma\): discount factor, \(Q(s_{t},a_{t})\): action-value estimate for state \(s_{t}\) and action \(a_{t}\), \(Q(s_{t+1},a)\): action-value estimate for state \(s_{t+1}\) and action \(a\). The target policy is \(\pi^{*}=\operatorname*{argmax}_{a}Q(s,a)\). The term \(\max_{a}Q(s_{t+1},a)\) selects greedy actions irrespective of the actual policy \(\pi\) (the behavior policy).

### Policy gradient

Until now, we considered action-value estimates for learning an optimal policy. As the observation and action spaces grow, tabular methods prove inefficient due to the exponential growth of the Q-table size, resulting in the curse of dimensionality problem. Now, we consider the class of methods that can select actions without using a value function. These are called policy gradient methods.
This method is applicable for learning optimal policies in the continuous observation space, using probability distributions over the action space. For this, we use a parameterized policy given by \(\pi(a|s,\theta)=Pr\{A_{t}=a|S_{t}=s,\theta_{t}=\theta\}\) where, '\(\theta\)' is the policy's parameter vector. Like the weight parameter vector 'w' we use for approximate action-value functions \(\hat{q}(s,a,w)\), here we use '\(\theta\). The constraints on policy parameterization are: \[\pi(a|s,\theta)\geq 0\ \forall\ a\ \in\ A\text{ {and}}\ s\ \in\ S\] \[\Sigma_{a\epsilon A}\pi(a|s,\theta)=1\ \forall\ s\ \in\ S\] \(\pi(a|s,\theta)\) is differentiable with respect to parameter '\(\theta\)' _i.e._, \(\nabla\ \pi(a|s,\theta)\) exists. In order to satisfy these conditions, we use a "softmax policy parameterization". \[\pi(a|s,\theta)\ =\ e^{h(a,s,\theta)}\ /\ \Sigma_{b\in A}e^{h(b,s,\theta)}\] \(h(s,a,\theta)\) is knows as parameterized numerical preferences where \(h(s,a,\theta)\in R\). The action with the highest preferences in each state are given the highest probabilities of being selected according to equation. Numerical preferences can be computed by a deep Artificial Neural Network, where \(\theta\) is the vector of all connection weights of the network or could simply be linear in features. \(h(s,a,\theta)=\theta^{T}X(s,a)\) where X(s,a) is some feature vector. The goal of RL is maximizing rewards in the long run \(R_{t},R_{(t+1)},R_{(t+2)}...\). In the policy gradient case, our objective maximizing the average reward \(r_{\pi}\) hence, we use gradient ascent. \[r(\pi)=\Sigma_{s}\mu(s)\Sigma_{a}\pi(a|s,\theta)\Sigma_{s^{\prime},r}p(s^{ \prime},r|s,a)*r\] \[\nabla_{\theta}r(\pi)=\nabla_{\theta}[\Sigma_{s}\mu(s)\Sigma_{a}\pi(a|s, \theta)\Sigma_{s^{\prime},r}p(s^{\prime},r|s,a)*r]\] From the product rule of calculus, \[\nabla_{\theta}r(\theta) =\Sigma_{s}\mu(s)\nabla_{\theta}\Sigma_{a}\pi(a|s,\theta)\Sigma_{ s^{\prime},r}p(s^{\prime},r|s,a)*r\] \[+\Sigma_{s}\nabla_{\theta}\mu(s)\Sigma_{a}\pi(a|s,\theta)\Sigma_{ s^{\prime},r}p(s^{\prime},r|s,a)*r\] The challenge of this method lies in computing the gradient of the state distribution \(\pi(s)\) as it changes with \(\theta\). To address this, we use the "policy gradient theorem" which returns a simplified expression independent of \(\nabla_{\theta}\mu(s)\). ### Deep Deterministic Policy Gradient Earlier method worked well with discrete action spaces but fails for continuous control problems. Deep Deterministic Policy Gradient (DDPG) incorporates Deterministic Policy Gradient (DPG) into the Actor-Critic structure to extend to continuous action spaces. It relies on off-policy updates using target networks. DDPG makes use of 4 networks in total - **actor network**, **critic network**, **target-actor network**, and **target-critic network**. The actor network computes the deterministic policy \(a_{t}=\mu(s_{t}|\theta^{\mu})\), where \(\theta^{\mu}\) are the weights for the actor network. However, this policy might not explore the full state and action space. To encourage exploration, it makes use of a random process called the Ornstein-Uhlenbeck Noise \(N_{t}\). 
In a continuous setting, it is defined as: \[dN_{t}=\beta*(\mu-N_{t})*dt+\sigma*dW_{t}\] In the discrete case, \[N_{t+1}=(1-\beta)*N_{t}-\mu+\sigma*(W_{t+1}-W_{t})\] \(N_{t}:\) noise at time 't' \(\beta:\) decay or growth rate of the system \(\mu:\) asymptotic mean \(\sigma:\) variation or size of noise \(W:\) wiener process The Weiner process also known as Brownian motion is a stationary process with white noise increments of a noise distribution \(N_{t}\) with \(\mu=0\) and \(\sigma=1\). The Critic network \(Q(s_{t},a_{t}|\theta^{Q})\) evaluates state-action pairs where \(\theta^{Q}\) are its weights. The target actor and critic network denoted by \(Q^{\prime}\) and \(\mu^{\prime}\) with weights \(\theta^{Q^{\prime}}\) and \(\theta^{\mu^{\prime}}\) respectively are a soft copy of the weights of actor and critic network \(\theta^{\mu}\) and \(\theta^{Q}\) respectively. \[\theta^{Q^{\prime}} \Leftarrow\theta^{Q}\] \[\theta^{\mu^{\prime}} \Leftarrow\theta^{\mu}\] Replay Buffer \(R\) stores the transition dynamics of the environment _i.e._, \(R=\{(s_{t},a_{t},r_{t},s_{t+1})\}\ \forall\ a_{t}\in A\)_and_\(s_{t}\in S;t\in[1,M^{\prime}]\) where \(M^{\prime}\) is the memory limit. Whenever an agent takes an action in the environment, the transition tuple \((s_{t},a_{t},r_{t},s_{t+1})\) is added to the replay buffer. The objective of the critic network is minimizing the temporal difference between the target-critic network's output and the estimated Q-value from its network. \[y_{i}=r_{i}+\gamma*Q^{\prime}(s_{i+1},\mu^{\prime}(s_{i+1}|\theta^{\mu^{\prime}} )|\theta^{Q^{\prime}})\] \[L=(1/n)*(y_{i}-Q(s_{i},\mu(s_{i}|\theta^{\mu})|\theta^{Q}))\] \(\mu^{\prime}(s_{i+1}|\theta^{\mu})\): estimated target-actor network's policy \(\mu(s_{i}|\theta^{\mu})|\theta^{Q})\): estimated actor network's policy \(Q^{\prime}(s_{i+1},\mu^{\prime}(s_{i+1}|\theta^{\mu^{\prime}})|\theta^{Q^{ \prime}})\): estimated target-critic network's Q-value \(Q(s_{i},\mu(s_{i}|\theta^{\mu})|\theta^{Q})\): estimated critic network's Q-value \(n\): number of random samples from replay buffer \(L\): critic loss The objective of the actor network is to learn the optimal policy that maximizes the expected return. It uses the policy gradient to achieve its goal. \[J(\theta)=E[Q(s,a)|s=s_{t},a_{t}=\mu(s_{t})]\] \[\nabla_{\theta\mu}J(\theta)\approx\nabla_{a}Q(s,a)\nabla_{(}\theta^{\mu})\mu(s| \theta^{\mu})\] Across 'n' mini-batch samples from replay buffer, \[\nabla_{\theta^{\mu}}J(\theta)\approx(1/n)*\] \[\Sigma_{i}\nabla_{a}Q(s,a|\theta^{Q})|_{s=s_{i},a=\mu(s_{i})}\nabla_{\theta^{ \mu}}\mu(s|\theta^{\mu})|_{s_{i}}\] The target networks are updated using a moving average equation with parameter '\(\tau\)', which indicates the fraction of weights carried over from the original actor-critic networks to the corresponding target networks. 
\[\theta^{Q^{\prime}}\Leftarrow\tau*\theta^{Q}+(1-\tau)*\theta^{Q^{\prime}}\] \[\theta^{\mu^{\prime}}\Leftarrow\tau*\theta^{\mu}+(1-\tau)*\theta^{\mu^{\prime}}\]

The original pseudo-code for DDPG is illustrated below:

**Deep Deterministic Policy Gradient (DDPG)**
```
 1: Input: initial policy parameters \(\theta\), Q-function parameters \(\phi\), empty replay buffer \(\mathcal{D}\)
 2: Set target parameters equal to main parameters \(\theta_{\text{targ}}\leftarrow\theta\), \(\phi_{\text{targ}}\leftarrow\phi\)
 3: repeat
 4:   Observe state \(s\) and select action \(a=\text{clip}(\mu_{\theta}(s)+\epsilon,a_{Low},a_{High})\), where \(\epsilon\sim\mathcal{N}\)
 5:   Execute \(a\) in the environment
 6:   Observe next state \(s^{\prime}\), reward \(r\), and done signal \(d\) to indicate whether \(s^{\prime}\) is terminal
 7:   Store \((s,a,r,s^{\prime},d)\) in replay buffer \(\mathcal{D}\)
 8:   If \(s^{\prime}\) is terminal, reset environment state.
 9:   if it's time to update then
10:     for however many updates do
11:       Randomly sample a batch of transitions, \(B=\{(s,a,r,s^{\prime},d)\}\) from \(\mathcal{D}\)
12:       Compute targets \(y(r,s^{\prime},d)=r+\gamma(1-d)Q_{\phi_{\text{targ}}}(s^{\prime},\mu_{\theta_{\text{targ}}}(s^{\prime}))\)
13:       Update Q-function by one step of gradient descent using \(\nabla_{\phi}\frac{1}{|B|}\sum_{(s,a,r,s^{\prime},d)\in B}(Q_{\phi}(s,a)-y(r,s^{\prime},d))^{2}\)
14:       Update policy by one step of gradient ascent using \(\nabla_{\theta}\frac{1}{|B|}\sum_{s\in B}Q_{\phi}(s,\mu_{\theta}(s))\)
15:       Update target networks with \(\phi_{\text{targ}}\leftarrow\rho\phi_{\text{targ}}+(1-\rho)\phi\) and \(\theta_{\text{targ}}\leftarrow\rho\theta_{\text{targ}}+(1-\rho)\theta\)
16:     end for
17:   end if
18: until convergence
```
**Algorithm 1** Deep Deterministic Policy Gradient.

## Experiments

All the experiments were run on an Nvidia GeForce GTX 1060 with Max-Q Design and an Intel Core i7-7700HQ CPU, with a physical memory (RAM) size of 16 GB. Tabular Q-learning and SARSA (state-action-reward-state-action) were the baseline methods chosen. Our initial approach was to experiment with the performance of discrete observation and action space methods on continuous observation and control tasks. As the ranges were [-inf, inf] for each observation, we sampled across 10k observations and clipped the maximum and minimum ranges to [-25, 25]. We discretized the continuous values into 2 buckets categorized as 0 and 1, for both the action and observation spaces. We varied the learning rate over 0.2, 0.3,..., 0.9. Other parameters chosen were: \(\gamma=0.99\), number of episodes (epochs) = 500 and number of steps per episode = 1000. Actions were selected using an epsilon-greedy policy with \(\epsilon=0.99\) decaying at a rate of: \[\epsilon=\log_{10}((e^{\epsilon}+1)/25)\] The following curves were observed for tabular Q-learning and SARSA.

**Q-learning vs. SARSA**

The following observations listed are from Figure 10:
* The plot shows that Q-learning has extremely stochastic behavior, whereas SARSA exhibits more stable behavior over time. This is due to Q-learning's off-policy nature, in which the target and behavior policy are not the same.
* When the learning rate reaches 1, the term (1 - learning rate) * Q(s, a) = 0 and we rely only on greediness in Q-learning or randomness in SARSA.
* Q-learning performance improves as the learning rate rises until the learning rate reaches one. However, SARSA's performance is rather stable across all learning rates.

It was followed by a Deep Deterministic Policy Gradient (DDPG) network as it was known to perform well on continuous control tasks.
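Before detailing the DDPG configuration, the discretized value-based baseline analyzed above can be made concrete with a short sketch. This is a minimal NumPy illustration, not the exact implementation: the clipping range, the 2-bucket discretization, the epsilon-greedy selection, and the epsilon decay rule come from the text, while the helper names and the action-space size are placeholders.

```
import numpy as np
from collections import defaultdict

N_ACTIONS = 2 ** 6   # e.g. each of HalfCheetah's 6 action dimensions split into 2 buckets
Q = defaultdict(lambda: np.zeros(N_ACTIONS))   # action-value table keyed by bucketed state

def to_bucket(obs, low=-25.0, high=25.0, n_buckets=2):
    # clip each observation dimension to [-25, 25] and map it to a bucket index
    obs = np.clip(np.asarray(obs), low, high)
    idx = ((obs - low) / (high - low) * n_buckets).astype(int)
    return tuple(np.minimum(idx, n_buckets - 1))

def epsilon_greedy(state, eps):
    if np.random.rand() < eps:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(Q[state]))

def q_learning_update(s, a, r, s_next, alpha=0.5, gamma=0.99):
    # off-policy target: bootstrap from the greedy action in the next state
    Q[s][a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s][a])

def sarsa_update(s, a, r, s_next, a_next, alpha=0.5, gamma=0.99):
    # on-policy target: bootstrap from the action actually selected next
    Q[s][a] += alpha * (r + gamma * Q[s_next][a_next] - Q[s][a])

eps = 0.99
eps = np.log10((np.exp(eps) + 1) / 25)   # decay rule quoted in the text
```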
The parameter values taken were: \[\gamma=0.4,\ \tau=0.99,\ \theta=0.15,\ \mu=0.0,\ \sigma=0.3,\ M^{\prime}=10000,\ n=100\] The actor and critic networks were built using two different architectures. The initial architecture included two hidden layers, with the 1st and 2nd hidden layers containing 32 and 16 neurons, respectively. The other architecture used 4 hidden layers, having 32, 64, 32, and 16 neurons in the 1st, 2nd, 3rd, and 4th hidden layers, respectively. The Adam optimizer was employed for adaptive moment estimation. A mini-batch size of n was used to train the critic network, and a mini-batch size of 1 was used to train the actor network.

Figure 8: Pseudo-code from the original paper

Figure 9: Average rewards for Q-learning (left) and SARSA (right) when run under the same conditional parameters in the HalfCheetah-v2 task from the MuJoCo environment.

**Actor network (HalfCheetah-v3 task).** The number of episodes (epochs) was 10, and each episode had 1000 steps. For the former architecture [32, 16], we observe that the average rewards begin at -1200 and after 10 episodes amount to -300. The latter architecture [32, 64, 32, 16] performed comparatively much better, with an initial average reward of -0.621 that converged to -0.401 at the end of 10 episodes.

**Improvements.** The significant improvement involved a few changes:

1. In contrast to the linear activation utilized in the actor network of the former architecture, the latter used a hyperbolic tangent (tanh) activation function. We assumed that a tanh activation would be meaningful because the values for the actions varied over [-1, 1], and tanh maps its input to that range.
2. More hidden layers were added, with a 0.2 probability of dropout, in the latter architecture as compared to fewer layers in the former. As a result, the networks may have learned additional features to better estimate the action-value and the policy.

The decline in the latter phases of training can be ascribed to overfitting or to a greater learning rate leading to overshooting the point of maximum average reward.

Figure 10: Average rewards for Q-learning (left) and SARSA (right) with \(\alpha\) = [0.01, 0.05, 0.1, 0.5, 1] in the HalfCheetah-v2 task from the MuJoCo environment.

Figure 11: Current average and predicted average rewards for DDPG in the HalfCheetah-v2 task with 10000+ iterations.

## Conclusion

The average rewards received by Q-learning and SARSA for different learning rates are compared in this paper. Q-learning had somewhat better rewards than SARSA, while DDPG, a deterministic policy gradient approach, had even better outcomes than both Q-learning and SARSA. There are two things that may be deduced from this:

1. Off-policy methods work better compared to on-policy methods on continuous tasks.
2. Deterministic policy gradient methods work well in continuous control problems.

By enumerating through the replay buffer, we were able to get minibatches using a non-vectorized version of DDPG. This might be the cause of the algorithm's slowness. In the future, we want to employ vectorized implementations. Furthermore, given additional simulation time, the anticipated plot in Figure 11 shows that DDPG would eventually lead to higher rewards.
2304.07254
Dynamic Mobile-Former: Strengthening Dynamic Convolution with Attention and Residual Connection in Kernel Space
We introduce Dynamic Mobile-Former(DMF), maximizes the capabilities of dynamic convolution by harmonizing it with efficient operators.Our Dynamic MobileFormer effectively utilizes the advantages of Dynamic MobileNet (MobileNet equipped with dynamic convolution) using global information from light-weight attention.A Transformer in Dynamic Mobile-Former only requires a few randomly initialized tokens to calculate global features, making it computationally efficient.And a bridge between Dynamic MobileNet and Transformer allows for bidirectional integration of local and global features.We also simplify the optimization process of vanilla dynamic convolution by splitting the convolution kernel into an input-agnostic kernel and an input-dependent kernel.This allows for optimization in a wider kernel space, resulting in enhanced capacity.By integrating lightweight attention and enhanced dynamic convolution, our Dynamic Mobile-Former achieves not only high efficiency, but also strong performance.We benchmark the Dynamic Mobile-Former on a series of vision tasks, and showcase that it achieves impressive performance on image classification, COCO detection, and instanace segmentation.For example, our DMF hits the top-1 accuracy of 79.4% on ImageNet-1K, much higher than PVT-Tiny by 4.3% with only 1/4 FLOPs.Additionally,our proposed DMF-S model performed well on challenging vision datasets such as COCO, achieving a 39.0% mAP,which is 1% higher than that of the Mobile-Former 508M model, despite using 3 GFLOPs less computations.Code and models are available at https://github.com/ysj9909/DMF
Seokju Yun, Youngmin Ro
2023-04-13T05:22:24Z
http://arxiv.org/abs/2304.07254v1
Dynamic Mobile-Former: Strengthening Dynamic Convolution with Attention and Residual Connection in Kernel Space ###### Abstract We introduce Dynamic Mobile-Former(DMF), maximizes the capabilities of dynamic convolution by harmonizing it with efficient operators. Our Dynamic Mobile-Former effectively utilizes the advantages of Dynamic MobileNet (MobileNet equipped with dynamic convolution) using global information from light-weight attention. A Transformer in Dynamic Mobile-Former only requires a few randomly initialized tokens to calculate global features, making it computationally efficient. And a bridge between Dynamic MobileNet and Transformer allows for bidirectional integration of local and global features. We also simplify the optimization process of vanilla dynamic convolution by splitting the convolution kernel into an input-agnostic kernel and an input-dependent kernel. This allows for optimization in a wider kernel space, resulting in enhanced capacity. By integrating lightweight attention and enhanced dynamic convolution, our Dynamic Mobile-Former achieves not only high efficiency, but also strong performance. We benchmark the Dynamic Mobile-Former on a series of vision tasks, and showcase that it achieves impressive performance on image classification, COCO detection, and instanace segmentation. For example, our DMF hits the top-1 accuracy of 79.4% on ImageNet-1K, much higher than PVT-Tiny by 4.3% with only \(1/4\) FLOPs. Additionally, our proposed DMF-S model performed well on challenging vision datasets such as COCO, achieving a 39.0% mAP, which is 1% higher than that of the Mobile-Former 508M model, despite using 3 GFLOPs less computations. Code and models are available at [https://github.com/ysj9909/DMF](https://github.com/ysj9909/DMF) ## 1 Introduction Recently, vision transformer(ViT) [1, 2] demonstrates the advantage of global processing and excellent capabilities with scalability over convolutional neural networks (CNNs). However, since ViT has a time complexity that scales quadratically with the sequence length, its computational efficiency decreases when working with longer sequences, making it more challenging to train and deploy ViT models in scenarios with limited computational resources. If we reduce the computational cost to under 500M FLOPs, MobileNet [3, 4, 5] and other similar-sized lightweight CNNs [6, 7, 8] still outperform ViT variants in terms of efficiency, thanks to their ability to perform local processing using techniques such as depth-wise separable convolution [4] and group convolution with channel shuffle [6, 7]. Along with great advances in efficient CNN architecture design, dynamic convolution [9, 10] has recently gained popularity for the implementation of lightweight networks due to its ability to achieve significant performance gains with negligible computational cost. This has led to its adoption for multiple vision tasks [10, 11, 12, 13, 14, 15]. Moreover, as many computer vision applications are not constrained by the number of parameters, but instead have strict latency requirements for inference, such as real-time object detection and tracking for autonomous driving, the potential of using dynamic convolution in these scenarios is increasing. The basic idea of dynamic convolution is to dynamically aggregate multiple convolution kernels into a convolution weight matrix, using an input-dependent attention mechanism. 
\[\mathbf{W}_{dynamic}(x)=\sum_{k=1}^{K}\pi_{k}(x)\mathbf{W}_{static}^{k}\ \ \ s.t.\ \ \ 0\leq\pi_{k}(x)\leq 1, \tag{1}\] where K static convolution kernels \(\mathbf{W}_{static}^{k}\) are weighted summed using attention scores \(\ \pi_{k}(x)\). Dynamic convolution has two limitations: 1) The input to a kernel attention module may have limited representational power, which can restrict the ability of the attention mechanism to calculate relevant filters effectively. 2) a joint optimization of attention scores and static kernels can be challenging. In this work, we propose our dynamic residual group convolution that can efficiently compute input-specific local features while addresses the limitations mentioned above. [9, 10] uses the global average pooled features as inputs to the kernel attention module to generate attention scores. However, spatially squeezed features have limited representational power, because all features at each location are merged with the same weight. To solve this limitation, our Dynamic Mobile-Former utilizes global salient features, which are calculated using light-weight attention, as input to the kernel attention module. This allows appropriate kernels to be selected for each input. And [9] proposed the use of a sigmoid layer to generate attention scores, leading to a significantly large space for the convolution kernel that makes the learning of attention scores difficult. [10] replaced the sigmoid layer with a softmax function to compress the kernel space, thereby facilitating learning of the attention module. However, the model capacity is reduced as the kernel space is limited. To solve this problem, we separate the existing convolution kernel into an input-agnostic kernel and an input-dependent kernel to ease learning of the kernel attention module. In addition to our method, we use sigmoid activation to achieve better performance than [9, 10]. Its effectiveness has been verified by experimental results in section 4.3. We also propose a more efficient module by combining the above techniques with group convolution(see section 3.3). Our DMF model achieves superior performance in terms of both accuracy and FLOPs, as demonstrated through extensive experiments on ImageNet and other downstream tasks. Our method shows the favorable tradeoff between accuracy and FLOPs (see Fig. 1). For example, DMF-S achieves 79.4% top-1 accuracy on ImageNet, which is higher than that of the MobileFormer-508M [19] (current state-of-the-art) with less computations. DMF-S also hits 0.5% higher performance in COCO detection compared to ResNet101 [20], while utilizing almost half the computations required by ResNet101 (315G \(\rightarrow\) 165G). ## 2 Related Work **Efficient CNNs:** MobileNet [3, 4, 5] uses depthwise separable convolution to decompose a standard k x k convolution into a depthwise convolution and a pointwise convolution. ShuffleNet [6, 7] combines group convolution and channel shuffle to improve the efficiency of the network. EfficientNet [8, 21] proposes a compound scaling method that scales up the depth, width, and resolution of the network in a principled manner. Other efficient operators include cheap linear transformations in GhostNet [22], mixing up multiple kernel sizes [23], and using additions to trade multiplications in AdderNet [24]. Our Dynamic Mobile-Former effectively combines the efficient operators such as depthwise separable convolution, group convolution presented mentioned above. 
**Vision Transformers and efficient variants:** The pioneering work ViT [1] directly applied the transformer [25] to classification with image patches as input. Since then, there have been numerous attempts [26, 27, 28, 29, 30, 31, 32, 33, 34, 35] to apply transformer in computer vision tasks. Furthermore, research aimed at harmonizing strengths of ViT and CNN models [36, 31, 37, 38, 39, 40, 41, 42, 26] has gained popularity in the computer vision community. [31, 26, 38] leverage a convolutional projection into a vanilla attention [1, 25] module. There are studies that utilize both attention and convolutional blocks either sequentially [36, 40] or in parallel [19, 37, 41, 42]. Figure 1: **Performance comparison between DMF and other methods. Left: Top-1 accuracy on ImageNet-1K [16]. Right: Object detection results on COCO val2017 [17] of various backbones using RetinaNet [18] framework, all numbers are for single-scale training, 12 epochs (1\(\times\)) training schedule. Best viewed in color.** Different from the above ViT variants, there is another line of works [19, 43] that use cross-attention with very few learnable tokens. Our Dynamic Mobile-Former also uses cross-attention with very few learnable tokens for calculating global sailent features. Our Dynamic Mobile-Former endows Dynamic Convolution [9, 10] to convey global information to local features, in contrast to [19] which utilizes cross-attention. **Dynamic Neural Networks:** Dynamic networks increase their representation power by adjusting their parameters or activation functions [44] based on the input. [45, 46] re-calibrates channel information by squeezing global context. Dynamic convolution [9, 10] aggregates multiple kernels based on input dependet attention. DCD [47] proposes a dynamic convolution decompsition method which can get more compact yet competitive models to handle limitations of the dynamic convolution. ODConv [48] introduces a dynamic convolution approach that is not only computationally efficient, but also parametrically efficient. Instead, in this paper we also aim to address the limitations of dynamic convolution in different manner, see the Introduction and Method sections for details. ## 3 Dynamic Mobile-Former ### Overall Architecture As shown in Fig. 2, Dynamic Mobile-Former(DMF) combines the MobileNet [4] and Transformer [1] using dynamic convolution and lightweight cross attention. DY-Mobile (refers to Dynamic Mobile block) extracts local features input-dependently from an input image, while Former and Cross-Attn (refer to Transformer block and cross attention layer, respectively) extract global features using learnable tokens. Unlike ViT, which projects local image patches linearly, Former and Cross-Attn use significantly fewer parameters (e.g. 6 or fewer tokens) resulting in reduced computational cost. As mentioned in [36], ViTs and CNNs are complementary, and it is important to combine features of attention and convolution appropriately. To achieve this goal, different approaches have been proposed such as adding attention layers at the end of each stage as in [36], configuring two layers in parallel and element-wise adding their outputs as in [42], and utilizing cross-attention layers as in [19]. Following [19], our DMF uses cross attention layer to fuse local features to global tokens. 
However, to fuse global tokens into local features, we input the global token into the kernel attention module in dynamic convolution [9, 10].

Figure 2: **The overall architecture of Dynamic Mobile-Former (DMF) and details of the DMF block**. Following [19], Dynamic Mobile-Former adopts a parallel design for processing both local and global features. Each layer includes a Dynamic Mobile block, a cross attention block, a Former block, and an Inverted Residual FFN. DYR-gconv and DYR-dwconv in the Dynamic Mobile block denote dynamic residual group convolution and dynamic residual depth-wise convolution, respectively. Best viewed in color.

To improve model efficiency, we use group convolutions on 1x1 layers, which reduces computational costs by ensuring each convolution operates on its corresponding input channel group. However, when multiple group convolutions are stacked together, a side effect can occur where the outputs from a particular channel are derived from only a small fraction of the input channels. To enable feature communication between different groups of channels, we apply an Inverted Residual Feed-Forward Network (IRFFN) [38] after the DY-Mobile. Fully-connected layers in the IRFFN can solve the above problem by connecting all channels of groups that are separated from each other. The DMF block is able to capture local dependencies by generating input-specific kernels using global information. ### DMF Block The proposed DMF block consists of a Dynamic Mobile block (DY-Mobile), a Former block with a cross attention layer, and an Inverted Residual Feed-Forward Network (IRFFN), as illustrated in Fig. 2 (right). A DMF block has two inputs: 1) a local feature map \(\mathbf{X}^{i}\in\mathbb{R}^{N\times C}\) with \(C\) channels and sequence length \(N=H\times W\) (the resolution of the input of the current block), and 2) global tokens \(\mathbf{Z}^{i}\in\mathbb{R}^{M\times d}\), where \(M\) and \(d\) are the number and dimension of tokens, respectively. Note that \(M\) and \(d\) are consistent across all blocks. The DMF block outputs the updated local feature map \(\mathbf{X}^{i+1}\) and global tokens \(\mathbf{Z}^{i+1}\), which are used as input for the subsequent block. We describe these four parts in the following. ### DY-Mobile DY-Mobile is built based on the inverted bottleneck in [4] with three modifications. Firstly, we substitute all vanilla convolutions with our dynamic residual convolutions. The dynamic residual convolution kernel consists of two parts: an input-agnostic kernel and a dynamic kernel calculated as in Eq. 1. Specifically, adding the input-agnostic kernel to the vanilla dynamic kernel leads to \[\mathbf{W}_{dy-res}(x)=\mathbf{W}_{dynamic}(x)+\mathbf{W}_{input-agnostic},\quad\text{where}\quad\mathbf{W}_{dynamic}(x)=\sum_{k=1}^{K}\pi_{k}(x)\mathbf{W}_{static}^{k},\quad 0\leq\pi_{k}(x)\leq 1, \tag{2}\] where \(\mathbf{W}_{static}\) and \(\mathbf{W}_{input-agnostic}\) are both static, but \(\mathbf{W}_{static}\) is initialized to zero while \(\mathbf{W}_{input-agnostic}\) is initialized randomly. Fig. 3 provides a schematic visualization of the dynamic residual convolution layer. Specifically, to compute the attention scores, we use a sigmoid activation function, and following [10], we adopt a temperature annealing strategy in the early training process to suppress near-zero outputs of the sigmoid function. By doing so, all convolution kernels are optimized simultaneously in the early training epochs. Secondly, we use group convolution for point-wise convolution.
Since a high expansion ratio (3 - 6) is used in the block [4], efficiency can be greatly increased by using group convolution. Lastly, we replace ReLU with GELU [49] as the activation function, following [50]. Note that the kernel size of the depth-wise convolution is 3\(\times\)3 for all layers. DY-Mobile consumes computations of \(O(NC^{2})\). Along with the IRFFN (specified in Sec. 3.5), it accounts for the majority of the computational complexity. ### Former with Cross Attention The light-weight cross attention from local feature map \(\mathbf{X}\) to global tokens \(\mathbf{Z}\) is computed as: \[\mathrm{CrossAttn}=\mathrm{Concat}(\mathrm{head}_{1},...,\mathrm{head}_{N})W^{O}, \tag{3}\] \[\mathrm{head}_{i}=\mathrm{Attention}(\mathbf{Z}_{i}W_{i}^{Q},\mathbf{X}_{i},\mathbf{X}_{i}), \tag{4}\] \[\mathrm{Attention}(\mathbf{q},\mathbf{k},\mathbf{v})=\mathrm{Softmax}(\mathbf{q}\mathbf{k}^{\mathsf{T}}/\sqrt{d_{head}})\mathbf{v}, \tag{5}\] where \(\mathrm{Concat}(\ \cdot\ )\) is the concatenation operation, \(W_{i}^{Q}\in\mathbb{R}^{d\times d_{head}}\) and \(W^{O}\in\mathbb{R}^{d\times d}\) are linear projection weights, and \(N\) is the number of heads of the attention layer. Therefore, the dimension of each head \(d_{head}\) is equal to \(\frac{d}{N}\). To reduce the computational requirements arising from the high-resolution feature map \(\mathbf{X}\), we eliminate the projections (\(W^{K}\), \(W^{V}\)). Through these formulas, the tokens \(\mathbf{Z}\) are able to learn global priors. The subsequent Former sub-block is a standard Transformer [25] block including multi-head self attention and a feed-forward network (FFN). An expansion ratio of 2 is utilized for the FFN. The global features, obtained as described above, are fed into the dynamic residual convolutions. When processing an input feature map of size \(N\times C\) along with \(M\) global tokens of dimension \(d\), the Former model with cross-attention has a computational complexity of \(O(M^{2}d+Md^{2}+MNC+MdC)\).

Figure 3: **A Dynamic Residual Convolution layer.** Firstly, the global average pooled features of the input and the first global token are concatenated. These concatenated features are then passed through a kernel attention module to generate attention scores (\(\pi\)) and obtain dynamic convolution weights using Eq. 1. Finally, a dynamic residual kernel is obtained by adding the obtained dynamic kernel and the input-agnostic kernel as described in Eq. 2. \(W_{static}\) is zero-initialized according to the concept of residual connection, while \(W_{input-agnostic}\) is initialized randomly.

### IRFFN The IRFFN sub-block differs slightly from the inverted residual FFN in [38] by replacing the Batch Normalization [52] after the shortcut connection with Global Response Normalization (GRN) [53]. Specifically, the first layer expands the dimension by a given expansion factor, and the second layer reduces the dimension by the same ratio. Between these two layers, a depth-wise convolution with a shortcut connection is used: \[\mathrm{IRFFN}(\mathbf{X})=1\mathrm{x}1\mathrm{Conv}(\mathrm{SC}(1\mathrm{x}1\mathrm{Conv}(\mathbf{X}))), \tag{6}\] \[\mathrm{SC}(\mathbf{X})=3\mathrm{x}3\mathrm{DWConv}(\mathbf{X})+\mathbf{X}, \tag{7}\] where the activation (GELU [49]) and GRN are omitted for brevity. We also include batch normalization [52] after the two 1x1 convolutions. This alteration facilitates communication between features that were calculated independently in each channel group of the previous DY-Mobile.
By using GRN [53], various features can be calculated in different groups in subsequent DY-Mobile. This sub-block has the same computational costs as DY-Mobile (\(O(NC^{2})\)). ### Model Specification table 1 shows the detailed architectures at different computational complexities (i.e. 499M - 198M FLOPs). Dynamic Mobile-Former(DMF) has three models with different configurations based on the number of groups in DY-Mobile. While they share similar model designs in terms of having the same number of channels per group, they differ in the number of groups and expansion ratio in IRFFN. As an illustration, DMF with 499M FLOPs for image size 224\(\times\)224, which stacks 11 DMF blocks. All blocks contain six global tokens with a dimension of 192, following [19]. It begins with a 3\(\times\)3 convolution as stem and a lite bottleneck block [51] in stage 1. The bottleneck block expands and then squeezes the number of channels by stacking a 3x3 depth-wise and point-wise convolution. For downsampling across multiple blocks, a 3\(\times\)3 depth-wise convolution with stride of 2 is used. To compute head input, we concatenate global average pooled local features with first global token. These features are then passed through two fully-connected layers with hard-swish [5] in between. DMF generates four hierarchical feature maps with vary \begin{table} \begin{tabular}{c|c c c|c c c|c c c} \multirow{2}{*}{stage (output size)} & \multicolumn{2}{c|}{**DMF - S**} & \multicolumn{2}{c|}{**DMF - XS**} & \multicolumn{2}{c}{**DMF - XXS**} \\ \cline{2-10} & operator & \#exp & \#out & operator & \#exp & \#out & operator & \#exp & \#out \\ \hline Tokens (\# heads) & \multicolumn{3}{c|}{6 × 192 (8)} & \multicolumn{3}{c|}{6 × 192 (8)} & \multicolumn{3}{c|}{6 × 192 (8)} \\ \hline Stem (112 × 112) & vanilla 3x3conv & - & 24 & vanilla 3x3conv & - & 16 & vanilla 3x3conv & - & 12 \\ \hline 1 (112 × 112) & bneck-lite & 48 & 24 & bneck-lite & 32 & 18 & bneck-lite & 24 & 12 \\ \hline Downsampling & 3x3 dwconv & - & 24 & dwconv 3x3 & - & 18 & 3x3 dwconv & - & 12 \\ \hline 2 (56 × 56) & DMF block(2,4,2.7) & 144 & 40 & DMF block(2,3,2) & 108 & 30 & DMF block(2,2,3) & 72 & 20 \\ 2 (56 × 56) & DMF block(2,4,2.7) & 120 & 40 & DMF block(2,3,2) & 90 & 30 & DMF block(2,2,3) & 60 & 20 \\ \hline Downsampling & 3x3 dwconv & - & 40 & 3x3 dwconv & - & 30 & 3x3 dwconv & - & 20 \\ \hline 3 (28 × 28) & DMF block(2,4,2.7) & 240 & 72 & DMF block(2,3,2) & 180 & 54 & DMF block(2,2,3) & 120 & 36 \\ 3 (28 × 28) & DMF block(2,4,2.7) & 216 & 72 & DMF block(2,3,2) & 162 & 54 & DMF block(2,2,3) & 108 & 36 \\ \hline Downsampling & 3x3 dwconv & - & 72 & 3x3 dwconv & - & 54 & 3x3 dwconv & - & 36 \\ \hline 4 (14 × 14) & DMF block(2,4,2.7) & 432 & 128 & DMF block(2,3,2) & 324 & 96 & DMF block(2,2,3) & 216 & 64 \\ 4 (14 × 14) & DMF block(4,4,2.7) & 512 & 128 & DMF block(3,3,2) & 384 & 96 & DMF block(2,2,3) & 256 & 64 \\ 4 (14 × 14) & DMF block(4,4,2.7) & 768 & 176 & DMF block(3,3,2) & 576 & 132 & DMF block(2,2,3) & 384 & 88 \\ 4 (14 × 14) & DMF block(4,4,2.7) & 1056 & 176 & DMF block(4,3,2) & 792 & 132 & DMF block(4,2,3) & 528 & 88 \\ \hline Downsampling & 3x3 dwconv & - & 176 & 3x3 dwconv & - & 132 & 3x3 dwconv & - & 88 \\ \hline 5 (7 × 7) & DMF block(4,4,2.7) & 1056 & 240 & DMF block(4,3,2) & 792 & 180 & DMF block(4,2,3) & 528 & 120 \\ 5 (7 × 7) & DMF block(8,4,2.7) & 1440 & 240 & DMF block(6,3,2) & 1080 & 180 & DMF block(4,2,3) & 720 & 120 \\ 5 (7 × 7) & DMF block(8,4,2.7) & 1440 & 240 & DMF block(6,3,2) & 1080 & 180 & DMF block(4,2,3) & 720 & 120 \\ 5 
(7 × 7) & DYR - 1x1conv & - & 1440 & DYR - 1x1conv & - & 1152 & DYR - 1x1conv & - & 960 \\ \hline pool \& concat & - & - & 1632 & - & - & 1344 & - & - & 1152 \\ \hline FC1 & - & - & - & 1920 & - & - & 1920 & - & - & 1600 \\ FC2 & - & - & - & 1000 & - & - & 1000 & - & - & 1000 \\ \hline \# FLOPs & \multicolumn{3}{c|}{499M} & \multicolumn{3}{c|}{285M} & \multicolumn{3}{c|}{198M} \\ \end{tabular} \end{table} Table 1: **Dynamic Mobile-Former Architectures.** The ”bneck-lite” refers to the lite bottleneck block [51]. ”DYR - 1 × conv” denotes our dynamic residual point-wise convolution. We employ depth-wise convolution with stride 2 to handle the spatial downsampling. All dynamic residual convolutions used in models use 8 static kernels. ”DMF block(\(H\), \(G\), \(R\))” denotes Dynamic Mobile-Former block with \(H\) heads for the cross attention layer, \(G\) groups for the point-wise convolution in DY-Mobile, and \(R\) expansion ratio for the IRFFN, respectively. ing resolutions, similar to [4, 8, 20, 30, 38]. These feature maps have strides of 4, 8, 16, and 32 with respect to the input image, enabling DMF to obtain multi-scale representations. This makes DMF well-suited for downstream tasks such as object detection and instance segmentation, as it can capture both fine and coarse details of an object at different scales. ## 4 Experiments In this section, we evaluate the proposed Dynamic Mobile-Former on ImageNet-1K classification [16], COCO object detection [17], and instance segmentation, by comparing it with representative ViTs, CNNs and their hybrid models. Ablation analysis is also conducted to showcase the contribution of each novelty in our method. Our implementation is based on PyTorch library [62] and Timm codebase [63]. ### ImageNet Classification **Implementation Details.** ImageNet [16] provides approximately 1.28M training and 50K validation images for 1000 categories. We train our Dynamic Mobile-Former(DMF) models at an input resolution of 224x224 with a batch size of 1024. All models are trained from scratch using AdamW [64] optimizer for 470 epochs. We use cosine learning rate schedule [65] with linear warmup for 20 epochs. Data augmentation includes Random Erasing [66], Horizontal Flip, Random Resized Crop(RRC), and RandAugmentation [67]. Further we use multi-scale sampler [59] during training. For regularization, we use weight decay, dropout(different combinations for models with different model configurations), and stochastic depth [68] with a rate of 0.1. We also use Exponential Moving average (EMA) [69] with a momentum of 0.9995 for training. **Results of DMF.** Table 2 shows the performances of the proposed DMFs that are specified in table 1. Our models consistently outperform efficient CNNs and multiple variants of vision transformers, with fewer FLOPs. In particular, our DMF-XXS achieves a 0.6% higher top-1 accuracy, while using only 60% of the FLOPs of MobileNetV2 [4] with ODConv(\(4\times\)) [48]. This proves that our dynamic residual convolution with parallel design improves the representational power efficiently. We also compare our Dynamic Mobile-Former with various ViT variants(EdgeViT [54], Swin [29], BoT [61], CMT [38], MPViT [58]). Specifically, Compared to [29, 58], our DMFs achieve higher accuracy but use 3\(\sim\)4 times less computations. This is because that DMF uses efficient operators(group convolution [6], dynamic convolution [10]) combined with light-weight cross-attention with fewer tokens to extract local features effectively. 
Note that our DMF (trained in 470 epochs) even outperforms CMT-Tiny [38] which leverages much longer training (1000 epochs). We plot the accuracy-FLOPs curve in Fig. 1 (Left) to have an intuitive comparison between these methods. Prior works on dynamic convolution [10, 47, 48, 9] have shown efficiency, but their performance still falls short compared to low FLOPs regime efficient models. In contrast, our DMF maximizes representation power combining efficient operators and dynamic convolution in an appropriate manner, striking a balance between accuracy and FLOPs. ### Object Detection and Instance Segmentation **Implementation Details.** We validate our DMF as an efficient vision backbone for object detection and instance segmentation with RetinaNet [18] and Mask R-CNN [71], respectively. The experiments are conducted on COCO [17], which contains 118K training images and 5K validation images of 80 classes. We pretrain the backbones on the ImageNet-1K and replace the original backbones with our DMFs to generate multi-scale feature maps. For RetinaNet [18], All models are trained under standard single-scale and "1\(\times\)" schedule (12 epochs) from ImageNet pre \begin{table} \begin{tabular}{c c c c c c} Model & \#Pub & Res. & **Plarms** & FLOPs & Top-1 \\ \hline EdgeViT-XXS [54] & ECCV’22 & 256\({}^{2}\) & 4.1M & 557M & 74.4 \\ EdgeNx-XS [55] & ECCV’22 & 256\({}^{2}\) & 2.3M & 538M & 75.0 \\ MobileNetV3 1.0c (s) & ICCV’19 & 224\({}^{2}\) & 5.4M & 217M & 75.2 \\ MobileNetV2 1.0c (s) +ODConv4 [48] & ICLR’22 & 222\({}^{2}\) & 11.5M & **190M** & **76.0** \\ \hline **DMF-XXS** & - & 224\({}^{2}\) & 15.1M & 195M & 76.5 \\ \hline Conf-S [56] & arXiv21 & 224\({}^{2}\) & 10.1M & 1.5G & 76.5 \\ EfficientNet-B0 [8] & ICML’19 & 224\({}^{2}\) & 5.3M & 390M & 77.1 \\ Swin-1G [29] & ICCV’21 & 224\({}^{2}\) & 7.3M & 1.0G & 77.3 \\ EfficientNetV1 [57] & arXiv22 & 192\({}^{2}\) & 7.9M & 394M & 77.7 \\ \hline **DMF-XS** & - & 224\({}^{2}\) & 19.8M & **285M** & **77.8** \\ \hline PVT-T [50] & ICCV’21 & 224\({}^{2}\) & 13.2M & 1.9G & 75.1 \\ EdgeViT-XS [54] & ECCV’22 & 256\({}^{2}\) & 6.7M & 2.0G & 77.5 \\ MFViT-T [58] & CVPR 22 & 224\({}^{2}\) & 5.8M & 1.6G & 78.2 \\ MobileNetV5 [59] & ICLR’22 & 256\({}^{2}\) & 5.6M & 2.0G & 78.4 \\ EfficientNetV2-B0 [60] & ICML’21 & 224\({}^{2}\) & 7.4M & 70M & 78.7 \\ EdgeNx-S [55] & ECCV’22 & 224\({}^{2}\) & 5.6M & 963M & 78.8 \\ BoT-S1-50 [61] & CVPR’21 & 224\({}^{2}\) & 20.8M & 4.3G & 79.1 \\ CMFT’17 [38] & CVPR’22 & 160\({}^{2}\) & 9.5M & 600M & 79.1 \\ Swin-2G [29]\({}^{\dagger}\) & ICCV’21 & 224\({}^{2}\) & 12.8M & 2.0G & 79.2 \\ MobileFormer-50SM [19] & CVPR’22 & 224\({}^{2}\) & 14.0M & 508M & 79.3 \\ \hline **DMF-S** & - & 224\({}^{2}\) & 25.1M & **499M** & **79.4** \\ \end{tabular} \end{table} Table 2: Comparison on ImageNet-1k benchmark. light-weight CNNs, ViT variants, and hybrid models with similar accuracy are grouped together for comparison. The proposed DMFs consistently outperform other models with less computational budget. \({}^{\dagger}\) means the results are from [19]. \begin{table} \begin{tabular}{c c|c|c|c} AFKA & RCRKS & \#Params & FLOPs & Top-1 \\ \hline ✗ & ✗ & 10.0M & 194.0M & 73.5 \\ ✗ & ✗ & 12.6M & 196.6M & 73.6 (+ 0.1) \\ ✗ & ✗ & 12.4M & 195.9M & 74.2 (+ 0.7) \\ ✗ & ✗ & 15.1M & 198.0M & 74.6 (+ 1.1) \\ \hline \end{tabular} \end{table} Table 3: **Ablation of AFKA and RCRKS. Here, DMF-XXXS is used and all models are trained on ImageNet dataset for 250 epochs. 
“AFKA” and “RCRKS” mean Attention Features for Kernal Attention module, Residual Connection in Kernel Space, respectively.** trained weights. For Mask R-CNN [71], we train models for both "1\(\times\)" schedule and "3\(\times\)" schedule (36 epochs) with a multi-scale training strategy [29]. We use AdamW [64] optimizer with an initial learning rate of 0.0001, weight decay of 0.0001, and a batch size of 16. We use the popular MMDetection toolbox [72] for experiments with all models. **Results of DMF.** In Table 4 for object detection with RetinaNet, we compare our Dynamic Mobile-Former with efficient networks (CNNs : ShuffleNetV2 [7], MobileNetV3 [5], ResNet [20] ViTs : PVTs [30, 70], ConTNet [56] Hybrid : MobileFormer [19]). Under similar computational cost, our DMF surpasses ShuffleNetV2 [7], MobileNetV3 [5] by 9.4 points of mAP and 8.1 points of mAP respectively. Furthermore, it is worth noting that our DMF consistently outperforms MobileFormer [19] by a margin of 0.7 - 1.0% with less computations. For instance segmentation with Mask R-CNN, we report the performance comparison results of instance segmentation in Table 5. Our DMF-S achieves 42.4 AP\({}^{b}\), 39.0 AP\({}^{m}\), outperforming ResNet50 [20] and PVT-Tiny [30] which consume more FLOPs (59 - 79G). This proves effectiveness of the proposed Dynamic Residual Convolution with light-weight attention features. See Fig. 1(Right) for intuitive comparison. ### Ablations In this section, we conduct ablation studies on each component of DMF-XXS to investigate the effectiveness of the proposed dynamic residual convolution on ImageNet classification. **Effectiveness of Dynamic residual convolution.** As shown in Table 3, (a) inputting attention features to kernel attention module and (b) residual connection in kernel space only cost additional 4M FLOPs, but result in a 1.1% increase in top-1 accuracy over the baseline that use dynamic convolution [10]. While (a) alone has a relatively small impact on performance, it becomes more effective when used in combination with (b). 
It is worth noting that our proposed methods achieve meaningful improvements while incurring \begin{table} \begin{tabular}{l|c c|c c c|c c c} \hline Backbone & \#Params & FLOPs & mAP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{S}\) & AP\({}_{M}\) & AP\({}_{L}\) \\ \hline ShuffleNet-V2 [7] & 10.4M & 161G & 25.9 & 41.9 & 26.9 & 12.4 & 28.0 & 36.4 \\ MobileNet-V3 [5] & 12.3M & 162G & 27.2 & 43.9 & 28.3 & 13.5 & 30.2 & 37.2 \\ MobileFormer-151M [19] & 14.4M & 161G & 34.2 & 53.4 & 36.0 & **19.9** & 36.8 & 45.3 \\ \hline **DMF-XSS** & 17.4M & **160G** & **35.3** & **54.8** & **37.0** & 19.1 & **37.9** & **47.6** \\ \hline ResNet50 [20] & 38.0M & 239G & 36.3 & 55.3 & 38.6 & 19.3 & **40.4** & 48.8 \\ MobileFormer-294M [19] & 16.1M & 164G & 36.6 & 56.6 & 38.6 & 21.9 & 39.5 & 47.9 \\ PVT-V2-B0 [70] & 13.0M & 177G & 37.2 & **57.2** & 39.5 & **23.1** & **40.4** & 49.7 \\ **DMF-XS** & 20.2M & **161G** & **37.3** & **57.2** & **39.8** & 21.4 & 40.1 & **50.1** \\ \hline ResNet101 [20] & 56.7M & 315G & 38.5 & 57.6 & 41.0 & 21.7 & **42.8** & 50.4 \\ PVT-V1-Tiny [30] & 23.0M & 221G & 36.7 & 56.9 & 38.9 & 22.6 & 38.8 & 50.0 \\ ConTNet-M [56] & 27.0M & 217G & 37.9 & 58.1 & 40.2 & **23.0** & 40.6 & 50.4 \\ MobileFormer-508M [19] & 17.9M & 168G & 38.0 & 58.3 & 40.3 & 22.9 & 41.2 & 49.7 \\ **DMF-S** & 23.6M & **165G** & **39.0** & **59.4** & **41.6** & 22.9 & 42.7 & **51.4** \\ \hline \end{tabular} \end{table} Table 4: **Object detection results on COCO val2017.** All models use RetinaNet [18] as basic framework and are trained on COCO [17] train2017 for 12 epochs (1 \(\times\)) with single-scale training inputs. All backbones are pretrained on ImageNet-1K. The FLOPs(G) are measured at resolution 800\(\times\)1333. \begin{table} \begin{tabular}{l|c c|c c c|c c c} \hline \multicolumn{8}{c}{Mask R-CNN 1x} \\ \hline Backbone & \#Params & FLOPs & AP\({}^{b}\) & AP\({}^{b}_{50}\) & AP\({}^{b}_{75}\) & AP\({}^{m}\) & AP\({}^{m}_{50}\) & AP\({}^{m}_{75}\) \\ \hline ResNet50 [20] & 44.0M & 260G & 38.0 & 58.6 & 41.4 & 34.4 & 55.1 & 36.7 \\ PVT-V1-Tiny [30] & 33.0M & 240G & 36.7 & 59.2 & 39.3 & 35.1 & 56.7 & 37.3 \\ ResNet50 + DyConv [10]\({}^{\ddagger}\) & 121.8M & 260G & 39.2 & 60.3 & 42.5 & - & - & - \\ PVT-V2-B0 [70] & 23.0M & 195G & 38.2 & 60.5 & 40.7 & 36.2 & 57.8 & 38.6 \\ **DMF-S** & 34.2M & **181G** & **39.8** & **61.6** & **43.0** & **37.1** & **58.8** & **39.8** \\ \hline \multicolumn{8}{c}{Mask R-CNN 3x} \\ \hline ResNet50 [20] & 44.0M & 260G & 41.0 & 61.7 & 44.9 & 37.1 & 58.4 & 40.1 \\ PVT-V1-Tiny [30] & 33.0M & 240G & 39.8 & 62.2 & 43.0 & 37.4 & 59.3 & 39.9 \\ **DMF-S** & 34.2M & **181G** & **42.4** & **63.9** & **46.3** & **39.0** & **61.2** & **41.7** \\ \hline \end{tabular} \end{table} Table 5: **Instance segmentation results on COCO val2017.** All models use Mask R-CNN [71] as basic framework and are trained on COCO [17] train2017 for 12 epochs (1 \(\times\)) with single-scale training inputs and 36 epochs(3 \(\times\)) with multi-scale training inputs. All backbones are pretrained on ImageNet-1K. The FLOPs(G) are measured at resolution 800\(\times\)1333. \({}^{\ddagger}\) means the results are from [48]. only negligible computational costs. **Activation function for attention scores.**[9] proposed the use of sigmoid function to generate attention scores \(\pi_{k}(x)\) in Eq.2. However incorporating sigmoid layer can result in a significantly large kernel space, which can make learning of the attention scores challenging. 
[10] stabilized the learning process by reducing the kernel space using softmax and demonstrated better performance with fewer kernels and FLOPs. But, as shown in the table 6, we utilize the sigmoid function to enhance performance by increasing the kernel space, as our residual connection in kernel space helps to alleviate optimization difficulties. **Initialization for \(\mathbf{W_{static}}\).** We assume that the previous approach [9, 10] of using the dynamic convolution module to calculate both input-agnostic and input-dependent kernels in the kernel attention module results in optimization difficulties. Therefore, to address this issue, we introduce the residual connection that connects input-specific dynamic kernel to the existing input-agnostic kernel. The results presented in table 7 also align with the philosophy of the residual connection [20]. Note that input-agnostic kernel \(\mathbf{W_{input\_agnostic}}\) is initialized randomly. **Static kernel number.** We not only use temperature annealing to achieve stable training, but also utilize residual connection in kernel space to ease the optimization and expand the kernel space. Table 8 demonstrates that our dynamic residual convolution approach enables us to increase the kernel space by increasing the number of static kernels while maintaining stable training and improved performance. Therefore, all variants of DMF take 8 static kernels, which is zero initialized and use sigmoid activation for generating attention scores. ## 5 Limitations Our Dynamic Mobile-Former (DMF) has fewer FLOPs, but there are other factors that need to be considered for fast inference. For example, if memory access cost is high, then even if FLOPs are low, it may not be fast enough. In terms of memory access cost, the channel expansion in DY-Mobile and IRFFN is a bottleneck for DMF. For example, first point-wise convolution in DY-Mobile increase channels by 6 times, result in much higher memory access that can cause non-negligible delay and slow down the overall computations. Therefore, in future work, we will explore ways to reduce memory while actively utilizing our dynamic residual convolution. Additionally, our convolution module is computationally efficient but not parametrically efficient. Thus, exploring the application of omni-dimensional dynamic convolution [48] will also be an important research direction. Lastly, [73] introduce partial convolution more efficient than group convolution (depth-wise convolution) in terms of FLOPS(short for **f**oating-point **o**perations per second). We will investigate the combination of these newly proposed modules and our methods. ## 6 Conclusion In this paper, we present a Dynamic Mobile-Former(DMF), a model that enhances the potential of dynamic convolution by combining it with efficient operators in a synergistic way. Our proposed dynamic residual convolution enhances stable learning by adding input-specific kernel to input-agnostic kernel. Based on this, we can further increase the model's performance by actively expanding the kernel space. By integrating lightweight attention and enhanced dynamic convolution, our DMF achieves high efficiency and strong performance on a series of vision tasks. In particular, DMF demonstrates comparable performance to state-of-the-art models in image classification, and outperforms light-weight CNNs and vision transformer variants with significantly fewer FLOPs in object detection and instance segmentation. 
By simplifying the optimization process and effectively utilizing the capabilities of dynamic convolution, our DMF demonstrates the potential for further improvements in mobile vision applications.
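For readers who want to experiment with the idea, the following is a minimal, illustrative PyTorch sketch (not the released code at the repository above) of the dynamic residual convolution of Eqs. 1-2: sigmoid attention scores are computed from the concatenation of the globally pooled input and the first global token, used to mix K zero-initialized static kernels, and the result is added to a randomly initialized input-agnostic kernel. The attention-module width and the example layer sizes are assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicResidualConv2d(nn.Module):
    """Sketch of DYR-conv (Eqs. 1-2): W(x) = W_agnostic + sum_k pi_k(x) * W_static^k."""

    def __init__(self, in_ch, out_ch, kernel_size=1, K=8, token_dim=192, groups=1):
        super().__init__()
        self.groups = groups
        # K dynamic kernels are zero-initialized (residual view); the
        # input-agnostic kernel is initialized randomly.
        self.w_static = nn.Parameter(torch.zeros(K, out_ch, in_ch // groups,
                                                 kernel_size, kernel_size))
        self.w_agnostic = nn.Parameter(0.02 * torch.randn(out_ch, in_ch // groups,
                                                          kernel_size, kernel_size))
        # Kernel attention on [global-average-pooled features ; first global token];
        # the hidden width here is an assumption, not taken from the paper.
        self.attn = nn.Sequential(nn.Linear(in_ch + token_dim, 4 * K),
                                  nn.ReLU(inplace=True),
                                  nn.Linear(4 * K, K))

    def forward(self, x, token, temperature=1.0):
        b, c, h, w = x.shape
        pooled = x.mean(dim=(2, 3))                              # (B, C)
        scores = self.attn(torch.cat([pooled, token], dim=1))    # (B, K)
        pi = torch.sigmoid(scores / temperature)                 # sigmoid scores, annealed early on
        # Per-sample residual kernel (Eq. 2).
        kernels = self.w_agnostic.unsqueeze(0) + torch.einsum(
            'bk,koihw->boihw', pi, self.w_static)                # (B, O, I/g, kh, kw)
        out_ch, in_per_g, kh, kw = kernels.shape[1:]
        # Apply one kernel per sample via a single grouped convolution.
        y = F.conv2d(x.reshape(1, b * c, h, w),
                     kernels.reshape(b * out_ch, in_per_g, kh, kw),
                     padding=kh // 2, groups=b * self.groups)
        return y.reshape(b, out_ch, y.size(2), y.size(3))

# Example: a 1x1 dynamic residual group convolution as used inside DY-Mobile.
layer = DynamicResidualConv2d(in_ch=64, out_ch=128, kernel_size=1, K=8, groups=4)
out = layer(torch.randn(2, 64, 14, 14), torch.randn(2, 192))     # -> (2, 128, 14, 14)
```

Batching the per-sample kernels into one grouped convolution is only an implementation convenience for the sketch; the mathematics is exactly the attention-weighted kernel mixing plus residual kernel described in Section 3.3.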
2305.10401
Data Extraction via Semantic Regular Expression Synthesis
Many data extraction tasks of practical relevance require not only syntactic pattern matching but also semantic reasoning about the content of the underlying text. While regular expressions are very well suited for tasks that require only syntactic pattern matching, they fall short for data extraction tasks that involve both a syntactic and semantic component. To address this issue, we introduce semantic regexes, a generalization of regular expressions that facilitates combined syntactic and semantic reasoning about textual data. We also propose a novel learning algorithm that can synthesize semantic regexes from a small number of positive and negative examples. Our proposed learning algorithm uses a combination of neural sketch generation and compositional type-directed synthesis for fast and effective generalization from a small number of examples. We have implemented these ideas in a new tool called Smore and evaluated it on representative data extraction tasks involving several textual datasets. Our evaluation shows that semantic regexes can better support complex data extraction tasks than standard regular expressions and that our learning algorithm significantly outperforms existing tools, including state-of-the-art neural networks and program synthesis tools.
Qiaochu Chen, Arko Banerjee, Çağatay Demiralp, Greg Durrett, Isil Dillig
2023-05-17T17:46:26Z
http://arxiv.org/abs/2305.10401v2
# Data Extraction via Semantic Regular Expression Synthesis ###### Abstract. Many data extraction tasks of practical relevance require not only syntactic pattern matching but also semantic reasoning about the content of the underlying text. While regular expressions are very well suited for tasks that require only syntactic pattern matching, they fall short for data extraction tasks that involve both a syntactic and semantic component. To address this issue, we introduce _semantic regexes_, a generalization of regular expressions that facilitates combined syntactic and semantic reasoning about textual data. We also propose a novel learning algorithm that can synthesize semantic regexes from a small number of positive and negative examples. Our proposed learning algorithm uses a combination of neural sketch generation and compositional type-directed synthesis for fast and effective generalization from a small number of examples. We have implemented these ideas in a new tool called Smore and evaluated it on representative data extraction tasks involving several textual datasets. Our evaluation shows that semantic regexes can better support complex data extraction tasks than standard regular expressions and that our learning algorithm significantly outperforms existing tools, including state-of-the-art neural networks and program synthesis tools.
In addition to standard regular expression operators, semantic regexes provide a _semantic pattern matching construct_ which accepts strings that (a) belong to a category \(\tau\) (e.g., business, location, person) and (b) satisfy a predicate \(\phi\) when interpreted as an instance of type \(\tau\). For example, this construct can be used to match strings that (a) correspond to a City (type \(\tau\)), and (b) further satisfy some additional criterion, such as being in the United States or in the state of California (predicate \(\phi\)). Under the hood, semantic pattern matching employs large language models like GPT-3 (Brown et al., 2020; Chowdhery et al., 2022) to test membership in some category \(\tau\) but further allows refining the query result using a logical predicate \(\phi\). In this sense, one can view our semantic regexes as deciding membership in a refinement type and then combining the matching strings using standard regex operators. Beyond proposing the notion of semantic regexes, another key contribution of this paper is a new synthesis algorithm for learning semantic regexes from positive and negative examples.
The learning problem in this context is more challenging than traditional regex synthesis because semantic regexes are much more expressive than standard regexes. As a result, the hypothesis space in this setting is very large, which has two important consequences: * First, the semantic regex learning problem cannot be solved using a purely search-based approach due to the sheer size of the search space. In fact, the search space is theoretically not even bounded because our semantic regex language does not restrict the types \(\tau\) to a pre-defined vocabulary. * Second, due to the extremely large hypothesis space, there are typically many semantic regexes consistent with a small number of examples. Hence, to find the _intended_ semantic regex, our learning algorithm must have a strong inductive bias towards user intent. The synthesis technique proposed in this paper surmounts these challenges using a novel combination of three key ideas: 1. **Neural sketch generation:** Our learning algorithm uses a large language model (GPT-3) to generate a sketch of the desired semantic regex. Our key observation is that LLMs are well-suited to this task because they are effective at identifying semantic commonalities between the positive examples and inferring appropriate types to be used within the semantic pattern matching constructs. 2. **Compositional synthesis:** Our learning algorithm decomposes the synthesis task into multiple simpler sub-problems. Because the holes (i.e., unknowns) in the generated sketches are _typed_, the synthesis technique lends itself to a compositional solution, where we can synthesize each hole largely (though not entirely) independently. 3. **Type-directed search:** The presence of type information in the sketches makes it possible to fill each hole in a type-directed way. Specifically, we utilize a type system with subtype polymorphism to infer the space of _valid_ completions of a hole.

Figure 1: Schematic overview of our approach.

Figure 1 shows the workflow of our proposed learning approach, which first utilizes the provided examples to generate a semantic regex _sketch_ using GPT-3. In the next step, our approach searches for completions of the sketch by (a) decomposing the overall problem into several subproblems and (b) using type-directed synthesis to solve each subproblem. If the sketch has a valid completion, the resulting semantic regex is returned to the user. Otherwise, our approach analyzes the root cause of failure and uses this information to query the language model for a more accurate sketch. We have implemented the proposed technique in a tool called Smore and evaluated it on information extraction tasks involving several different datasets. Our evaluation shows that these data extraction tasks can be successfully automated using our proposed semantic regexes and that our learning algorithm is quite effective for automating the desired data extraction task. In particular, our approach achieves an average \(F_{1}\) score of 0.87 on the test data, while prior data extraction techniques achieve a maximum \(F_{1}\) score of 0.65. To summarize, this paper makes the following contributions: * We propose _semantic regular expressions_ to combine the flexibility of syntactic pattern matching with semantic queries involving types and logical predicates. * We describe a new learning technique for synthesizing semantic regexes from positive and negative examples.
Our approach combines the power of large language models with type-directed synthesis for effective automation of data extraction tasks. * We evaluate our tool, Smore, on representative data extraction tasks and show that semantic regexes are useful for these tasks and that our learning approach outperforms other data extraction techniques in terms of average \(F_{1}\) score. ## 2. Overview In this section, we illustrate our technique using the motivating example shown in Figure 2, which contains information about artworks exhibited at a museum.

Figure 2. Dataset about pieces of art exhibited in a museum.

Given this dataset, suppose that a user wants to extract all European artists who were born before the 20th century and whose name contains Thomas. This data extraction task is challenging because it requires both syntactic and semantic reasoning: * **Syntax:** Since each row has the form "Artist Name, Nationality, Birth Year - Death Year", we first need to _syntactically_ parse the input string into its four constituent fields and check whether the first field (corresponding to the artist name) contains "Thomas". * **Semantics:** After performing syntactic pattern matching, we then need to perform semantic reasoning about the contents of each row to understand whether (a) the first field describes a name, (b) the artist's nationality is European, and (c) they were born before the 20th century. ### Semantic Regexes Our proposed _semantic regex_ concept is a natural fit for the data extraction task illustrated in this example. Semantic regexes combine the convenience of regexes for syntactic pattern matching with the power of semantic reasoning about data types. In addition to supporting the standard regex operators (concatenation, disjunction, Kleene star), semantic regexes provide the following _semantic_ pattern matching construct, written using a refinement-type-like notation: \[\{v:\tau\mid\phi\}\] This construct matches any string that is semantically of type \(\tau\) and that further satisfies the (optional) logical qualifier \(\phi\). For instance, going back to our example, recall that we need to pattern match strings that correspond to a European country. This can be expressed using the semantic regex \(\{v:\text{Country}\mid v\in\text{Europe}\}\), which, for example, matches the strings "France", "Britain" and "North Netherlands", but fails on the strings "United States", "Korea" etc. Similarly, we can express the desired constraint on the artists' birth year using the following semantic regex: \[\{v:\text{Year}\mid v<1900\}\] which matches strings that (a) correspond to a year and (b) whose value is less than or equal to 1899.
Putting all of this together, our desired data extraction task can be accomplished using an overall semantic regex of the following form, where the qualifier restricting the artist name (to names containing "Thomas") is elided: \[\{v:\text{Name}\mid\ldots\}\cdot\text{", "}\cdot\{v:\text{Country}\mid v\in\text{Europe}\}\cdot\text{", "}\cdot\{v:\text{Year}\mid v<1900\}\cdot\text{"-"}\cdot\{v:\text{Year}\}\]

Starting with the GPT-3-synthesized sketch, our method decomposes the synthesis problem into multiple sub-problems, one for each hole in the sketch, and performs a type-directed search to complete each hole. For this example, our synthesis method infers the following positive examples for each hole: \begin{tabular}{c c c} \hline \hline \(\{\Box:\text{Name}\}\) & \(\{\Box:\text{Country}\}\) & \(\{\Box:\text{Year}\}\) \\ \hline John Thomas Young Gilroy & Britain & 1898-1985 \\ Thomas Hudson & Britain & 1701-1779 \\ Thomas Couture & France & 1815-1879 \\ \hline \end{tabular} Note that it is not possible to propagate negative examples for individual holes, as it suffices for the synthesized regex for _one_ hole to reject its corresponding string, but we do not a priori know which one. In particular, for this example, it would _not_ be accurate to treat "Alma Thomas", "Sandro Botticelli", and "Thomas Nolle" as negative examples for the first hole. Given this decomposition, our approach tries to synthesize a regex \(r_{i}\) for each hole \(\{\Box:\tau_{i}\}_{i}\) such that (a) the type of \(r_{i}\) is a subtype of \(\tau_{i}\) and (b) \(r_{i}\) matches all of its corresponding positive examples. For this example, our synthesis algorithm can immediately deduce that the sketch is incorrect since no subtype of Year can match the corresponding positive examples for the third hole. To repair the sketch, our learning algorithm localizes parts of the sketch for which synthesis failed (in this case, Year) and synthesizes a different sketch for the failing part. In the next iteration, suppose that we consider the following correct sketch: \[\{\Box:\text{Name}\}\cdot\text{", "}\cdot\{\Box:\text{Country}\}\cdot\text{", "}\cdot\{\Box:\text{Year}\}\cdot\text{"-"}\cdot\{\Box:\text{Year}\}\] Our synthesis algorithm tries to independently find a completion of each hole that has the appropriate type and satisfies the corresponding decomposed positive examples. As before, the positive examples are used to prune the search space: for example, since the second hole must match the strings "Britain" and "France", the synthesizer can rule out completions such as \(\{v:\text{Country}\mid v\in\text{Asia}\}\) and \(\{v:\text{Country}\mid v\in\text{Asia}\ \wedge\ldots\}\). Similarly, type information in the sketch is critical, enabling the synthesizer to avoid enumerating useless sub-programs.
For instance, when synthesizing the last hole in the sketch, the synthesizer would not enumerate programs such as \(\{v:\text{Month}\mid\ldots\}\cup\{v:\text{Date}\mid\ldots\}\), since this regex can match strings that are not of type Year. It would, however, consider regexes of the form \(\{v:\text{Year}\mid v\leq\ldots\}\), as the strings that are matched by this regex would be a subtype of Year. After independently synthesizing each hole, the algorithm checks whether the resulting regex \(r\) rejects all negative examples and, if so, returns \(r\) as a solution. Otherwise, it generates a different regex by looking for a different completion for at least one of the holes.

## 3. Semantic Regular Expressions

In this section, we describe the syntax and semantics of our proposed semantic regular expression language. At a high level, semantic regexes combine standard regular expression operators with pre-trained neural networks that identify semantic types and provide knowledge about the world.

**DSL Syntax.** The syntax of our semantic string matching language is presented in Figure 3. A semantic regex \(\rho\) takes as input a string \(s\) and returns a boolean indicating whether there is a match. Semantic regexes include all the standard regular expression constructs, including constant strings \(c\), character classes like letters and numbers (denoted \(cc\)), concatenation (\(\cdot\)), complement (\(\neg\)), union (\(\cup\)), intersection (\(\cap\)), and Kleene star (\(*\)). Additionally, the notation \(r\{k_{1}\}\) denotes repetition of \(r\) exactly \(k_{1}\) times and \(r\{k_{1},k_{2}\}\) denotes \(r\) repeated between \(k_{1}\) and \(k_{2}\) times. As standard, \(r\)? indicates an optional occurrence of \(r\), and \(r+\) denotes one or more occurrences of \(r\). In addition to these standard regex constructs, Figure 3 includes two _semantic pattern matching constructs_, denoted as \(\{v:f(\tau_{q})\}\) and \(\{v:f(\tau_{b})\mid\phi\}\), where \(f\) is an (optional) built-in function, \(\tau_{b}\) is a built-in type (Integer, Month, etc.), and \(\tau_{q}\) is an _arbitrary_ (user-defined) type. Note that the DSL does not place any restrictions on \(\tau_{q}\), so the user can provide any arbitrary string to define their own type. However, we only allow a logical qualifier \(\phi\) to be used for built-in types.

In the most basic form, the construct \(\{v:\tau\}\) matches strings that are semantically of type \(\tau\), where \(\tau\) can either be a built-in or user-defined type. For example, \(\{v:\mathsf{Place}\}\) matches any string that corresponds to a geographical location. The optional function \(f\) used in this construct allows refining the query result by performing additional semantic-preserving string processing. For example, \(\{v:\mathsf{toUpper}(\mathsf{Place})\}\) matches any string that corresponds to a location name in upper case letters (e.g., "NEW YORK"). More generally, \(\{v:f(\tau)\}\) matches a string \(s\) if \(s\) is equal to \(f(s^{\prime})\) where \(s^{\prime}\) is a string of type \(\tau\). As another example, \(\{v:\mathsf{abbreviate}[.](\mathsf{Place})\}\) matches the strings "N.Y.", "S.F." etc. because the function \(\mathsf{abbreviate}[c]\) abbreviates a string through initialism, using the character \(c\) as a separator. When performing semantic pattern matching using built-in types \(\tau_{b}\), one can additionally use a logical qualifier \(\phi\).
In particular, \(\{v:\tau_{b}\mid\phi\}\) matches those strings that are of type \(\tau_{b}\) and additionally satisfy predicate \(\phi\). To check whether a string \(s\) satisfies \(\phi\), \(s\) is first parsed as an instance \(o\) of type \(\tau_{b}\) and then checked for conformance against \(\phi\). Note that these semantics justify why logical qualifiers are only allowed with built-in types: because we need to parse the string as an instance of \(\tau_{b}\), there must be some built-in mechanism for deserializing the string, which only makes sense for pre-defined types. As an example, the semantic regex \(\{v:\mathsf{Float}\mid v<0.1\}\) matches strings that can be interpreted as a floating point number whose value is less than 0.1 (e.g., 0.0051). As another example, \(\{v:\mathsf{toUpper}(\mathsf{City})\mid v\in\mathsf{Europe}\}\) matches strings, such as "ROME", that (a) correspond to European cities and (b) are in upper case letters.

**DSL Semantics.** Figure 4 presents the formal semantics of our DSL for semantic string matching, where \(\llbracket r\rrbracket\) denotes the set of all strings that \(r\) matches.1 Observe that the semantics of the DSL is parametrized by a helper function called \(\mathsf{SemanticType}\), which is implemented by a pre-trained neural network and which is used to check whether the type of a string \(s\) is \(\tau\). Hence, the construct \(\{v:f(\tau_{q})\}\) matches all strings \(s\) such that (a) \(s=f(s^{\prime})\) for some string \(s^{\prime}\), and (b) where \(\mathsf{SemanticType}(s^{\prime})=\tau_{q}\). Similarly, \(\{v:f(\tau_{b})\mid\phi\}\) matches all strings \(s\) such that (a) \(s=f(s^{\prime})\) for some string \(s^{\prime}\), (b) \(s^{\prime}\) is an instance of built-in type \(\tau_{b}\), and (c) when \(s^{\prime}\) is parsed into an object \(o\) of type \(\tau_{b}\), \(o\) satisfies predicate \(\phi\).

Footnote 1: Semantics of functions are provided in the appendix.

Figure 3. Semantic string matching language. \(c\) is a constant string, \(cc\) is a character class (e.g. letters). \(\tau_{b}\) is a built-in base type, and \(\tau_{q}\) is an arbitrary base type in our type system. Also, \(k\in\mathbb{Z}\), \(n\in\mathbb{R}\), and \(a\in\mathsf{Attributes}\), where \(\mathsf{Attributes}\) is type-dependent.

\[\begin{array}{rcl}\llbracket\lambda s.\texttt{match}(s,r)\rrbracket s&=&s\in\llbracket r\rrbracket\\ \llbracket c\rrbracket&=&\{c\}\\ \llbracket\neg r\rrbracket&=&\{s\mid s\notin\llbracket r\rrbracket\}\\ \llbracket r^{0}\rrbracket&=&\{\epsilon\}\\ \llbracket r^{i}\rrbracket&=&\{s_{1}\cdot s_{2}\mid s_{1}\in\llbracket r^{i-1}\rrbracket,s_{2}\in\llbracket r\rrbracket\}\\ \llbracket r*\rrbracket&=&\bigcup_{n\geq 0}\llbracket r^{n}\rrbracket\\ \llbracket r_{1}\cdot r_{2}\rrbracket&=&\{s_{1}\cdot s_{2}\mid s_{1}\in\llbracket r_{1}\rrbracket,s_{2}\in\llbracket r_{2}\rrbracket\}\\ \llbracket r_{1}\cup r_{2}\rrbracket&=&\llbracket r_{1}\rrbracket\cup\llbracket r_{2}\rrbracket\\ \llbracket r_{1}\cap r_{2}\rrbracket&=&\llbracket r_{1}\rrbracket\cap\llbracket r_{2}\rrbracket\\ \llbracket\{v:f(\tau_{q})\}\rrbracket&=&\{\llbracket f\rrbracket\,s\mid\textsf{SemanticType}(s)=\tau_{q}\}\\ \llbracket\{v:f(\tau_{b})\mid\phi\}\rrbracket&=&\{\llbracket f\rrbracket\,s\mid\textsf{SemanticType}(s)=\tau_{b}\wedge\textsf{Cast}{<}\tau_{b}{>}(s)=o\wedge\phi(o)\}\end{array}\]

Figure 4. Semantics of the matching part of the DSL.
Here, SemanticType is an oracle that determines the semantic type of string \(s\), and Cast<\(\tau\)> casts string \(s\) to an object \(o\) of type \(\tau\).

\[\begin{array}{rcl}\tau&:=&\text{Any}\mid\text{Optional}(\tau^{\prime})\mid\tau^{\prime}\\ \tau^{\prime}&:=&\text{Semantic}(\tau_{s})\mid\text{CharSeq}\\ \tau_{s}&:=&\text{Person}\mid\text{Organization}\mid\text{Product}\mid\text{Event}\mid\text{Work of Art}\\ &&\mid\text{Number}\mid\text{Integer}\mid\text{Float}\\ &&\mid\text{Date}\mid\text{Year}\mid\text{Month}\mid\text{Day}\\ &&\mid\text{Time}\mid\text{Hour}\mid\text{Minute}\mid\text{Second}\\ &&\mid\text{Place}\mid\text{Location}\mid\text{Nationality}\mid\text{Country}\mid\text{City}\end{array}\]

Figure 5. Type syntax.

**Example 3.1**.: The semantic regex \(\{o:\text{Date}\mid o.\texttt{month}=5\}\) matches all strings that represent dates in May. In particular, any string matching a Date is first parsed into a datetime object and its month field is checked for being equal to 5. Examples of strings matched by this regex include "May 2023" and "2023-05-01".

## 4. Overview of the Type System

While our semantic regex DSL is not _explicitly_ typed, our approach utilizes a type system to facilitate effective synthesis. In this section, we give an overview of the type system.

### Type Syntax

The syntax of our type system is shown in Figure 5, where Any corresponds to the top element in the type system and CharSeq indicates any string without semantic meaning, such as "1a2b3c", ".,3d," etc. The type Semantic(\(\tau_{s}\)) indicates strings that can be interpreted as an instance of \(\tau_{s}\) (e.g., Date). In addition, the type Optional(\(\tau\)) includes both \(\epsilon\) (empty string) as well as any string of type \(\tau\). Semantic types \(\tau_{s}\) include both built-in types \(\tau_{b}\) (e.g., Integer, Float, Date) as well as user-defined types \(\tau_{q}\). Hence, the type syntax is _not_ fixed a priori and is parametrized over any user-defined types that occur in the program.

### Subtyping

Our type system supports subtype polymorphism because there is a natural subtyping relation between many entities of interest. We formalize the subtyping relation in Figure 6 using the standard judgment \(\vdash\tau_{1}<:\tau_{2}\), indicating that \(\tau_{1}\) is a subtype of \(\tau_{2}\). In Figure 6, the first three rules are straightforward and establish Any as the top element of the type system. The following rules (until Trans) show the subtyping relation involving built-in semantic types. For example, according to these rules, Year, Month, and Day are all subtypes of the more generic Date type. The Trans rule states the transitivity of the subtyping relation and the Semantic rule lifts the subtyping relation to Semantic(\(\tau\)). The two rules for Optional are also standard: Optional-Width states that any type \(\tau\) is a subtype of Optional(\(\tau\)), and the other rule lifts the subtyping relation to optional types. Finally, the last rule handles subtyping between user-defined types. If the set of objects represented by \(\tau_{1}\) is a subset of those represented by \(\tau_{2}\), we have \(\tau_{1}<:\tau_{2}\). In practice, we perform this check by querying a semantic ontology (specifically, DBPedia [1] in our implementation).

### Typing Rules

We present the typing rules for assigning types to DSL terms in Figure 7. These rules derive judgments of the form \(t:\tau\) indicating that term \(t\) has type \(\tau\).
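Before walking through the individual rules, the following minimal sketch (our own simplification, not Smore's code) shows one way the type syntax of Figure 5 and the subtyping check of Figure 6 could be encoded; the built-in lattice is abbreviated to a few direct edges, and the user-defined-type case is stubbed out where the real system would query an ontology such as DBpedia.

```python
from dataclasses import dataclass

# Type syntax: Any | Optional(t) | Semantic(name) | CharSeq
@dataclass(frozen=True)
class Any: pass

@dataclass(frozen=True)
class CharSeq: pass

@dataclass(frozen=True)
class Semantic:
    name: str            # e.g. "Year", "Date", "Country", or a user-defined type

@dataclass(frozen=True)
class Optional_:         # trailing underscore avoids clashing with typing.Optional
    inner: object

# A few direct edges of the built-in subtype lattice (abbreviated); a full
# implementation would also take the transitive closure (rule Trans).
BUILTIN_EDGES = {("Year", "Date"), ("Month", "Date"), ("Day", "Date"),
                 ("Integer", "Number"), ("Float", "Number"),
                 ("City", "Place"), ("Country", "Place")}

def ontology_subset(t1: str, t2: str) -> bool:
    """Stub for the ontology query used for user-defined types (e.g., DBpedia)."""
    return False

def is_subtype(t1, t2) -> bool:
    if t1 == t2 or isinstance(t2, Any):
        return True
    if isinstance(t1, Optional_) and isinstance(t2, Optional_):
        return is_subtype(t1.inner, t2.inner)       # lift <: to Optional
    if isinstance(t2, Optional_):
        return is_subtype(t1, t2.inner)             # Optional-Width: t <: Optional(t')
    if isinstance(t1, Semantic) and isinstance(t2, Semantic):
        return (t1.name, t2.name) in BUILTIN_EDGES or ontology_subset(t1.name, t2.name)
    return False

print(is_subtype(Semantic("Year"), Semantic("Date")))              # True
print(is_subtype(Semantic("Year"), Optional_(Semantic("Date"))))   # True
print(is_subtype(CharSeq(), Semantic("Date")))                     # False
```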
Note that Figure 7 only shows a representative subset of the typing judgments; the full set is presented in the Appendix under supplementary materials.

Figure 6. Subtyping relations. \(\gamma(\tau)\) is the concretization function denoting the set of objects represented by \(\tau\).

Figure 7. Typing rules.

**Constant and characters.** The first four rules show how to assign types to string constants and character classes. For constants, we determine their type by querying a semantic oracle (GPT-3 in our implementation) and assign CharSeq if the oracle does not return a semantic type.2 Character classes only have semantic meaning for numbers, so we assign the Semantic(Number) type if the character is a number, and CharSeq otherwise.

**Semantic matching.** The MatchSem rules present the typing rules for the semantic matching construct. The type of the expression is identical to the type specified as part of the program syntax.

**Union and intersection.** The typing rules for union and intersection are presented in the Union and And rules, respectively. These rules utilize the \(\vee\) and \(\wedge\) operators, which are defined in Figure 8. At a high level, the meet and join of two types are determined as the greatest lower bound (\(\sqcap\)) and the least upper bound (\(\sqcup\)), respectively, in the corresponding type lattice. However, there is a special case for the CharSeq type: Intuitively, taking the intersection of a semantic type \(\tau\) and CharSeq further refines the objects of type \(\tau\) by placing an _additional_ syntactic restriction; hence, Semantic(\(\tau\)) \(\wedge\) CharSeq is defined as Semantic(\(\tau\)). In contrast, the join of Semantic(\(\tau\)) and CharSeq is the top element Any, as expected.

**Not and concatenation.** The Not and Concat rules are two cases where specific types cannot be inferred. Even though the type of their arguments is known, the resulting type cannot be determined, resulting in an output type of Any.

\[\tau\wedge\text{Any} = \tau\]
\[\tau_{1}\wedge\text{Optional}(\tau_{2}) = \tau_{1}\wedge\tau_{2}\]
\[\text{Optional}(\tau_{1})\wedge\text{Optional}(\tau_{2}) = \text{Optional}(\tau_{1}\wedge\tau_{2})\]
\[\text{Semantic}(\tau_{1})\wedge\text{Semantic}(\tau_{2}) = \text{Semantic}(\tau_{1}\sqcap\tau_{2})\]
\[\text{Semantic}(\tau)\wedge\text{CharSeq} = \text{Semantic}(\tau)\]
\[\tau\vee\text{Any} = \text{Any}\]
\[\tau_{1}\vee\text{Optional}(\tau_{2}) = \text{Optional}(\tau_{1}\vee\tau_{2})\]
\[\text{Optional}(\tau_{1})\vee\text{Optional}(\tau_{2}) = \text{Optional}(\tau_{1}\vee\tau_{2})\]
\[\text{Semantic}(\tau_{1})\vee\text{Semantic}(\tau_{2}) = \text{Semantic}(\tau_{1}\sqcup\tau_{2})\]
\[\text{Semantic}(\tau)\vee\text{CharSeq} = \text{Any}\]

## 5. Learning semantic regexes from examples

In this section, we describe our synthesis algorithm for solving the semantic string matching problem from examples. Our method involves two main steps: generating a _typed sketch_ from the positive examples and completing the sketch using an enumerative search-based synthesizer. If sketch completion fails, our method refines the sketch and performs synthesis using the new sketch. In the rest of this section, we first provide some preliminary information, then present our top-level learning algorithm, and then describe each of its key components.

### Sketch Language

Our learning algorithm crucially relies on the notion of a _typed sketch_ whose syntax is shown in Figure 9.
At a high level, the sketch language extends our semantic regex DSL by allowing a "typed hole" (denoted \(\{\square:\tau\}\)) which represents an arbitrary expression of type \(\tau\). Given a sketch \(S\), we use the notation \(\llbracket S\rrbracket\) to denote the set of all semantic regexes that can be obtained by completing holes in \(S\) by valid expressions of the corresponding type. Figure 9 also defines sketch semantics in terms of the space of all programs they represent.

Example 5.1.: Consider the sketch \(\{\square:\text{Organization}\}\cdot\text{".com"}\), which represents the space of semantic regexes that match strings consisting of an organization name followed by the string constant ".com". Possible completions of this sketch include, but are not limited to, the semantic regex \(\{v:\text{Company}\}\cdot\text{".com"}\).

Figure 8. Type intersection and union.

### Decomposing the Specification

To perform compositional synthesis, our learning algorithm decomposes the global specification into a _set_ of specifications, one for each hole in the sketch. In this section, we describe the GetNextDecomp procedure for specification decomposition using the inference rules in Figure 11, which derive judgments of the following shape: \[\mathcal{E}^{+}\vDash\Psi\] The meaning of this judgment is that, given positive examples \(\mathcal{E}^{+}\), \(\Psi\) is a _possible_ decomposition that maps each hole in the sketch to its corresponding positive examples. As mentioned earlier, the decomposition is, in general, _not_ unique, so there can be multiple decompositions \(\Psi_{1},\ldots,\Psi_{n}\) for a given sketch \(S\).

We now explain the decomposition rules from Figure 11 in more detail. The first rule, labeled Sketch-Match, considers a program sketch with top-level operator \(f\) (e.g., concatenation or intersection) and sub-sketches \(S_{1},\ldots,S_{n}\). To infer a specification for each hole in \(S\), we first generate a regex \(r^{\star}\) that over-approximates \(S\) (via the call to OverApprox). Intuitively, OverApprox generates a regex \(r^{\star}\) such that for _any_ \(r\in[\![S]\!]\), \(r^{\star}\) accepts every string that is accepted by \(r\). Because our over-approximation approach is exactly the same as used in prior work (Chen et al., 2020; Lee et al., 2016), we do not formally present it, but the basic idea is to replace each hole that appears under an even (resp. odd) number of negation symbols by the regex \(.*\) (resp. \(\emptyset\)). This method guarantees that the resulting regex \(r^{\star}\) will accept every string that is accepted by any instantiation of \(S\). Furthermore, note that \(r^{\star}\) is a standard regex without any semantic pattern matching constructs, as all holes have been replaced by either the universal or the empty set. Next, once we generate the over-approximation \(r^{\star}\), we infer positive examples for each sub-sketch \(S_{1},\ldots,S_{n}\) used in \(S\). To do so, for each positive example \(e\), we use a standard regex matching tool to find a parse of \(e\) into the format \(f(S_{1},\ldots,S_{n})\) with corresponding sub-strings \(e_{i}\) for each sub-sketch \(S_{i}\). After propagating each example \(e_{i}\) to nested sketch \(S_{i}\) and recursively applying the inference rules, we obtain the decomposed specifications \(\Psi_{1},\ldots,\Psi_{n}\) for each of the sub-sketches in \(S\).
These mappings are finally combined via the call to the Merge function, defined as follows: \[\mathtt{Merge}(\Psi_{1},\ldots,\Psi_{n})=\left\{\begin{array}{ll}\bot&\text{if }\exists i\in[1,n].\;\Psi_{i}=\bot\\ \biguplus_{i=1}^{n}\Psi_{i}&\text{otherwise}\end{array}\right.\] where the notation \(\uplus\) indicates disjoint union.

Figure 11. Procedure for GetNextDecomp\((S,\mathcal{E}^{+})\). OverApprox\((S)\) returns a concrete regex that over-approximates \(S\). Merge returns \(\bot\) if one of its arguments is \(\bot\); otherwise it disjointly unions all its arguments.

The next rule, labeled Sketch-NoMatch, corresponds to an infeasible sketch or decomposition. Because every string accepted by \(r\in\llbracket S\rrbracket\) must also be accepted by the over-approximation \(r^{\star}\), the algorithm yields \(\bot\) to indicate a failure when \(r^{\star}\) doesn't match at least one of the positive examples. The remaining rules correspond to the base cases of the recursive decomposition algorithm. Specifically, the rules prefixed with Concrete consider the case where the sketch is a concrete regex \(r\) without a hole. Here, we check the feasibility of \(r\) by testing whether it matches all of the positive examples. If so, the sketch is feasible, and the algorithm returns the empty mapping \(\emptyset\). Otherwise (the Concrete-Infeasible case), the algorithm returns \(\bot\) to indicate failure. The final two rules correspond to base cases for a hole and utilize the fact that sketches are typed. In particular, given a hole of type \(\tau\), if there exists a positive example \(e\in\mathcal{E}^{+}\) whose type is not \(\tau\), this indicates a conflict and the algorithm returns \(\bot\) in the Hole-Infeasible rule. Otherwise, in the Hole-Feasible rule, the constructed specification maps this hole to the input positive examples \(\mathcal{E}^{+}\).

Example 5.2.: Consider the positive examples from Section 2 and the following sketch: \[\{\Box:\textsc{Name}\}\cdot\text{", "}\cdot\{\Box:\textsc{Country}\}\cdot\text{", "}\cdot\{\Box:\textsc{Year}\}_{1}\cdot\text{"-"}\cdot\{\Box:\textsc{Year}\}_{2}\] The over-approximation for this sketch is the following regex: \[\texttt{.*}\cdot\text{", "}\cdot\texttt{.*}\cdot\text{", "}\cdot\texttt{.*}\cdot\text{"-"}\cdot\texttt{.*}\] Using our decomposition technique, we infer the following positive examples for each hole:

\begin{tabular}{c c c c} \hline\hline \(\{\Box:\textsc{Name}\}\) & \(\{\Box:\textsc{Country}\}\) & \(\{\Box:\textsc{Year}\}_{1}\) & \(\{\Box:\textsc{Year}\}_{2}\) \\ \hline John Thomas Young Gilroy & Britain & 1898 & 1985 \\ Thomas Hudson & Britain & 1701 & 1779 \\ Thomas Couture & France & 1815 & 1879 \\ \hline\hline \end{tabular}

We conclude this subsection by stating the theorem about the soundness of decomposition:

Theorem 1.: _Consider the synthesis problem with positive examples \(\mathcal{E}^{+}\). Let \(S\) be a candidate sketch and let \(r\) be a completion of \(S\) mapping each hole \(h_{i}\) in \(S\) to a semantic regex \(r_{i}\). If \(r\) satisfies all positive examples \(\mathcal{E}^{+}\), then there exists some \(\Psi\in\textsc{GetNextDecomp}(S,\mathcal{E}^{+})\) such that every \(r_{i}\) satisfies \(\Psi[h_{i}]\)._

### Compositional Type-Directed Synthesis

Next, we explain our compositional learning technique for synthesizing a semantic regex for a given sketch and decomposed specification. This algorithm, called SynthesizeFromDecomp, is shown in Figure 12; a simplified rendering of its recursive structure is sketched below.
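The following Python sketch is our own simplified rendering of that recursive structure, not the actual Figure 12 pseudocode; the hole synthesizer and sketch instantiation are passed in as assumed helpers (`completions_for` stands in for GetNextCompletion, and `instantiate` plugs an assignment back into the sketch).

```python
from typing import Callable, Dict, Iterable, List, Optional

def synthesize_from_decomp(
    spec: Dict[str, List[str]],
    negatives: List[str],
    completions_for: Callable[[str, List[str]], Iterable[Callable[[str], bool]]],
    instantiate: Callable[[Dict[str, Callable[[str], bool]]], Callable[[str], bool]],
) -> Optional[Dict[str, Callable[[str], bool]]]:
    """Simplified recursive search: fill one hole, recurse on the rest, and keep
    the first assignment whose instantiation rejects every negative example.
    (The real algorithm also revisits completions of the other holes on failure.)"""
    holes = list(spec)
    if not holes:
        return {}                                   # nothing left to fill
    h, rest = holes[0], {k: spec[k] for k in holes[1:]}
    for r in completions_for(h, spec[h]):           # type/example-consistent candidates
        m = synthesize_from_decomp(rest, negatives, completions_for, instantiate)
        if m is None:
            continue                                # remaining holes cannot be filled
        assignment = {h: r, **m}
        matcher = instantiate(assignment)           # plug completions into the sketch
        if not any(matcher(e) for e in negatives):  # global negative-example check
            return assignment
    return None                                     # no consistent completion found
```

Because completions for each hole are enumerated lazily, a failure on the negative examples simply resumes the search with the next candidate rather than restarting synthesis from scratch.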
Given a sketch \(S\), specification \(\Psi\), and negative examples \(\mathcal{E}^{-}\), the recursive SynthesizeFromDecomp procedure lazily generates possible sketch completions until it finds a regex that is globally consistent with the top-level specification. To perform synthesis for a given specification, the algorithm starts by choosing one of the holes \(h\) in the sketch (line 2) and synthesizes a completion \(r\) for that hole _only_ by calling GetNextCompletion at line 4. Then, the loop in lines 6-10 tries to find a completion for the remaining holes. In particular, in each iteration of the nested loop, the algorithm recursively calls SynthesizeFromDecomp to fill all remaining holes, assuming that \(h\) is replaced by \(r\). If synthesis fails (i.e., \(M\equiv\bot\) at line 8), the algorithm moves on to a different completion of \(h\). Otherwise, it checks if the current solution (which is obtained by instantiating \(S\) with \(M\cup[h\mapsto r]\)) rejects all negative examples, and if so, returns this solution.

Figure 12. Sketch completion algorithm for a given decomposition.

Figure 13. Hole synthesis algorithm. OverApprox follows the procedure as described in Regel (Chen et al., 2020).

The final missing piece for our sketch instantiation algorithm is the GetNextCompletion procedure shown in Figure 13, which performs synthesis for a _single_ hole. At a high level, this algorithm performs top-down enumerative search and uses a combination of types (Frankle et al., 2016; Osera and Zdancewic, 2015; Polikarpova et al., 2016) and observational equivalence (Morris, 1968) to prune the search space. As standard in top-down search, this algorithm utilizes the notion of _partial programs_ (Feng et al., 2018, 2017), which can be thought of as an abstract-syntax tree where some of the nodes are labeled with non-terminals to be expanded later. In more detail, the hole synthesis algorithm utilizes a worklist \(\mathcal{W}\), which is initialized to a partial program \(P_{0}\) with a single node (lines 2-3). Each node in the partial program is annotated with a grammar symbol (in this case, the start symbol \(s_{\mathcal{G}}\)) and its corresponding type (in this case, \(\tau_{h}\)). Then, in each iteration of the loop in lines 4-16, the algorithm dequeues one of the partial programs \(P\) in the worklist and processes it. If the partial program is complete (meaning that all nodes are labeled with terminal symbols), the algorithm performs the following checks:

1. **Type consistency:** If the type of \(P\) is _not_ \(\tau_{h}\), \(P\) clearly does not have the intended type and is rejected (line 7).
2. **Consistency with examples:** If \(P\) does not satisfy all positive examples \(\mathcal{E}^{+}\), it does not satisfy the specification and is also rejected at line 7.
3. **Observational equivalence:** If \(P\) rejects the _exact same set_ of strings as a program the algorithm has previously encountered, it is redundant to consider \(P\), as it is observationally equivalent to another solution \(P^{\prime}\) that has been rejected. Hence, the algorithm only yields \(P\) as a solution if it is observationally different from a previously encountered solution (lines 8-9).

On the other hand, if the current partial program \(P\) is _incomplete_ (meaning it has at least one "open" node labeled with a non-terminal), the algorithm chooses one of the open nodes and expands it using the available productions in the grammar (line 11).
In particular, given an open node \(n\) labeled with a non-terminal \(N\), the Expand procedure considers each production of the form \(N\to\alpha\) and adds new nodes where each new node with a grammar symbol and its corresponding (inferred) type. However, because a resulting expansion \(P^{\prime}\) may not necessarily be feasible, the algorithm performs two additional checks before adding \(P^{\prime}\) to the worklist at line 16: * **Type-directed feasibility check:** For each complete subprogram \(P_{i}\) of \(P^{\prime}\), the algorithm checks if the actual type of \(P_{i}\) is a subtype of its annotated goal type (line 12). If this type feasibility check fails for _any_ node \(n\), then program \(P^{\prime}\) is pruned from the search space, and none of its expansions are considered. * **Feasibility check using over-approximation:** Additionally, the algorithm constructs an over-approximating regular expression \(r^{\star}\) that accepts every string that is accepted by any \(r\in\llbracket P^{\prime}\rrbracket\) using the same OverApprox procedure from Section 5.3. If this over-approximation \(r^{\star}\) fails to match one of the positive examples, \(P^{\prime}\) is infeasible and therefore pruned away at lines 14-15. Otherwise, \(P^{\prime}\) is added to the worklist, and the search process continues until a solution is found. Theorem 2 ().: _Let \(R\) be the set of solutions returned by GetNextCompletion\((\tau_{h},\mathcal{E}^{+},\mathcal{E}_{\star})\). We have:_ * **Soundness:** _Every_ \(r\in R\) _is a solution to the hole synthesis problem, meaning (1)_ \(r\) _has type_ \(\tau_{h}\) _and (2) satisfies examples_ \(\mathcal{E}^{+}\)__ * **Completeness:** _If_ \(r\notin R\)_, then_ \(r\) _is either not a solution or is observationally equivalent to some_ \(r^{\prime}\in R\) _for strings_ \(\mathcal{E}_{\star}\)_._ ### Sketch Generation In the final part of this section, we describe our technique for generating typed sketches from examples. In particular, we employ few-shot prompting and build our sketch generator on top of GPT-3 (Brown et al., 2020). #### 5.5.1. Background on Few-Shot Prompting with LLMs In recent years, large language models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022) have made major breakthroughs in natural language understanding. These are models \(P(\mathbf{x})=P(x_{1})P(x_{2}\mid x_{1})\dots P(x_{n}\mid x_{1},\dots,x_{n-1})\) modeling a sequence as a product of distributions over each next word via the chain rule. By showing LLMs a few examples of a task to perform and then giving them a test example, LLMs can perform that task on the test example via _in-context learning_, without retraining or fine-tuning the model's parameters. The user only needs to provide a few examples and invoke the model's next-word prediction capabilities (repeatedly taking the most likely next token under the model). To give a concrete example, consider the task of transforming numbers in strings to texts, a task that GPT-3 has not specifically been trained on. Figure 14 shows a typical usage scenario of GPT-3 when performing such a task: here, line 1 provides the task description, lines 2-4 provides a few examples, line 5 is the query, and the output of the model is highlighted in red. #### 5.5.2. 
Querying LLM for Sketches. To obtain typed sketches, our approach prompts GPT-3 with suitable queries.3 As shown in Figure 15, the GetNextSketch procedure takes as input positive examples \(\mathcal{E}^{+}\) and an optional infeasible sketch \(S_{f}\), which is used in later iterations of the algorithm for sketch repair. Initially, the algorithm starts by querying GPT-3 for a sketch using the GetSketch procedure, as illustrated in Figure 16. The prompt to GPT-3 contains a task description, a manually-curated set of representative examples (in the form of a query and its desired output), and, finally, the prompt itself (lines 12-17 in Figure 16). The GetSketch procedure then attempts to parse the model's output into a typed sketch; however, there is no guarantee that the GPT-3 output will belong to our sketch grammar. Hence, if parsing fails, the GetSketch procedure keeps prompting GPT-3 for a new sketch until the model's output is parseable.4

Footnote 4: Past work has explored few-shot semantic parsing from natural language into DSLs using structured natural language as an intermediate representation [20]; however, more recent work has shown that LLMs can do well at this task without such guidance, even in the presence of adversarial perturbations [17].

Figure 14. Sample input for a few-shot string transformation to GPT-3; the model's output is highlighted in red.

Figure 15. Sketch generation procedure. GetSketch(\(\mathcal{E}^{+}\)) prompts the neural model for a new sketch, as illustrated in Figure 16.

Figure 16. GPT-3 input structure for generating a sketch for the semantic string matching task.

In future invocations of GetNextSketch, this procedure may be invoked with an infeasible sketch \(S_{f}\) that needs to be repaired. Lines 8-11 of Figure 15 deal with this sketch repair aspect of the algorithm. Specifically, given the infeasible sketch \(S_{f}\) and positive examples \(\mathcal{E}^{+}\), LocateError produces a _repair specification_, which consists of a so-called _meta-sketch_ \(\mathcal{S}\) and a specification \(\Psi\). A meta-sketch is like a sketch except that it contains _untyped_ "meta-holes" that need to be instantiated with a _typed sketch_. The specification \(\Psi\) maps each meta-hole in \(\mathcal{S}\) to a set of positive examples. Such a meta-sketch is instantiated into a regular sketch by querying GPT-3 via the GetSketch procedure for each of the meta-holes \(h_{i}\) in \(\mathcal{S}\) and its corresponding examples \(\Psi[h_{i}]\).

Finally, we turn our attention to the LocateError procedure, which is presented as inference rules in Figure 17. These rules derive judgments of the following shape: \[\mathcal{E}^{+}\vdash S\hookrightarrow\mathcal{S},\Psi\] meaning that \((\mathcal{S},\Psi)\) is a repair specification for infeasible sketch \(S\) and examples \(\mathcal{E}^{+}\). The fault localization rules in Figure 17 largely resemble GetNextDecomp for performing decomposition in that they use over-approximations. We explain these rules in more detail below.

**Sketch-Single-Fail**. This rule applies to a sketch \(S\) of the form \(f(S_{1},\ldots,S_{n})\) where (1) there is at least one positive example that is not matched by the over-approximation of \(S\) (premise on the first line) and (2) where only one of the sub-sketches \(S_{i}\) is faulty. To determine whether condition (2) holds, this rule replaces the entire sub-sketch \(S_{i}\) with a single hole and then checks whether the over-approximation of the resulting sketch can accept all positive examples.
If so, it recursively performs fault localization on \(S_{i}\) and returns a meta-sketch by replacing \(S_{i}\) in \(S\) with its corresponding meta-sketch \(\mathcal{S}_{i}\).

**Sketch-Multi-Fail.** This rule is similar to the first one except that it deals with the scenario where there are multiple faulty sub-sketches. That is, even after we replace any individual sub-sketch with a hole, there is _still_ at least one positive example that is not matched by the over-approximation. In this case, we generate a meta-sketch that consists of a single hole.

Figure 17. Procedure for LocateError.

**Sketch-Nested-Fail**. This rule also applies to a sketch \(S\) of the form \(f(S_{1},\ldots,S_{n})\) but considers the case where the over-approximation of \(S\) matches all the positive examples. However, as the sketch is infeasible, there must nonetheless be at least one problem inside the nested sub-sketches. Hence, our fault localization technique recursively localizes the error in the sub-sketches and returns the merged result.

**Hole-Repair.** This rule applies to the case where the type of a hole is incorrect in that its annotated type is inconsistent with at least one of the positive examples. In this case, our algorithm generates a meta-sketch by erasing the type annotation of this hole.

**Hole-Correct, Concrete-Correct.** Since these rules apply to base cases without any problems, fault localization returns the original sketch.

**Concrete-Fail.** This rule applies to the case where a concrete regex does not match at least one of the examples. In this case, we simply replace the concrete regex with a meta-hole.

**Example 5.3**.: Consider the positive examples from Section 2 and the following sketch: \[\{\square:\text{Name}\}\cdot\text{", "}\cdot\{\square:\text{Country}\}\cdot\text{", "}\cdot\{\square:\text{Year}\}\] Suppose the synthesizer concludes that this sketch is infeasible, since the string "1898-1985" cannot be identified as a year, and sends it back to the sketch generator as a failed sketch. To repair this sketch, we follow the Sketch-Nested-Fail rule to recursively traverse through each part of the sketch until we locate the faulty hole, \(\{\square:\text{Year}\}\). We then gather the positive examples that should be matched by this hole, which are "1898-1985", "1701-1779" and "1815-1879", and replace the faulty typed hole with a new hole with no type (rule Hole-Repair). With the generated repair specification, we query GPT-3 to generate a new sketch for the faulty hole, and it returns a new sketch \(\{\square:\text{Year}\}\cdot\text{"-"}\cdot\{\square:\text{Year}\}\).

## 6. Implementation

We have implemented our synthesis algorithm in a new tool called Smore written in Python. In this section, we provide implementation details about different components of Smore.

**Implementation of the semantic matching construct**. Our tool heavily relies on the use of GPT-3 to identify the semantic meanings of strings.5 Our few-shot prompt (following the discussion in Section 5.5) to accomplish this is shown in Figure 18. The input begins with a task description that asks the model to identify _all possible substrings_ of a particular semantic type, and we instruct the model to return "none" if it does not find any. Following the task description, we provide 8 examples,6 each of which shows the structure of a query: the first line provides the string of interest, and the second line specifies the semantic type of interest.
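For reference, a query of this shape could be issued programmatically as follows. This is a hedged sketch using the legacy OpenAI completions API with the hyperparameters reported in Section 6 (model text-davinci-003, temperature 0, maximum length 256); the prompt text below is an abbreviated stand-in for the actual prompt of Figure 18, and the comma-separated answer format is our own assumption.

```python
import openai  # assumes the OpenAI Python client (pre-1.0 API) and an API key are configured

FEW_SHOT_PREFIX = """Identify all substrings of the given semantic type. Answer "none" if there are no such substrings.

String: Thomas Couture, France, 1815-1879
Type: Country
Answer: France
"""  # abbreviated: the real prompt contains 8 such in-context examples (Figure 18)

def substrings_of_type(s: str, semantic_type: str) -> list:
    """Ask the model for all substrings of `s` that have the given semantic type."""
    prompt = f"{FEW_SHOT_PREFIX}\nString: {s}\nType: {semantic_type}\nAnswer:"
    response = openai.Completion.create(
        model="text-davinci-003",   # model reported in Section 6
        prompt=prompt,
        temperature=0,              # greedy decoding
        max_tokens=256,
        stop=["\nString:"],         # stop before a new query begins
    )
    text = response["choices"][0]["text"].strip()
    # Output format (comma-separated substrings) is an assumption of this sketch.
    return [] if text.lower() == "none" else [t.strip() for t in text.split(",")]

# e.g., substrings_of_type("Thomas Hudson, Britain, 1701-1779", "Year")
```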
Furthermore, we provide sample outputs for each example in the expected output format. Footnote 5: We use the text-davinci-003 model. Footnote 6: We provide all the in-context examples we use in the supplementary material. **Implementation of checking observational equivalence**. In the GetNextCompletion procedure (Figure 13), we use the set \(\mathcal{E}_{\star}\) to prune out programs that are observationally equivalent to previously synthesized programs. In Figure 12, \(\mathcal{E}_{\star}\) corresponds to all substrings of the negative examples \(\mathcal{E}^{-}\), but this set might contain too many strings in practice, leading to considerable overhead in the observational equivalence check. To address this issue, we only obtain the substring of the negative examples that are relevant to the specific hole under consideration. This strategy provides the full benefits of checking observational equivalence but significantly reduces overhead in some cases. **Ranking heuristic**. Because there are often multiple semantic regexes that are consistent with the provided examples, it is important to use a ranking heuristic to choose between possible solutions. To this end, our method prioritizes sketches that maximize the number of type annotations, and it prefers decompositions that minimize the number of holes that are assigned empty strings as positive examples. Finally, when choosing between multiple regexes for a given hole, our algorithm prefers those with smaller ASTs, first ranked by height and then by the number of nodes. **Hyperparameters**. The Smore system has a hyperparameter that controls the maximum depth of the synthesized programs for each hole, which is set to 4 by default. For GPT-3 hyperparameters, we set the temperature to 0 (corresponding to greedy inference) and maximum length to 256.7 Footnote 7: We also define the suitable stop sequences for each prompt to ensure GPT-3 doesn’t have to generate 256 tokens. ## 7. Evaluation In this section, we describe the results of our experimental evaluation, which is designed to answer the following research questions: * **RQ1.** How does our proposed data extraction approach compare against existing approaches? * **RQ2.** How does our synthesis algorithm compare to relevant baselines? * **RQ3.** How important are the different components of our synthesis algorithm for successfully solving these benchmarks? * **RQ4.** Do semantic regexes help humans more effectively solve data extraction tasks compared to standard regexes? **Benchmarks**. To answer these questions, we evaluate Smore on 50 data extraction tasks involving 10 different datasets, which cover a wide range of domains like sales, science, and art. These datasets contain many different string formats and involve a large variety of entities. We consider an average of 5 data extraction tasks for each dataset and manually label a subset of the strings in each dataset as positive or negative for each task. Specifically, we use 6 of the manually labeled examples for training and the rest for testing. Table 1 describes some example tasks for each domain. **Experimental Setup**. All of our experiments are conducted on a machine with an Apple M2 Max CPU and 32GB of physical memory, running the macOS 13.2.1 operating system. We run GPT-3 through the OpenAI API. For each task, we set the timeout to 60 seconds (excluding the time to query OpenAI). Figure 18. GPT-3 input structure for identifying substring of specific semantics. 
[New string] is a placeholder for the string we are querying about, and [Semantic Type] is the semantics we are asking the model to identify. ### Comparison with Other Automated Data Extraction Techniques There are several techniques that can be used to automate data extraction tasks. To answer our first research question, we compare Smore against the following alternative data extraction approaches: * ChatGPT-Regex-Synth(OpenAI, 2022): One way to automate data extraction is to synthesize standard regexes from positive and negative examples. To evaluate this approach, we use ChatGPT to synthesize standard regexes. If the synthesized regex rejects the positive examples or accepts the negative examples, we ask ChatGPT to synthesize a different regex for up to ten iterations.8 Footnote 8: We set the temperature to 0.7 for sampling. * ChatGPT-Exec(OpenAI, 2022): Another way to automate data extraction is to directly use ChatGPT. To evaluate this approach, we provide ChatGPT with positive and negative examples and then query it about strings in the test set. Hence, this approach does not require synthesizing a program; instead, it invokes ChatGPT on every test example. * FlashGPT (Verbruggen et al., 2021): Recent work has proposed an extension of FlashFill, called FlashGPT, that can query GPT-3 in addition to performing syntactic transformations and pattern matching. For our third baseline, we also compare against FlashGPT by giving it positive and negative examples and then using it to synthesize a program in their DSL. \begin{table} \begin{tabular}{c c} \hline \hline **Domain** & **Task Description** \\ \hline Business & Restaurants that are created before 2000 or after 2010 \\ Businesses located in California \\ \hline \multirow{2}{*}{Sales} & Products with Intel CPU that have more than 8GB memory \\ & TVs of size less than 50’ or resolution less than 1080P \\ \hline \multirow{2}{*}{Retail} & Website titles that start with product names and are followed by a url \\ & Product names that contain measurement information \\ \hline \multirow{2}{*}{Marketing} & Software engineering jobs that have specified working locations \\ & Business names with at least 3 words \\ \hline \multirow{2}{*}{Account} & Email addresses that have a country domain and where the username ends with number \\ & Software versions with at least 10 minor updates and more than one patch \\ \hline \multirow{2}{*}{Stock} & Company names with 3-letter abbreviation \\ & Company names with ticker symbols containing special characters \\ \hline \multirow{2}{*}{Science} & Location description with format State; County; More details \\ & Locations that are less than 11 miles from a road \\ \hline \multirow{2}{*}{Server} & Apache logs with file id \(>\)=151000 or in the format of a zip file with id \(\epsilon\)= 50 \\ & Photo files with numbers in their name \\ \hline \multirow{2}{*}{Museum} & Purchase made by using three different funds \\ & Artwork with two artists born in the 14th century \\ \hline \multirow{2}{*}{Exhibition} & Dimension of item between 10 and 50 inches \\ & Item that is associated with at least three categories \\ \hline \hline \end{tabular} \end{table} Table 1. Description of the sample tasks used in the evaluation. 
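Throughout the evaluation, precision, recall, and \(F_{1}\) are computed on the held-out test strings in the standard way; for concreteness, this is just the usual definition and not tool-specific code:

```python
def prf1(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 from true/false positives and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# e.g., a regex that accepts 8 of 10 true positives and 1 negative string:
print(prf1(tp=8, fp=1, fn=2))  # approximately (0.889, 0.8, 0.842)
```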
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline **Tool** & **\# Finished** & **P** & **R** & **F\({}_{1}\)** & **Synth Time (s)** & **Matching Engine** \\ \hline ChatGPT-Regex-Synth & 23/50 & 0.60 & 0.40 & 0.44 & - & Regex \\ ChatGPT-Exec & - & 0.60 & 0.77 & 0.65 & - & ChatGPT \\ FlashGPT & 15/50 & 0.45 & 0.83 & 0.58 & 3.16 & FlashGPT DSL \\ \hline \multirow{2}{*}{Smore} & 48/50 & 0.94 & 0.84 & 0.87 & 4.96 & Semantic Regex \\ \hline \hline \end{tabular} \end{table} Table 2. Evaluation results for Smore and data extraction baselines. P means precision and R means recall. _Main results_. Our main results are summarized in Table 2. We evaluate each tool in terms of precision, recall, and F1 score on the test set as well as synthesis time and number of benchmarks solved. The **P**, **R**, and \(\text{F}_{1}\) columns represent the precision, recall, and \(F_{1}\) score on the test set. Smore achieves the highest precision, recall, and \(F_{1}\) score among all the alternative data extraction approaches. In particular, Smore outperforms the second best approach, namely ChatGPT-Exec, by 22% in terms of \(F_{1}\) score. While ChatGPT-Exec and FlashGPT have fairly high recall, they have low precision. ChatGPT-Regex-Synth has similar precision to ChatGPT-Exec but has very low recall on the test set. Finally, FlashGPT and Smore are close in terms of recall, but Smore significantly outperforms FlashGPT in terms of precision (for benchmarks that both tools can synthesize within the time limit). Next, the column labeled "# Finished" in Table 2 shows the number of tasks that each tool is able to solve. For Smore and FlashGPT, solving a benchmark means they were able to find a program consistent with the positive and negative examples within the 60-second time limit. Solving a benchmark for ChatGPT-Regex-Synth means finding a regex consistent with the examples within 10 iterations.9 Since ChatGPT-Exec does not perform synthesis, this column is not applicable to it. Among all the synthesis-based approaches, Smore terminates for 48 out of 50 tasks, which is around twice as many as ChatGPT-Regex-Synth and around 3 times as many as FlashGPT. Footnote 9: Recall we keep querying for a different regex for up to 10 times if the synthesized regex does not match the examples. Finally, the column labeled "Synth time" shows the synthesis time in seconds for FlashGPT and Smore. Since we exclude the time to query OpenAI from synthesis time (this only takes at most a few seconds), this column is not applicable to ChatGPT-Regex-Synth. As we can see from this column, the synthesis time of Smore is around 5 seconds, so it takes slightly longer than FlashGPT (which takes around 3 seconds) for the 14 tasks that both of the tools can solve. However, Smore is able to synthesize a program for three times as many tasks as FlashGPT. _Failure Analysis for the baselines_. To provide some insight into the shortcomings of existing approaches, we briefly discuss the failure cases of the baselines. As expected, ChatGPT-Regex-Synth struggles with tasks that are hard to represent as regular expressions, such as matching all businesses that are in California. Although FlashGPT combines neural and symbolic constructs, its neural component processes positive and negative examples rather than semantic types. In other words, the neural constructs directly query GPT with positive and negative examples rather than querying whether a string matches a certain type. 
As a result, it frequently generates trivial programs that directly invoke GPT with the training examples as input. Hence, it ultimately ends up sharing the same limitations as ChatGPT-Exec.

_Failure analysis for Smore._ We examined instances where Smore is unable to complete the synthesis task within the allotted time and found that it encounters difficulties in tasks that demand a higher level of granularity from semantic pattern matching. For example, consider a task that involves finding restaurant names containing a person's name. For the positive example "Alice Chinese Bistro", the entity matcher may fail to recognize "Alice" as a person's name, causing Smore to fail to synthesize a program consistent with all examples.

### Comparison with Other Semantic Regex Synthesis Techniques

To answer our second research question, we compare the neural-guided synthesis algorithm of Smore against the following two purely-neural or purely-symbolic baselines:

* ChatGPT-Synth (OpenAI, 2022): To evaluate whether a purely neural synthesizer can solve these benchmarks, we use ChatGPT to create a synthesizer for semantic regexes. Specifically, our ChatGPT-Synth baseline queries ChatGPT to synthesize a _semantic regex_ that matches all positive examples and rejects all negative examples. If the generated semantic regex is inconsistent with the examples, we query it again for a different one. We repeat this process up to 10 times, as done with our ChatGPT-Regex-Synth baseline in the previous subsection.

### User Study

We conducted a user study to assess the efficacy of semantic regexes in aiding humans with data extraction tasks compared to standard regexes. We recruited 13 participants, consisting of 3 CS undergraduate students, 6 CS graduate students, and 4 professional software engineers who regularly use regexes in their work. We asked each participant to complete 4 data extraction tasks by writing a regex. The participants were given 5 minutes for each task and asked to write standard regexes for two randomly chosen tasks (out of the 4 total tasks) and semantic regexes for the other two. The four tasks used in the study are simplified versions of the benchmarks used in our evaluation -- we intentionally simplified the tasks so that they are doable within 5 minutes.

_Setup._ To conduct this user study, we developed a command-line interface for Smore. For each task, the interface initially displays the prompt for the task (including 3 positive and negative examples) and then asks the user to input their answer. The tool randomly determines whether the answer should be a standard or semantic regex and only accepts user answers in the correct format. Upon entering a regex, the interface evaluates it against the test set and informs the user of their regex's performance, allowing unlimited attempts to enter a new regex within the 5-minute time limit. The details of the user study protocol are provided in the supplementary material.

**Results**. We evaluate the quality of the regexes in terms of their \(F_{1}\) score on the test set. For each task, Table 20 presents \(F_{1}\) scores for (a) manually-written standard regexes ("Manual-Regex"), (b) manually-written _semantic_ regexes ("Manual-SemRegex"), and (c) semantic regexes generated automatically by Smore (the "Smore" column). Since some of the manually-written regexes have a precision or recall score of 0, the \(F_{1}\) score is undefined.
In Table 20, we only show average \(F_{1}\) score across regexes for which the \(F_{1}\) score is defined. As we can see from Figure 20, manually-written _semantic_ regexes achieve a better overall \(F_{1}\) score (0.78) compared to standard regexes, for which the \(F_{1}\) score is 0.54. This result suggests that participants are more effective at performing these types of data extraction tasks using semantic regexes than with standard regexes. Another interesting aspect of Figure 20 is that the semantic regexes learned by Smore seem to be _even_ more effective than manually-written semantic regexes. In particular, for these four tasks, Smore learns regexes that achieve an overall \(F_{1}\) score of 0.92 compared to the \(F_{1}\) score (0.78) of manually-written semantic regexes. This result suggests that our proposed learning technique has the potential to improve productivity even for expert users who are generally comfortable with writing regexes. ## 8. Related Work In this section, we survey related work on program synthesis and data extraction. **Learning regexes from examples**. There is a large body of prior research on learning regular expressions from positive and negative examples (Alquez and Sanfeliu, 1994; Angluin, 1987; Firoiu et al., 1998; Gold, 1978; Parekh and Honavar, 1996, 2001; Rivest and Schapire, 1989). Our work builds on existing works that prune partial programs by evaluating the examples with respect to over- and under-approximations (Chen et al., 2020; Lee et al., 2016; Ye et al., 2021). In this work, we not only use the over-approximations for pruning but also for decomposing the synthesis tasks. **Information Extraction from Semi-Structured Data**. Past work has investigated similar extraction tasks, particularly for extracting lists from web sources (Chen et al., 2021; Lin et al., 2020; Pasupat and Liang, 2014; Raza and Gulwani, 2020), answering questions based on tables (Pasupat and Liang, 2015), and general information extraction from tabular data (Le and Gulwani, 2014; Wu et al., 2018). Recent work has specifically employed LLMs to extract information from tables (Cheng et al., 2023) or raw text (Dunn et al., 2022). Despite the prevalence of neural-based approaches that emphasize data semantics, our work uniquely targets the integration of both semantic and symbolic aspects of the data structure. **Neurosymbolic DSLs**. Recent work has considered so-called _neurosymbolic DSLs_ with both standard language constructs and neural components (Andreas et al., 2016, 2016; Bastani et al., 2022; Chen et al., 2021; Cheng et al., 2023; Gaunt et al., 2017; Huang et al., 2020; Jiang et al., 2021; Shah et al., 2020; Valkov et al., 2018; Verbruggen et al., 2021). Among these, most relevant to our approach are FlashGPT (Verbruggen et al., 2021) and Binder (Cheng et al., 2023). FlashGPT augments the DSL used in Flashfill (Gulwani, 2011) with semantic transformation operators that can be used to reason about the semantic properties of the input. However, FlashGPT relies on in-context examples and does not utilize explicit semantic types, which hinders its ability to reason about combined semantic and symbolic properties. On the other hand, Binder(Cheng et al., 2023) proposes a new program structure that extends programming languages, such as SQL, with a function that allows querying large language models (in particular, Codex). However, the constructs proposed in Binder focus mainly on SQL-related tasks and do not transfer well to the string-matching domain. 
**Program Synthesis Using LLMs**. The growing interest in leveraging LLMs for program synthesis (Austin et al., 2021; Chen et al., 2021; Cheng et al., 2023; Nijkamp et al., 2023; Zhou et al., 2023) stems from general-purpose models like ChatGPT and Codex demonstrating code generation capabilities from various specifications, including natural language and input-output examples. However, these models often generate code that violates syntactic and semantic rules due to their limited understanding of program syntax and semantics. To address this, several approaches (Jain et al., 2022; Poesia et al., 2022; Rahmani et al., 2021) integrate LLMs with symbolic methods like program analysis to improve code quality. In our work, we use LLMs to generate sketches and introduce a sketch repair technique to handle cases where the LLM fails to generate accurate sketches. **Compositional program synthesis**. Various approaches have been proposed for compositional program synthesis (Bansal et al., 2023; Feser et al., 2015; Huang et al., 2020; Polozov and Gulwani, 2015). Among these works, both \(\lambda^{2}\)(Feser et al., 2015) and FlashMeta (Polozov and Gulwani, 2015) perform compositional PBE by inferring input-output examples for sub-programs using the inverse semantics. In another example, Raza et al. (Raza et al., 2015) rely on the natural language description to decompose the synthesis problems into smaller sub-problems. Furthermore, Zhang et al. (Zhang et al., 2021) decompose the synthesis task into simpler sub-problems in the domain of UDF-to-SQL translation using a dataflow graph. Our work differs from prior research by presenting a new decomposition strategy on a typed sketch in the context of synthesizing string-matching programs. While our decomposition approach helps reject incorrect programs using inferred positive examples, the full result must still be tested against the negative examples to ensure correctness. ## 9. Conclusion We have presented Smore, a new synthesis-powered system for data extraction. The key idea behind Smore is the concept of _semantic regexes_, which augments the syntactic pattern matching capabilities of regexes with a semantic pattern matching construct of the form \(\{v:\tau\mid\phi\}\) which matches strings that have entity type \(\tau\) and that satisfy logical predicate \(\phi\) when interpreted as an instance of \(\tau\). As shown in our user study from Section 7.4, semantic regexes allow users to more easily perform data extraction tasks that are hard to do using standard regular expressions. In addition to proposing semantic regexes, we have also described a learning algorithm that can synthesize semantic regexes from examples. Our synthesis algorithm is neural-guided and uses a LLM to generate a _typed sketch_ where unknown parts of the regex have useful type annotations that are used to guide the search. Our synthesis algorithm is compositional and uses type-directed reasoning to find a completion of each hole in the sketch. Our evaluation shows that our proposed approach outperforms alternative data extraction techniques in terms of precision, recall, and \(F_{1}\) score. Our evaluation also shows the advantages of combining neural-guided sketch generation with type-directed compositional synthesis in terms of synthesis time. ###### Acknowledgements. This material is based upon work supported by the National Science Foundation under Grant No. nmnnnn and Grant No. mmmmmmm. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.